Virtual Worlds for the Perception and Control of Self-Driving Vehicles


Virtual Worlds for the Perception and Control of Self-Driving Vehicles. Dr. Antonio M. López, antonio@cvc.uab.es

Index: Context | SYNTHIA: CVPR 16 | SYNTHIA: Reloaded | SYNTHIA: Evolutions | CARLA | Conclusions


Our Mission as the CVC/UAB group: forming students (undergraduate, master and PhD) in the fields of Computer Vision, Machine Learning, and Artificial Intelligence for autonomous systems, in particular cars; basic research, producing high-impact papers in top-level conferences and Q1 journals; technological transfer and innovation, developing prototypes, demonstrators and products jointly with industry; and dissemination, making an effort to bring our research and its applications to the general public.


Research: ML for Vision. "I'm bored, let's label data for fun!"


Index: Context | SYNTHIA: CVPR 16 | SYNTHIA: Reloaded | SYNTHIA: Evolutions | CARLA | Conclusions


Semantic segmentation results. "The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes", G. Ros, L. Sellart, J. Materzynska, D. Vázquez, A.M. López, CVPR 2016.

Data publicly released at www.synthia-dataset.net: an image generator to acquire thousands of images with several kinds of ground truth. RGB plus per-pixel depth, semantic class (CamVid classes) and instance ID. We simulated different weather and illumination conditions, as well as the four seasons, and a camera setting covering 360º. More than 300,000 images with their ground truth are available.
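
As an illustration of how such paired ground truth is typically consumed, here is a minimal loading sketch in Python; the directory names, file layout and label encoding are hypothetical and do not necessarily match the released SYNTHIA package.

    # Minimal sketch: pairing an RGB frame with its per-pixel semantic labels.
    # NOTE: the folder names ("RGB", "GT_LABELS") and file naming scheme are
    # assumptions for illustration, not the official SYNTHIA layout.
    from pathlib import Path
    import numpy as np
    from PIL import Image

    def load_frame(root: str, frame_id: str):
        """Return an (H, W, 3) RGB array and an (H, W) integer class-id map."""
        rgb = np.array(Image.open(Path(root) / "RGB" / f"{frame_id}.png"))
        labels = np.array(Image.open(Path(root) / "GT_LABELS" / f"{frame_id}.png"))
        return rgb, labels

    if __name__ == "__main__":
        rgb, labels = load_frame("/data/synthia_seq01", "000123")
        print(rgb.shape, labels.shape, np.unique(labels)[:10])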

DPM to assess photo-realism: vehicle detection on SYNTHIA and GTA-V.

Back to DPM to assess photo-realism: vehicle detection. "From Virtual to Real World Visual Perception using Domain Adaptation: The DPM as Example", A.M. López, J. Xu, J.L. Gómez, D. Vázquez, G. Ros, arXiv:1612.09134; to appear in Domain Adaptation in Computer Vision Applications, Springer Series: Advances in Computer Vision and Pattern Recognition, edited by Gabriela Csurka.


Change Detection

Summary of the Research

Index: Context | SYNTHIA: CVPR 16 | SYNTHIA: Reloaded | SYNTHIA: Evolutions | CARLA | Conclusions


Best Industrial Paper at BMVC 17. Pipeline (figure): stereo images and semantic segmentation are combined into Semantic Stixels, using stereo together with horizon line and road slope estimation.

Original Stixels vs. Slanted Stixels (figure). New dataset: SYNTHIA-San Francisco, publicly available soon.

Index: Context | SYNTHIA: CVPR 16 | SYNTHIA: Reloaded | SYNTHIA: Evolutions | CARLA | Conclusions


Adding 360º LIDAR with Semantics


Image-to-Image Domain Adaptation: a case study on traffic sign recognition. Assumptions: 1) in the real world, some classes are missing; 2) in the virtual world, it is easy to generate samples of any class. We want a new real-world classifier that also covers the missing classes, with minimum annotation effort. Proposal: 1) train a deep network that transforms virtual images to look like real ones, using only the classes shared by both domains (the known classes) to train this network; 2) use the virtual world to generate many examples of the missing (new) classes; 3) transform these virtual samples with the learned network; 4) train the real-world classifier using the real-world samples of the known classes together with the transformed samples of the new classes.
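
As a rough illustration of steps 3 and 4, the sketch below assumes an already trained virtual-to-real translator (for instance a CycleGAN-style generator; the specific architecture is not stated here) and simply mixes real images of the known classes with translated virtual images of the new classes before training the classifier. All names (translator, real_known_ds, virtual_new_ds, etc.) are hypothetical.

    # Hypothetical sketch of steps 3-4: translate virtual samples of the new
    # classes and train on the mixture of real and translated data.
    import torch
    from torch import nn
    from torch.utils.data import ConcatDataset, DataLoader

    def build_training_set(real_known_ds, virtual_new_ds, translator):
        """Merge real samples (known classes) with translated virtual samples
        (new classes); 'translator' is an already trained virtual-to-real
        image translation network."""
        translator.eval()
        translated = []
        with torch.no_grad():
            for img, label in virtual_new_ds:          # virtual-world samples
                fake_real = translator(img.unsqueeze(0)).squeeze(0)
                translated.append((fake_real, label))
        return ConcatDataset([real_known_ds, translated])

    def train_classifier(classifier, dataset, epochs=10, lr=1e-3):
        loader = DataLoader(dataset, batch_size=64, shuffle=True)
        opt = torch.optim.Adam(classifier.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for imgs, labels in loader:
                opt.zero_grad()
                loss_fn(classifier(imgs), labels).backward()
                opt.step()
        return classifier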

Image-to-Image Domain Adaptation: case study on traffic sign recognition. 174 traffic sign types (those of the Tsinghua dataset); around 260,000 images generated per day. We force variability: lighting, background, viewpoint, etc. It is simple to add new traffic sign types.


Image-to-Image Domain Adaptation: case study on traffic sign recognition. Qualitative source (S) to target (T) translation examples for the known classes (figure).

Image-to-Image Domain Adaptation: case study on traffic sign recognition. Qualitative translation examples for the new classes (figure).

Physics-based Rendering in SYNTHIA

Video Analytics towards Vision Zero project, City of Bellevue, Washington, USA. "Video Analytics towards Vision Zero", Franz Loewenherz, Victor Bahl, Yinhai Wang, ITE Journal, Vol. 87, No. 3, March 2017. Key points: analytics at intersections; training of neural networks is required; crowdsourcing of volunteers for collecting ground-truth data; Unity and CVC/UAB rendered data.


Augmented Reality (example slides)

Index: Context | SYNTHIA: CVPR 16 | SYNTHIA: Reloaded | SYNTHIA: Evolutions | CARLA | Conclusions

More photo-realism and ground truth: new datasets, Vision Zero project.

More photo-realism and ground truth: new datasets, Vision Zero project. Car Learning to Act (CARLA): an interactive simulator with an open-source spirit.

Server: physics simulation, rendering, ground truth, privileged information. Client: data recording, environment settings control, vehicle control, AI.

Features: so far, two towns built from scratch; different weather/daytime conditions; sets of cameras attached to the vehicle; depth, semantic classes, 3D bounding boxes; speed, traffic infractions, collisions; synchronous/asynchronous modes; based on our own assets or freely available ones; we will open-source our C++ code; publicly available soon.
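
As a rough illustration of this client/server split, here is a minimal client sketch written against the CARLA Python API as it was later publicly released (0.9.x style); the API of the version presented here may differ, so treat all names and signatures as indicative only.

    # Minimal CARLA client sketch (0.9.x-style Python API); assumes a CARLA
    # server is already running on localhost:2000.
    import carla

    client = carla.Client("localhost", 2000)     # connect to the simulation server
    client.set_timeout(10.0)
    world = client.get_world()

    # Spawn a vehicle at one of the map's predefined spawn points.
    blueprints = world.get_blueprint_library()
    vehicle_bp = blueprints.filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)

    # Attach an RGB camera and record frames to disk (client-side data recording).
    camera_bp = blueprints.find("sensor.camera.rgb")
    camera = world.spawn_actor(camera_bp,
                               carla.Transform(carla.Location(x=1.5, z=2.4)),
                               attach_to=vehicle)
    camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))

    # Drive with a fixed control command (client-side vehicle control).
    vehicle.apply_control(carla.VehicleControl(throttle=0.5, steer=0.0))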


www.carla.org. We compare three approaches to driving: (1) a modular pipeline; (2) imitation learning; (3) reinforcement learning.

Conditional Imitation Learning
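
For context, conditional imitation learning conditions the driving policy on a high-level navigation command (e.g. follow lane, turn left, turn right), typically through one output branch per command, so that demonstrations at intersections are not ambiguous. The sketch below shows that branching idea in PyTorch; layer sizes and the number of commands are made up for illustration and are not the architecture used in the talk.

    # Sketch of a command-conditioned (branched) driving policy in PyTorch.
    # Architecture details (layer sizes, number of commands) are illustrative only.
    import torch
    from torch import nn

    class ConditionalImitationNet(nn.Module):
        def __init__(self, num_commands=4, feat_dim=512):
            super().__init__()
            self.perception = nn.Sequential(         # image -> feature vector
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            # One output branch per high-level command: (steer, throttle, brake).
            self.branches = nn.ModuleList(
                [nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 3))
                 for _ in range(num_commands)]
            )

        def forward(self, image, command):
            feats = self.perception(image)
            all_out = torch.stack([b(feats) for b in self.branches], dim=1)  # (B, C, 3)
            return all_out[torch.arange(image.size(0)), command]             # pick branch

    # Example: batch of 2 images, commands 0 ("follow lane") and 2 ("turn right").
    net = ConditionalImitationNet()
    controls = net(torch.randn(2, 3, 88, 200), torch.tensor([0, 2]))
    print(controls.shape)  # torch.Size([2, 3])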


Index: Context | SYNTHIA: CVPR 16 | SYNTHIA: Reloaded | SYNTHIA: Evolutions | CARLA | Conclusions

Conclusions. Simulation of perception and control methods is essential for designing, training and testing AI drivers; both datasets and interactive simulators, such as SYNTHIA and CARLA, are key. Virtual-to-real-world domain adaptation is an essential topic, both for pure perception and for sensorimotor models. SYNTHIA: generating more photorealistic datasets and eventually training deep networks to control the parameters of image generation (rendering and composition, augmented reality). CARLA: adding more sensors and content, as well as external interaction models.

Many thanks for attending! Many thanks to the many people at CVC/UAB who have been contributing to this work, especially to Jose A., Felipe, Marc, Fran, Xisco, Néstor, Fran2, Alberto, Iris, Mario, Ignazio, Juan, Daniel, Laura, Juan Carlos, Toni, David, etc., as well as to people from different companies I cannot name (confidentiality), and others I can name: Vladlen, Alexey, Germán, JoseD, Diana, Renaldas, Uwe, David, Dough, etc.