MVA2015 IAPR International Conference on Machine Vision Applications, May 18-22, 2015, Tokyo, Japan

PerSEE: a Central Sensors Fusion Electronic Control Unit for the development of perception-based ADAS

Dominique Gruyer (dominique.gruyer@ifsttar.fr), Rachid Belaroussi (rachid.belaroussi@gmail.com), Benoit Lusetti (benoit.lusetti@ifsttar.fr), Sebastien Glaser (sebastien.glaser@ifsttar.fr), Xuanpeng Li (xuanpeng.li@ifsttar.fr), Marc Revilloud (marc.revilloud@ifsttar.fr)

IFSTTAR - COSYS/LIVIC, French Institute of Science and Technology for Transport, Development and Networks, Components and Systems Department / Laboratory on Vehicle-Infrastructure-Driver Interactions, 77 rue des Chantiers, 78000 Versailles

Abstract

Automated vehicles and Advanced Driver Assistance Systems (ADAS) face a variety of complex situations that are dealt with by numerous sensors for the perception of the local driving area. Going forward, we see an increasing use of multiple, different sensor inputs, with radar, camera and inertial measurement being the most common sensor types. Each system has its own purpose and either displays information or performs an action without consideration for any other ADAS system, which does not make the best use of the systems. This paper presents an embedded real-time system that combines the attributes of obstacles, roadway and ego-vehicle features in order to build a collaborative local map. This embedded architecture is called PerSEE: a library of vision-based state-of-the-art algorithms was implemented and distributed over the processors of a main fusion electronic board and of smart-camera boards. The embedded hardware architecture of the full PerSEE platform is detailed, with block diagrams to illustrate the partition of the algorithms on the different processors and electronic boards. The communication interfaces as well as the development environment are described.

1 Introduction

Car safety ratings (NCAP, NHTSA) and consumer safety awareness drive the rapid growth of the Advanced Driver Assistance Systems (ADAS) market. From lane departure warning, lane change assist, collision avoidance, Adaptive Cruise Control (ACC) and advanced braking systems to park assist, backup camera and surround view, ADAS functions and semi-autonomous (copilot) applications require the integration of multiple sensors. Yet ADAS sensors are still treated as independent and separate systems [1]. Each system has its own purpose and either displays information or performs an action without consideration for any other ADAS system. Depending on the type of sensor technology (radar, camera, ultrasound, light detection and ranging), this allows certain functionality, but does not make the best use of the systems.

Figure 1. Onboard sensors and perception functionalities embedded in the PerSEE platform.

To build fully autonomous cars, it is necessary to combine the information and data from different sensors, exploiting their individual advantages and making up for the weaknesses each individual system always has. This is called sensor fusion. Instead of multiple, completely independent systems, the various ADAS systems feed their information into a central sensor fusion electronic control unit (ECU) that can combine all of the information to provide better situational awareness. Depending on the system partitioning chosen, either raw data (e.g., uncompressed video) or preprocessed data (e.g., object data from the radar or extracted primitives) is provided to the fusion ECU.
This has a big impact on the processing power demands of the fusion ECU, as well as on the type of communication interfaces to the subsystems (the individual ADAS modules supplying the sensor data). Most of the ADAS found in the literature focus on the perception algorithms: experiments are run offline on recorded data [2], or executables are developed on PCs (installed in the vehicle's trunk) with general-purpose processors. In the case of an embedded multi-modal system, several PCs are required: one for each type of vision sensor, one for inertial measurement and GPS, one for central fusion. For instance, in [3] a vehicle prototype developed for valet parking has a perception subsystem consisting primarily of a cluster of 6 PCs to handle a set of 12 sonars, a stereo-camera rig and 4 fish-eye cameras, a GPS receiver and an IMU. CPUs are indeed easy to use (tools, OS, middleware), but their efficiency is often inadequate and memory bandwidth is a strong bottleneck.

In this paper, we present our effort to develop a central sensor fusion ECU called PerSEE. Vision-based state-of-the-art algorithms previously developed for research purposes were implemented and distributed over the processors of a main fusion electronic board and of smart-camera boards. Four cameras (2 front, 2 rear) are used for stereo-based detection and tracking of obstacles, as well as for road marking detection and multi-lane detection and tracking. A radar is used for object detection and tracking. Ego-vehicle proprioceptive data (inertial measurements from the vehicle's CAN bus) are used to predict the vehicle's evolution area. Fig. 1 illustrates the sensor configuration. These pieces of information are combined by a multi-sensor fusion module based on belief theory to select the tracks that constitute an obstacle or the preceding vehicle, and to identify the ego-lane the vehicle is traveling on.

The embedded hardware architecture of the full PerSEE platform is detailed, with block diagrams to illustrate the distribution of the algorithms on the different processors and electronic boards. The communication interfaces as well as the development environment are described. This architecture of embedded sensors and algorithms produces a local perception map that supplies information about two important features of the driving environment: the roadway and the obstacles. The goal of this fusion ECU is to provide enough attributes about those two features to develop functional, reliable and robust applications such as ACC, vehicle following and lane keeping.

2 Perception functionalities embedded in the PerSEE platform

The important information needed by an automotive assistance system is the ego-vehicle state, obstacle detection and tracking, and road marking and lane detection.

2.1 Ego-vehicle attributes

In the research area of driver assistance applications, the localization module was primarily used for navigation problems; the positioning data of a vehicle were mainly informative. In the last few years, research has been conducted on designing driver assistance systems with more active control, and the next step of the car revolution is the self-driving car. The knowledge of the accurate position of a vehicle on the road has therefore become essential and critical. Knowing the vehicle localization is and will be useful in assistance systems that act as co-driving systems, prevent road departure, or warn when speed is excessive. In the building of local and extended maps, this localization information is crucial because it represents the absolute reference used to get a global overview, in the same frame, of the scene actors (obstacles, infrastructure, environment...).

Figure 2. Predictive evolution zone of the ego-vehicle computed from inertial measurements (odometer and gyrometer).

In the collaborative approach of the PerSEE platform, a prediction of the vehicle's evolution zone is computed from odometer and gyrometer data extracted from the CAN bus. Fig. 2 illustrates the definition of the evolution area, relative to the vehicle's yaw rate and speed. In the central fusion process, this zone gives a region of focus to the other ADAS functions.
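As an illustration of this prediction step, the following C++ sketch samples the centerline of such an evolution zone under a constant-speed, constant-yaw-rate motion model; the function and type names are ours and hypothetical, not the actual PerSEE interfaces.

#include <cmath>
#include <vector>

struct Point2d { double x, y; };

// Centerline of the predicted evolution area in the ego frame
// (x forward, y left), sampled every dt seconds up to the given horizon.
std::vector<Point2d> predictEvolutionCenterline(double speed,    // m/s, from odometer
                                                double yawRate,  // rad/s, from gyrometer
                                                double horizon,  // s
                                                double dt) {     // s
  std::vector<Point2d> path;
  double x = 0.0, y = 0.0, heading = 0.0;
  for (double t = dt; t <= horizon; t += dt) {
    x += speed * std::cos(heading) * dt;
    y += speed * std::sin(heading) * dt;
    heading += yawRate * dt;
    path.push_back({x, y});
  }
  return path;
}

Widening this centerline by half the vehicle width plus a safety margin yields an evolution zone like the one of Fig. 2, against which tracks can be intersected.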
2.2 Stereo-based obstacle detection

Obstacle information is necessary to comply with safety distances, for traffic analysis purposes, for assistance in overtaking maneuvers, and for the management of a platoon. For vehicle detection and 3D localization, an approach based on a stereo-vision rig was proposed in [4]: it uses the V-disparity, a cumulative space estimated from the disparity image, in which the road plane projects as a slanted line and vertical obstacles as vertical segments. A tracking system based on belief theory is also implemented. The tracking task is done in image space; it takes uncertainties into account, handles conflicts, and automatically deals with the appearance and disappearance of targets as well as with their spatial and temporal propagation.

2.3 Road marking and lane detection and tracking

For road marking and ego-lane identification, the algorithm proposed in [5] was implemented in the PerSEE platform on four cameras: 2 front and 2 rear view. The algorithm, illustrated by Fig. 3, is based on a three-step process: road primitive extraction, road marking detection and tracking, and lane shape estimation. The detection step uses a robust polynomial fitting weighted by the intensity of the extracted points.

Figure 3. General architecture of the road marking and lane detection and tracking.

These data provide information about the road surface attributes surrounding the ego-vehicle, namely a description of the traffic lanes based mainly on an optical sensor (camera). This information is useful to identify a dangerous obstacle (in the ego-vehicle lane) and risky situations (intersection area, traffic jam, accident location), and to provide data for path planning services (follow the current lane, change lane, reduce the speed before an intersection or a turn).
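To make the fitting step concrete, here is a minimal sketch of an intensity-weighted least-squares fit of a quadratic lane model y = c0 + c1 x + c2 x^2 using the Eigen library; it only illustrates the idea and leaves out the robustness mechanisms (outlier rejection, tracking) of [5], and all names are hypothetical.

#include <Eigen/Dense>
#include <cmath>
#include <vector>

struct MarkingPoint { double x, y, intensity; };  // extracted road primitive

// Fit y = c(0) + c(1)*x + c(2)*x^2, weighting each point by its intensity.
// Assumes pts.size() >= 3.
Eigen::Vector3d fitLaneQuadratic(const std::vector<MarkingPoint>& pts) {
  const int n = static_cast<int>(pts.size());
  Eigen::MatrixXd A(n, 3);
  Eigen::VectorXd b(n);
  for (int i = 0; i < n; ++i) {
    const double w = std::sqrt(pts[i].intensity);  // sqrt: weight acts on the squared residual
    A(i, 0) = w;
    A(i, 1) = w * pts[i].x;
    A(i, 2) = w * pts[i].x * pts[i].x;
    b(i) = w * pts[i].y;
  }
  return A.colPivHouseholderQr().solve(b);  // weighted least-squares solution
}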

3 PerSEE: local perception map by cooperative fusion

3.1 Cooperative fusion of vision algorithms

In order to track an obstacle or the preceding vehicle, a first track selection module is implemented, following 3 criteria: the track must be the closest and have a consistent confidence level, the track must be within the evolution area, and the track must be in the ego-lane. The last two criteria are evaluated by intersecting the region in question (ego-lane or evolution area) with the track's hull in order to generate a presence probability. The track's hull is estimated either from the covariance matrix or from the width and height attributes coming from the radar.

For the obstacle feature, two types of sensor are deployed: the stereo rig and the radar. Among the attributes provided by the radar are different probabilities (of being an obstacle, of existence), the spatial extent and the type of mobile. Road marking detection and multi-lane detection and tracking are performed using the front and rear cameras. The road marking primitive extraction algorithm is implemented in each camera SoC, featuring an ARM11. The extracted primitives are communicated over a Gigabit Ethernet bus. Lane polynomial fitting and tracking are implemented in one of the cores of the SEEKLOP i.MX6 board. A multi-source fusion by belief theory is implemented: its input and output are formatted to be homogeneous with a radar data frame. In our architecture, the track selection stage can take place before or after the multi-source fusion stage.

Figure 4. PerSEE platform: overview of the sensor fusion architecture.

The full architecture for local perception map building is illustrated by Fig. 4. Prototyping of the vision modules was done using simulated data first, and then offline with real data recorded by a vehicle equipped with onboard sensors.
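As a pointer to how the belief-theory fusion above can work, the sketch below applies Dempster's rule of combination to two sources (say radar and stereo vision) over the minimal frame {obstacle, not obstacle}; the actual PerSEE module operates on richer, radar-frame-compatible attributes, so this is only a toy version under our own naming.

#include <stdexcept>

struct Mass {          // basic belief assignment over {O, ~O}
  double obstacle;     // m({O})
  double notObstacle;  // m({~O})
  double unknown;      // m({O, ~O}), i.e. ignorance
};

// Dempster's rule of combination for two independent sources.
Mass combine(const Mass& a, const Mass& b) {
  // Conflict mass: the sources support contradictory singletons.
  const double k = a.obstacle * b.notObstacle + a.notObstacle * b.obstacle;
  if (k >= 1.0) throw std::runtime_error("total conflict");
  const double norm = 1.0 - k;
  return {
    (a.obstacle * b.obstacle + a.obstacle * b.unknown + a.unknown * b.obstacle) / norm,
    (a.notObstacle * b.notObstacle + a.notObstacle * b.unknown + a.unknown * b.notObstacle) / norm,
    (a.unknown * b.unknown) / norm
  };
}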
3.2 Hardware embedded architecture

The implementation of all the functions on a dedicated hardware architecture required an ensemble carefully optimized in terms of memory allocation management and memory data access. These optimizations mainly concerned the dense disparity map computation and the primitive extraction.

The hardware architecture is based on an i.MX6Q board with a quad-core ARM Cortex-A9 processor. With an 800 MHz clock frequency, it features 256 MB of internal RAM, an IEEE 1588 1 Gbps Ethernet bus, several USB ports and a FlexCAN CAN bus port. Added to that board, another board manages the ADAS CAN bus used to exchange data with the vehicle: inertial measurements (speed, acceleration, yaw rate) and information from the local perception map (marking attributes, selected tracks). A specific CAN bus called RADAR CAN is used to receive data from the radar, as the bandwidth of this exchange is large enough to impact the synchronization with the other sensors. Grabbed images and extracted primitives are transmitted over a Gigabit Ethernet LAN.

We selected this board for its multi-core capabilities, which enable the algorithms to be distributed over separate cores:

- Primitive extraction is deported to each camera, on an ARM11-based board.
- Core 1 is in charge of road marking and lane detection and tracking.
- Core 2 is dedicated to target detection by dense stereovision.
- Core 3 runs the multi-sensor fusion by belief theory.
- Core 4 handles track selection and filtering, and the predictive evolution area.

Figure 5. Embedded architecture of PerSEE: functional block diagram.

Fig. 5 illustrates this distribution in the hardware architecture.
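One plausible way to realize such a per-core partition on a Linux-based quad-core board is to pin each processing thread to its core, as in the sketch below; this is our illustration of the principle, not necessarily the mechanism used by the PerSEE runtime, and the task functions named in the usage comment are hypothetical.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for pthread_setaffinity_np on glibc
#endif
#include <pthread.h>
#include <sched.h>
#include <thread>

// Pin a running std::thread to one CPU core (Linux/glibc).
void pinToCore(std::thread& t, int core) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(core, &set);
  pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);
}

// Usage sketch:
//   std::thread lanes(runLaneDetectionAndTracking);  pinToCore(lanes, 0);
//   std::thread stereo(runDenseStereoDetection);     pinToCore(stereo, 1);
//   std::thread fusion(runBeliefFusion);             pinToCore(fusion, 2);
//   std::thread select(runTrackSelection);           pinToCore(select, 3);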

4 Applications

4.1 From Software-in-the-loop to Vehicle-in-the-loop

SiVIC [6] is a platform used to prototype virtual sensors (cameras, radar, laser, GPS...). Its objective is to reproduce, in the most truthful manner, the realistic aspect of a situation, the behavior of a vehicle, and the functioning of the sensors that can be embarked on such a vehicle. The main advantages of SiVIC are that it simulates situations that are difficult to reproduce in real life (such as obstacle avoidance or a vehicle fleet) and that it allows the use of several sensors.

RTMaps [7] is an asynchronous high-performance platform designed as an easy-to-use framework to develop and deploy C++ code on embedded ARM-based CPUs. With its datalogging capabilities for timestamped data streams (record and playback), it makes it possible to switch back and forth between real-time execution on the targeted systems and offline development.

Figure 6. PerSEE, SiVIC and RTMaps data flow diagram.

We have interconnected the PerSEE hardware platform with the simulation platform SiVIC. In this configuration coupling the real and virtual worlds, the outputs of SiVIC's virtual sensors are transmitted to the SEEKLOP board through the CAN interfaces available in RTMaps. CAN buses are used to communicate inertial data and radar frames: 2 CAN buses are implemented (CAN RADAR and CAN ADAS in Fig. 6). Images (1/4 PAL) are transmitted through a Gigabit Ethernet wired local network; extracted primitives also transit through this network. The local perception map is transmitted via a CAN bus to the applicative part: ADAS that take control of the powertrain, braking and warning systems. Data used for monitoring purposes (infotainment system and display) transit through a secondary GbE stream.

Three possible configurations to interface and test the PerSEE platform are available: processing in real time simulated data from the SiVIC platform; an offline mode replaying real data previously recorded with an experimental vehicle with onboard sensors; and a full real-time/real-life mode with an equipped vehicle. The first two modes are Software-in-the-Loop (SIL) systems and use virtual reality or recorded data. They are useful to prototype, validate and test ADAS applications under various conditions before implementing them in a real vehicle. The third mode is the ultimate one, as it is a Vehicle-in-the-Loop (VIL) system. The resulting SIL/VIL platform provides a complete environment for designing, prototyping, testing and validating pre-industrial embedded perception applications and ADAS.

5 From perception to vehicle control

Most automotive vision-based ADAS are passive and are only used to alert the driver. We have tested the PerSEE platform in an automotive system that controls the brakes and the longitudinal acceleration. In a European FP7 project, the full system was integrated in a TATA eVista vehicle to implement a Smart and Green ACC function [8]. In this prototype, a control layer uses the local map to handle the speed regulation and distance regulation that ACC requires. The goal is to help the driver optimize the speed profile and distance regulation of an ACC for an electric vehicle with regenerative capacity. The short-range perception is ensured by the PerSEE platform, relying on a frontal stereo camera rig (obstacles and lanes), two rear cameras (lanes) and a front radar (obstacles), as illustrated in Fig. 1. From perception to control, information is exchanged through the ADAS CAN, the control layer being embedded in the vehicle ECU; tests were run successfully in Gaydon (UK).
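As an indication of what such a control layer computes from the local map, here is a minimal longitudinal ACC-style law combining speed regulation (no selected track) and distance regulation (a preceding vehicle selected by the perception stage); the gains, time gap and limits are illustrative assumptions, and the real Smart and Green ACC of [8] additionally optimizes the speed profile for regenerative braking.

#include <algorithm>

// Longitudinal acceleration command in m/s^2.
double accStep(bool hasTarget,      // a preceding vehicle was selected
               double egoSpeed,     // m/s
               double setSpeed,     // m/s, driver-selected cruise speed
               double gap,          // m, distance to the selected track
               double targetSpeed)  // m/s, speed of the selected track
{
  const double kSpeed = 0.4, kGap = 0.2, kRel = 0.6;  // illustrative gains
  const double timeGap = 1.8;                         // s, desired headway
  double a;
  if (!hasTarget) {
    a = kSpeed * (setSpeed - egoSpeed);               // plain speed regulation
  } else {
    const double desiredGap = timeGap * egoSpeed + 5.0;  // +5 m standstill margin
    a = kGap * (gap - desiredGap) + kRel * (targetSpeed - egoSpeed);
  }
  return std::clamp(a, -3.0, 1.5);                    // comfort/safety limits
}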
6 Conclusion and perspectives

Other vision algorithms, such as night vision and fog vision enhancement, are under study in our lab to augment the local map with attributes of the environment feature. This information would enable a higher level of quality and a larger operating range for the cameras. Even though the PerSEE unit aims at pre-industrial embedded systems, ADAS are safety-critical systems. One of the perspectives of this work is to add a safety processor unit to PerSEE. One can notice the redundancies in the information: target detection by both radar and stereovision, evolution area and ego-lane. These redundancies could be used in a diagnostic module; it would be the next step in the hardware development of the PerSEE sensor fusion ECU.

References

[1] F. Porikli and L. Van Gool, "Special issue on car navigation and vehicle systems," Machine Vision and Applications, vol. 25, no. 3, pp. 545-546, 2014.

[2] S. A. Rodriguez Florez, V. Fremont, P. Bonnifait, and V. Cherfaoui, "Multi-modal object detection and localization for high integrity driving assistance," Machine Vision and Applications, vol. 25, no. 3, pp. 583-598, 2014.

[3] P. Furgale, P. Newman, R. Triebel, H. Grimmett, et al., "Toward automated driving in cities using close-to-market sensors: an overview of the V-Charge project," in IEEE Intelligent Vehicles Symposium, 2013.

[4] N. Fakhfakh, D. Gruyer, and D. Aubert, "Weighted V-disparity approach for obstacles localization in highway environments," in IEEE Intelligent Vehicles Symposium (IV), 2013.

[5] M. Revilloud, D. Gruyer, and E. Pollard, "An improved approach for robust road marking detection and tracking applied to multi-lane estimation," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV'13), 2013.

[6] D. Gruyer, C. Royere, N. du Lac, G. Michel, and J. Blosseville, "SiVIC and RTMaps, interconnected platforms for the conception and the evaluation of driving assistance systems," in ITS World Congress, 2006.

[7] J. Perez, D. Gonzalez, F. Nashashibi, G. Dunand, F. Tango, N. Pallaro, and A. Rolfsmeier, "Development and design of a platform for arbitration and sharing control applications," in Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS XIV), 2014, pp. 322-328.

[8] S. Glaser, O. Orfila, L. Nouveliere, and D. Gruyer, "Enhanced ACC for an electric vehicle with regenerative capacity: experimental results from the eFuture project," in Proc. of the 12th International Symposium on Advanced Vehicle Control (AVEC 2014), Tokyo, Japan, 2014.