Intelligent Technology for More Advanced Autonomous Driving

FEATURED ARTICLES: Autonomous Driving Technology for Connected Cars

Autonomous driving is recognized as an important technology for dealing with emerging societal problems that include traffic accidents, the aging population, and the diminishing workforce. Various companies are accelerating the pace of technology development with the aim of extending use of the technology from highways to ordinary roads from 2020 onwards. In contrast to the comparatively uniform driving environment on highways, the situation on ordinary roads is more complex and so calls for more advanced forms of autonomous driving. In order to achieve autonomous driving suitable for ordinary roads, Hitachi is striving to apply and commercialize sensing and decision-making techniques and AI. This article describes examples of work on dynamic maps, model predictive control, and an AI implementation technique, and also the prospects for the future.

Takao Kojima
Kenichi Osada, Ph.D.
Hiroaki Ito
Yuki Horita
Teppei Hirotsu
Goichi Ono, Dr. Eng.

1. Introduction

With reducing traffic accidents and the associated deaths and injuries being one of the major challenges facing the automotive sector, there is considerable activity in the research and development of technology for preventive safety to assist driving and for autonomous driving to replace the functions of the human driver. Both sensing techniques for determining what is happening around the vehicle and recognition and decision-making techniques for safely driving the vehicle through the detected environment are important for achieving high levels of driving assistance and autonomous driving. If artificial intelligence (AI) is then added to this mix, it opens up the potential for driving in more complex environments. This article describes work being done by Hitachi on technologies for recognition and decision-making and for the application of AI to support more advanced autonomous driving.
2. Overview of Autonomous Driving and Work on Making Technology More Intelligent

2.1 Overview of Sensing, Recognition, Decision-making, and Control for Autonomous Driving

Autonomous driving is implemented using sensing, recognition, decision-making, and control (see Figure 1). Sensing means the use of stereo cameras, radar, or other sensors to detect objects.

Figure 1: Block Diagram of Autonomous Driving System. Autonomous driving is implemented using sensing, recognition, decision-making, and control. (C2X: car to X; TCU: telematics communication unit; MPU: map position unit; GNSS: global navigation satellite system)
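As a rough illustration of this four-stage loop (the stage names follow Figure 1, but the data types, decision rule, and command values below are invented for illustration and are not Hitachi's implementation):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    kind: str        # "vehicle", "pedestrian", "sign", ...
    position: tuple  # (x, y) relative to the ego vehicle, metres

def sense(raw_frames: List[dict]) -> List[DetectedObject]:
    """Sensing: turn raw camera/radar frames into detected objects."""
    return [DetectedObject(f["kind"], f["pos"]) for f in raw_frames]

def recognize(objects: List[DetectedObject]) -> dict:
    """Recognition: fuse detections into a dynamic map (here, a simple grouping)."""
    dynamic_map = {}
    for obj in objects:
        dynamic_map.setdefault(obj.kind, []).append(obj.position)
    return dynamic_map

def decide(dynamic_map: dict) -> str:
    """Decision-making: pick a manoeuvre from the dynamic map (toy rule)."""
    return "brake" if dynamic_map.get("pedestrian") else "keep_lane"

def control(decision: str) -> dict:
    """Control: translate the decision into actuator command values."""
    return {"brake": {"throttle": 0.0, "brake": 1.0},
            "keep_lane": {"throttle": 0.3, "brake": 0.0}}[decision]

# One pass through the pipeline:
frames = [{"kind": "pedestrian", "pos": (12.0, 1.5)},
          {"kind": "vehicle", "pos": (30.0, 0.0)}]
commands = control(decide(recognize(sense(frames))))
```

Each stage consumes the previous stage's output, which is the essential structure of the block diagram; the real system replaces each toy function with the far richer processing described below.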
Recognition is made up of: sensor fusion, meaning the combining of sensing information from the various sensors (vehicles, pedestrians, signage, road markings, and so on); the precise identification of vehicle location; map fusion, meaning the merging with the map of objects identified by sensing; motion prediction of objects, meaning predicting the behavior of objects around the vehicle; and dynamic map generation, meaning the creation of path, spatial, and risk maps to express this information as data in a form that can be used for decision-making and control.

Decision-making and control, in turn, is made up of vehicle movement planning, the generation of candidate trajectories, trajectory evaluation, and driving the vehicle along the chosen trajectory. Vehicle movement planning manages the status of all aspects of autonomous driving and generates the overall vehicle movement at the lane level (such as choosing which lane to drive in). The generation of candidate trajectories is performed based on factors such as the dynamic characteristics of the vehicle. Trajectory evaluation takes account of upcoming risks to choose the best of the candidate trajectories. Driving the vehicle along the chosen trajectory is done by calculating the control command values to send to the actuators.

2.2 Work on Making Technology Intelligent Enough for Level 4 and Ordinary Roads

As extending the use of autonomous driving to include ordinary roads in the future and reaching level-4 or level-5 automation will involve dealing with situations that are difficult to handle using conventional rule-based control, the adoption of new intelligent technologies such as deep learning or model predictive control will be needed. Technical innovation over recent years has made possible better-than-human levels of sensing and identification in image processing applications using deep learning, and also the ability to predict the movement of nearby vehicles and other objects.
It has also become possible to generate more appropriate vehicle paths than can be obtained by rule-based designs, including by using model predictive control to take account of the predicted movements of surrounding objects when generating paths.

Hitachi Review Vol. 67, No. 1

While these calculations tend to impose a heavier computational load than past methods, embedded system devices capable of such computation have become available and are starting to be installed on vehicles equipped for autonomous driving. However, the large number of different deep learning algorithms that exist means that appropriate methods need to be chosen and the computational load reduced before this technology can be put into practical use. Although still at the research and development phase, it is anticipated that this technology will be crucial to autonomous driving.

3. Dynamic Maps

Vehicles equipped for autonomous driving need to recognize with high accuracy what is happening in the driving environment (other vehicles, intersections, and so on) based on data from sensors and maps, and to pass this information to the autonomous decision-making (control) functions with a certain data representation.

A typical example of a data representation already in use is the Advanced Driver Assistance Systems Interface Specification (ADASIS), an industry-standard interface for providing static digital maps to advanced driver assistance systems (ADASs). This standard has been applied to the development of longitudinal speed control techniques, such as adaptive cruise control (ACC). It provides a way of representing the relative position of information about the surrounding environment along the road (the vehicle path). However, it is not yet suitable for autonomous driving that includes lateral driving control (i.e., steering control), because it has yet to be applied to representing detailed topography at the level of lanes. Moreover, since a lane-level detailed representation leads to larger data sizes and greater complexity in use, ways of representing this data efficiently and simply will be needed to enable control by electronic control units (ECUs) with limited computing and memory capacity.

Accordingly, Hitachi has developed a hierarchical hybrid data representation method to efficiently and flexibly provide the detailed lane-level information about the surrounding environment that is needed for autonomous driving. The method has the following two main features (see Figure 2):
(1) A two-layered structure, with an abstracted representation at the road level (layer 1) and a detailed representation at the lane level (layer 2)
(2) Two different coordinate systems for representing information about the surrounding environment using, respectively, coordinates relative to the vehicle path and relative spatial coordinates

Figure 2: Structure of Dynamic Map (Extension of ADASISv2 Protocol). A two-tier structure is used that is split between an abstracted representation at the road level (layer 1) and a detailed representation at the lane level (layer 2) (1). Similarly, two different coordinate systems are used to represent information about the surrounding environment using, respectively, coordinates relative to the vehicle path and relative spatial coordinates (2). (ADASIS: advanced driver assistance systems interface specification)

Layer 1 of the two-layered structure uses the ADASIS protocol that is already widely deployed in actual products, while layer 2 provides the additional representation required for autonomous driving. This provides support for autonomous driving while still preserving compatibility with existing products using the ADASIS protocol. For the two different coordinate systems, the method for representing information relative to the vehicle path provides a quick way to assess the surrounding environment at the macroscopic level, while the relative spatial coordinates enable a precise microscopic assessment. The ability to choose between the two different ways of representing information as needed facilitates the flexible development of diverse ADAS applications that include autonomous driving.

Table 1: Uses for AI Targeted by Hitachi. AI can be broadly divided into three types and Hitachi is investigating the best uses for each.

Type: Machine-learning-based (neural networks)
  Features: Uses machine learning for automatic learning of characteristic values
  Applications: Better sensing by cameras; prediction of the behavior of nearby people or vehicles, etc.

Type: Using statistics and probability (big data analytics)
  Features: Uncovers correlations in large data sets that would not be noticed by people
  Applications: Reducing passenger stress; identification of driver characteristics

Type: Algorithm-based (model predictive control)
  Features: Simple technique for using approximate models to solve complex calculations
  Applications: Functions that require high reliability (such as trajectory planning)
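As an illustration of how such a two-layer, two-coordinate representation might look in code (the class and field names here are hypothetical sketches, not the ADASIS wire format or Hitachi's actual data structures):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Layer1Path:
    """Layer 1: road-level abstraction (an ADASISv2-style path), covering
    kilometres ahead for longitudinal control."""
    path_id: int
    length_m: float

@dataclass
class Layer2Lane:
    """Layer 2: lane-level detail added for autonomous driving."""
    path_id: int                            # back-reference to the layer-1 path
    lane_id: int
    centerline: List[Tuple[float, float]]   # relative spatial coordinates (x, y), metres

@dataclass
class DynamicMap:
    paths: List[Layer1Path]   # coarse, path-relative view (macroscopic)
    lanes: List[Layer2Lane]   # precise, spatial view (microscopic)

    def lanes_within(self, radius_m: float) -> List[Layer2Lane]:
        """Restrict spatial-coordinate data to the immediate vicinity; this is
        the step that keeps the data volume sent to the control ECU small."""
        return [lane for lane in self.lanes
                if any(x * x + y * y <= radius_m ** 2 for x, y in lane.centerline)]

m = DynamicMap(
    paths=[Layer1Path(path_id=8, length_m=2000.0)],
    lanes=[Layer2Lane(8, 1, [(5.0, 0.0), (50.0, 0.2)]),
           Layer2Lane(8, 2, [(900.0, 3.5), (1500.0, 3.5)])])
nearby = m.lanes_within(200.0)   # only the nearby lane survives the vicinity filter
```

Filtering with `lanes_within()` mirrors the data reduction described in the text: full lane detail exists in layer 2, but only the immediate surroundings are handed to the decision-making functions.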
In terms of how the two coordinate systems are used, the coordinates relative to the vehicle path, mainly used for longitudinal (long-term) control, require the provision of information over a wide area in the order of kilometers, whereas the relative spatial coordinates require at most several hundred meters or so, given the high precision required for lateral control. Because of this difference in requirements, and by limiting the scope of information provided using relative spatial coordinates to just the immediate vicinity, the amount of data provided to the decision-making (control) functions of autonomous driving can be considerably reduced.

4. Use of AI for Recognition and Decision-Making

Expanding the use of autonomous driving from highways to ordinary roads will require advanced recognition and decision-making techniques that include not only the sensing of pedestrians, vehicles, and other objects, but also the ability to predict their movements. It will also require the ability to take account of these movement predictions when making turns at intersections so as to select a path and speed that are both safe and comfortable. Past methods have mainly involved the development of rule-based algorithms that itemize the potential movements of nearby objects based on what is happening around the vehicle and then drive the vehicle in such a way that it can cope with these possibilities. Unfortunately, because the number of combinations of potential movements by objects when driving on ordinary roads is so large, designing in the ability to cover all of these without any omissions is impractical. Instead, Hitachi has been looking at using AI to enable autonomous driving in complex environments. AI can be broadly divided into three different techniques, and Hitachi is investigating the best ways of using each of these (see Table 1). The first is the neural network.
Hitachi is investigating techniques for detecting nearby objects from camera video with greater precision, and for performing learning on the movements of other nearby vehicles and pedestrians in order to predict their future movements. In parallel with this study of algorithms, other work is aimed at simplifying (pruning) the resulting networks. This is explained further in the following section.

The second technique is big data analytics. Hitachi has developed its own AI called Hitachi AI Technology/H (AT/H). AT/H can automatically identify the elements that correlate strongly with key performance indicators (KPIs) in large and complex data sets. One example is a study that is using AT/H to analyze the movements of a vehicle under manual control and the surrounding conditions.

The findings can be incorporated into vehicle control in order to achieve reliable and comfortable autonomous driving that is closer to that of a human driver.

The third form of AI is model predictive control. This is described in more detail below.

4.1 Pruning Technique for Neural Networks

Figure 3 (a) shows a simple representation of how a node works. As shown in the figure, the input signals (X1, X2, and X3) are multiplied by their weighting coefficients (W1, W2, and W3) and the sum of the results is output: Z = X1·W1 + X2·W2 + X3·W3. Figure 3 (b) shows an example of a three-layer neural network using these nodes. The neural network requires a large number of multiplications and additions for each node, making it difficult to implement in real time on an ECU with limited computing and memory capacity.

To overcome these problems, Hitachi has been investigating a pruning technique that reduces the computational load while keeping the impact on sensing accuracy to a minimum by omitting calculations in which the weighting coefficient is small. Figure 3 (c) shows an example of a pruned network. This provides an efficient way to implement large neural networks on ECUs when they are needed for autonomous driving with high precision.

Figure 3: Neural Network Pruning Technique. The technique reduces the computational load while maintaining sensing accuracy by omitting calculations in which the weighting coefficient is small. (a) Example node calculation (Z = X1·W1 + X2·W2 + X3·W3); (b) example three-layer network; (c) example outcome of pruning.

4.2 Model Predictive Control

Model predictive control predicts the control output x for a control input u, and searches for the control input u that minimizes a cost function H representing control performance within a fixed time t, treating this as an optimization problem. Figure 4 shows an example of model predictive control used to generate vehicle trajectories.
It is made up of the generation of candidate trajectories and the cost function calculation. The generation of candidate trajectories searches for the optimal trajectory using an optimization solver that works by testing candidate vehicle trajectories against the cost function calculation. Optimization solvers can be broadly divided into iterative methods that use the derivative of the cost function, and heuristic methods that search for the solution directly using trial and error. While iterative methods impose less of a computational load, they risk getting stuck on a local solution that is not the best possible. Heuristic methods, in contrast, although able to search for the optimal solution over a wide range, impose a heavy computational load because they work by trial and error.

Figure 4: Generation of Trajectories Using Model Predictive Control. The generation of trajectories using model predictive control includes both generating candidate vehicle trajectories and calculating a cost function: the search loop generates candidate vehicle positions [x(0), x(1), ..., x(n)] against the risk map, and the cost function H = H1 + H2 combines collision risk (H1, the integral of the risk map Φ(x, y) over the region S occupied by the vehicle) and ride comfort (H2). (ABC: artificial bee colony algorithm)
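The search-and-evaluate loop of Figure 4 can be sketched as follows. This is a toy version under stated assumptions: plain random sampling stands in for the artificial bee colony solver, and the risk field, trajectory lattice, and comfort term are invented for illustration.

```python
import random

def risk(x: float, y: float) -> float:
    """Stand-in risk map Phi(x, y): risk is highest near an obstacle at (10, 0)."""
    return 1.0 / (1.0 + (x - 10.0) ** 2 + y ** 2)

def cost(trajectory) -> float:
    """H = H1 + H2 over candidate positions x(k), k = 0..n."""
    h1 = sum(risk(x, y) for x, y in trajectory)   # H1: collision risk along the path
    h2 = 0.0                                      # H2: ride comfort
    for k in range(2, len(trajectory)):
        # lateral acceleration approximated by a second difference; its square
        # penalizes uncomfortable trajectories
        accel = trajectory[k][1] - 2 * trajectory[k - 1][1] + trajectory[k - 2][1]
        h2 += accel ** 2
    return h1 + h2

def sample_trajectory(n=20, rng=random):
    """One candidate: drive forward at a randomly chosen constant lateral offset."""
    offset = rng.uniform(-3.0, 3.0)
    return [(k * 1.0, offset) for k in range(n)]

def best_trajectory(candidates=200, seed=0):
    """Search loop: generate candidates, score each with H, keep the minimum."""
    rng = random.Random(seed)
    return min((sample_trajectory(rng=rng) for _ in range(candidates)), key=cost)
```

A real solver would iteratively perturb and recombine promising candidates, as the artificial bee colony algorithm does, rather than sampling independently; the essential structure of generating candidates and ranking them by H = H1 + H2 is the same.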

Fortunately, advances in computers over recent years have opened up the possibility that the calculations can be performed fast enough for use in real-time control. Accordingly, Hitachi undertook a comparison of various heuristic methods (genetic algorithms, particle swarm optimization, and the artificial bee colony algorithm) for use in this way, selecting the artificial bee colony algorithm on the basis of its ability to handle a large number of variables (scalability), its ability to avoid local solutions, its execution speed, and its ease of parallel implementation.

The cost function H calculates the suitability of each trajectory, using the candidate future vehicle locations x(k) (k = 0, 1, ..., n) output by the generation of candidate trajectories as its inputs. The cost function H is made up of two terms: H1, indicating the likelihood of a collision with a moving object, and H2, indicating the level of ride comfort. The collision likelihood H1 is obtained by integrating the risk map output by the function for motion prediction of objects over the region S occupied by the vehicle. As the level of ride comfort is deemed to be better the lower the vehicle acceleration and rate of change of acceleration, H2 is calculated by integrating the squares of these two parameters over time. In this way, the calculation is able to determine a trajectory for the vehicle through a complex environment encompassing a number of moving objects that avoids collisions and maintains ride comfort.

5. Conclusions

As autonomous driving and driving assistance become more advanced and their scope of application broadens, the systems used for driving will also require a high level of safety.
Together with evaluation and testing techniques, Hitachi intends to continue working toward the early implementation of autonomous driving systems that can help overcome societal challenges by developing the recognition and decision-making techniques and intelligent technologies described in this article.

References
1) Y. Horita et al., "Extended Electronic Horizon for Automated Driving," ITS Telecommunications (ITST), 2015 14th International Conference (Dec. 2015).
2) T. Hirotsu et al., "Efficient Implementation of Non-Linear Model Predictive Control on an Embedded ECU for Automated Driving in Urban Environments," ETNET2016 (Mar. 2016), in Japanese.

Authors

Takao Kojima
System Control Research Department, Center for Technology Innovation Controls, Research & Development Group, Hitachi, Ltd. Current work and research: Development of autonomous driving systems.

Kenichi Osada, Ph.D.
Advanced Sensing Technology Development Department, Advance Development Center, Technology Development Division, Hitachi Automotive Systems, Ltd. Current work and research: Development of autonomous driving systems. Society memberships: IEEE Fellow.

Hiroaki Ito
Advanced Sensing Technology Development Department, Advance Development Center, Technology Development Division, Hitachi Automotive Systems, Ltd. Current work and research: Development of autonomous driving systems.

Yuki Horita
System Productivity Research Department, Center for Technology Innovation Systems Engineering, Research & Development Group, Hitachi, Ltd. Current work and research: Development of autonomous driving systems.

Teppei Hirotsu
System Control Research Department, Center for Technology Innovation Controls, Research & Development Group, Hitachi, Ltd. Current work and research: Development of electronic control units for autonomous driving systems. Society memberships: The Information Processing Society of Japan (IPSJ) and the Institute of Electronics, Information and Communication Engineers (IEICE).

Goichi Ono, Dr. Eng.
Information Electronics Research Department, Center for Technology Innovation Electronics, Research & Development Group, Hitachi, Ltd. Current work and research: Development of AI implementations for autonomous driving systems.