How do you teach AI the value of trust?

AI is different from traditional IT systems and brings with it a new set of opportunities and risks. To build trust in AI, organizations will need to go beyond monitoring the AI system's reliability and performance and consider whether it has the appropriate level of explainability and accountability. Consideration should also be given to broader ethical and societal impacts. Keith Strier, EY Global & EY Americas Advisory AI Leader

Trusted artificial intelligence (AI) explained

AI is not a single technology but a diverse set of methods and tools that evolve continuously in tandem with advances in data science, chip design, cloud services and end-user adoption. The most common examples of AI methods and tools include natural language processing, machine learning, deep learning, computer vision, conversational intelligence and neural networks. One fundamental difference between AI and non-AI systems is that a traditional program is coded to execute commands, while an AI system is coded to learn. In this way, an AI system has the unique, humanlike ability to improve its performance over time, whether through supervised or unsupervised learning.

Although AI is frequently the headline, the real narrative is much broader. EY recommends a systems view that goes beyond AI and emphasizes how robotic, intelligent and autonomous systems are the new tools of digital transformation. As a practical matter, enterprises that are further along in their digital journey will be able to adopt AI and realize its benefits more quickly. Such enterprises are characterized by enterprise-scale mobile and cloud infrastructure, agile IT processes, comprehensive data integration and governance, and, most critically, a culture that encourages experimentation and rewards constructive failures.

AI is being applied to an ever-wider set of business scenarios, automating activities and tasks traditionally performed by humans. Consequently, it is increasingly important for the designers, architects and developers of such systems to be fully aware of downstream and adjacent implications, including social, regulatory and reputational issues. They should also be aware of emerging best practices in ethically aligned design and in the governance of intelligent and autonomous systems.
It's well established that robotic, intelligent and autonomous systems can malfunction, be deliberately corrupted, and acquire (and codify) human biases in ways that may or may not be immediately obvious. The first step in minimizing these risks is to promote awareness of them, and then to proactively design trust into every facet of the system from day one. This trust should extend to the strategic purpose of the system, the integrity of data collection and management, the governance of model training and the rigor of the techniques used to continuously monitor system and algorithmic performance.

AI technologies differ significantly in the opportunities and risks they create, so it is important that organizations consider what type of AI is appropriate for their particular use case. Before starting an AI project, organizations should ensure that the following four conditions have been considered and met to the degree required for their specific use case:

Ethics. The AI system needs to comply with ethical and social norms, including corporate values. This covers the human behavior involved in designing, developing and operating AI, as well as the behavior of the AI as a virtual agent. This condition, more than any other, introduces considerations that have historically not been mainstream for traditional technology, including moral behavior, respect, fairness, bias and transparency.

Social responsibility. The potential societal impact of the AI system should be carefully considered, including its impact on the financial, physical and mental well-being of humans and our natural environment. For example, potential impacts might include workforce disruption, skills retraining, discrimination and environmental effects.

Accountability and explainability. The AI system should have a clear line of accountability to an individual, and the AI operator should be able to explain the AI system's decision framework and how it works. This means demonstrating a clear grasp of how the AI uses and interprets data, how it makes decisions, how it evolves as it learns, and the consistency of its decisions across sub-groups.

Reliability. The AI system should be reliable and perform as intended. This involves testing the functionality and decision framework of the AI system to detect unintended outcomes, system degradation or operational shifts, not just during initial training or modelling but throughout its ongoing operation.
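The ongoing-monitoring idea behind the reliability condition can be sketched as a simple drift check: compare a deployed model's rolling accuracy against its validation baseline and flag degradation. This is an illustrative sketch only; the baseline, tolerance and window parameters are hypothetical, not values from the EY framework.

```python
from collections import deque

class ReliabilityMonitor:
    """Tracks the rolling accuracy of a deployed model and flags degradation.

    Illustrative sketch: baseline_accuracy, tolerance and window are
    hypothetical parameters chosen for demonstration.
    """

    def __init__(self, baseline_accuracy, tolerance=0.05, window=1000):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual):
        """Log one prediction against its later-observed outcome."""
        self.outcomes.append(1 if prediction == actual else 0)

    def degraded(self):
        """True when rolling accuracy falls below baseline minus tolerance."""
        if len(self.outcomes) < 100:  # too few samples to judge reliably
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```

In operation, each scored case would be recorded as its true outcome becomes known, and a `degraded()` result would trigger review, retraining or rollback rather than silent continued operation.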

Building trust in AI

EY developed a trusted AI framework to help enterprises understand the slate of new and expanded risks that may undermine trust not only in these systems, but also in products, brands and reputations. The implications of a failed AI cascade beyond operational challenges: failure may also lead to litigation, negative media attention, customer churn, reduced profitability and regulatory scrutiny. This is more than an academic concern, given recent media and regulatory attention around the potential misuse of personal data to power algorithms that influence, if not shape, behavior at the societal level.

Core to EY's framework is a unique emphasis on the systems in which AI is embedded. This systems-oriented view holds that the risks of AI go beyond the underlying mathematics. To achieve and sustain trust in AI, an enterprise must understand, govern, fine-tune and protect all of the components embedded within and around the AI system. These components include data sources, sensors, firmware, software, hardware, user interfaces and networks, as well as human operators and users.

[Figure: AI system view — data sources (messaging; speech, video and image data; sensors; location and geo-spatial data), enablers (cloud, storage, devices, compute, platforms), system types (collaborative, robotic, industrial, autonomous, social), foundational AI tools and techniques (natural language processing, machine learning, deep learning, image and facial recognition, expert systems, speech, semantic parsing, algorithms), analytics (descriptive, diagnostic, predictive, prescriptive, conversational, emotional) and human behavior (human-in-the-loop; communication and sharing patterns; data protection and privacy expectations; design and UX standards).]

Illustratively, consider the complexity of the components within an autonomous vehicle that must work together to deliver its intended value. A network of sensors feeds data to an onboard AI system that in turn controls multiple mechanical systems. Each of these components plays a critical role in the successful operation of the whole system, and each can also represent a single point of failure in the reliability and performance of that system. Therefore, trusting an autonomous vehicle to fulfill its purpose requires that we collectively trust every component of that system in its individual design and performance. Put differently, trust is achieved, sustained or lost at the system level.

Case study: autonomous bus

An autonomous bus nears a stop sign and must decide how quickly to stop. Its intelligent navigation executes a complex set of near-instantaneous decisions and communications between the bus's physical sensors, software and braking mechanism. Multiple components enable the vehicle to perform human-like cognitive functions such as recognizing and identifying the stop sign, interpreting location through GPS positioning, evaluating surrounding objects, and controlling and calibrating the speed of the braking mechanism, all based on its perception of speed, road conditions and distance to the stop sign, to balance safety with comfort for passengers. Each action introduces the potential for failure:

- See and recognize the stop sign
- Scan the surrounding environment
- Determine the distance to the stop sign with GPS positioning
- Assess current speed
- Calculate the desired braking pressure
- Apply the brakes
- Adjust as conditions change

With the increasing impact AI is having on business operations, boards need to understand how AI technologies will impact their organization's business strategy, culture, operating model and sector. They need to consider how their dashboards are changing and how they can evaluate the sufficiency of management's governance over AI, including ethical, societal and functional impacts. They need to take a proactive role in understanding how AI is being used across their business operations and its impact on their risk management and finance functions. Jeanne Boillet, EY Global Assurance Innovation Leader
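The sense-decide-act cycle in the bus case study can be sketched as a simplified control loop. The physics is standard constant-deceleration kinematics (a = v² / 2d); the comfort and maximum deceleration limits, and the perception dictionary keys, are illustrative assumptions rather than any real vehicle's specification.

```python
def braking_decision(speed_mps, distance_m, comfort_decel=2.5, max_decel=8.0):
    """Choose a deceleration rate (m/s^2) to stop within distance_m.

    Uses constant-deceleration kinematics: a = v^2 / (2 * d).
    comfort_decel and max_decel are hypothetical limits for illustration.
    """
    if distance_m <= 0:
        return max_decel  # already at or past the line: emergency stop
    required = speed_mps ** 2 / (2 * distance_m)
    if required > max_decel:
        return max_decel  # cannot stop in time; brake as hard as possible
    return max(required, 0.0)  # gentle braking when little is needed

def control_loop(perception):
    """One tick of the simplified sense-decide-act cycle from the case study.

    perception: assumed dict produced by sensor fusion (camera, GPS, odometry).
    Returns the commanded deceleration in m/s^2.
    """
    if not perception["stop_sign_detected"]:   # vision component
        return 0.0
    distance = perception["distance_to_sign_m"]  # GPS + range sensing
    speed = perception["speed_mps"]              # odometry
    return braking_decision(speed, distance)
```

The point of the sketch is the systems view: a wrong answer from any single input (a missed sign, a bad distance estimate, a stale speed reading) propagates directly into the braking command, which is why trust must be established per component.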

Creating trust in AI will require both technical and cultural solutions. To be accepted by users, AI must be understandable, meaning its decision framework can be explained and validated. It must also perform as expected and be incorruptible and secure. EY's trusted AI framework emphasizes five attributes necessary to sustain trust:

- Performance: The AI's outcomes are aligned with stakeholder expectations and are delivered at the desired level of precision and consistency.
- Bias: Inherent biases arising from the development team's composition, the data and the training methods are identified and addressed through the AI's design. The AI system is designed with consideration for the needs of all impacted stakeholders and to promote a positive societal impact.
- Resiliency: The data used by the AI system's components, and the algorithm itself, is secured from unauthorized access, corruption and adversarial attack.
- Explainability: The AI's training methods and decision criteria can be understood, are documented, and are readily available for challenge and validation by a human operator.
- Transparency: When interacting with an AI algorithm, an end user is given appropriate notification and an opportunity to select their level of interaction. User consent is obtained as required for the data captured and used.

In practical terms, AI is not implemented but applied, and when it is applied with these attributes in mind through the following continuous three-step innovation process, the outcome is trusted AI:

- Purposeful design: Design and build systems that purposefully integrate the right balance of robotic, intelligent and autonomous capabilities to advance well-defined business goals, mindful of context, constraints, readiness and risks.
- Agile governance: Track emergent issues across social, regulatory, reputational and ethical domains to inform the processes that govern the integrity of a system, its uses, architecture and embedded components, data sourcing and management, model training, and monitoring.
- Vigilant supervision: Continuously fine-tune, curate and monitor systems to ensure reliability in performance, identify and remediate bias, and promote transparency and inclusiveness.

[Figure: Trusted AI lifecycle — problem identification (design risks), data acquisition and data preparation (data risks), modeling and training (algorithmic risks), validation, deployment and monitoring (performance risks), framed by purposeful design, agile governance and vigilant supervision and by the attributes transparent, explainable, unbiased and resilient.]

Teaching AI is analogous to parenting a child: you need to teach AI not only how to do a task but also all the social norms and values that determine acceptable behaviour. Training AI in an immersive fashion requires that developers build in ethical and risk considerations at the outset of AI design and development. Cathy Cobey, EY Global Trusted AI Advisory Leader
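The bias-identification step under vigilant supervision can be made concrete with a standard fairness check such as the demographic parity gap: the largest difference in positive-outcome rate between any two subgroups. This is a minimal sketch of one common metric, not the framework's prescribed method; what counts as an acceptable gap is a policy decision, not shown here.

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per subgroup.

    outcomes: iterable of 0/1 decisions (1 = favorable outcome).
    groups:   parallel iterable of subgroup labels.
    """
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rate between any two subgroups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())
```

Run periodically over a system's recent decisions, a widening gap is exactly the kind of emergent issue that agile governance should surface for investigation and remediation.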

Emerging governance practices in establishing trusted AI

Although there is growing consensus on the need for AI to be ethical and trustworthy, the development of AI functionality is outpacing developers' ability to ensure that it is transparent, unbiased, secure, accurate and auditable. Organizations need to develop an AI governance model that embeds ethical design principles into AI projects and overlays existing technology governance structures. Leading practices in establishing a trusted AI ecosystem include:

AI ethics board. A multi-disciplinary advisory board providing independent advice and guidance on ethical considerations in AI development. Advisors should be drawn from ethics, law, philosophy, technology, privacy, regulation and science. The advisory board should report to and/or be governed by the board of directors.

AI design standards. AI design policies and standards for the development of AI, including an AI ethical code of conduct and AI design principles. The AI design standards should define and govern the AI governance and accountability mechanisms that safeguard users, follow social norms and comply with laws and regulations.

AI inventory and impact assessment. An inventory of all algorithms, including key details of each AI, generated using software discovery tools. Each algorithm in the inventory should be subject to an impact assessment of the risks involved in its development and use.

Validation tools. Validation tools and techniques to ensure that algorithms are performing as intended and are producing accurate, fair and unbiased outcomes. These tools can also be used to monitor changes to an algorithm's decision framework.

Awareness training. Educating executives and AI developers on the potential legal and ethical considerations in the development of AI, and on their responsibility to safeguard impacted users' rights, freedoms and interests.

Independent audits. Undergoing independent AI ethical and design audits by a third party against your AI and technology policies and standards, and against international standards, to enhance users' trust in your AI system. An independent audit would evaluate the sufficiency and effectiveness of the governance model and controls across the AI lifecycle, from problem identification to model training and operation.
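The AI inventory and impact assessment practice can be represented as a minimal registry record. Every field name below is a hypothetical illustration of the "key details" the text mentions; a real inventory would be shaped by the organization's own standards.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmRecord:
    """One entry in an AI inventory; all field names are illustrative."""
    name: str
    owner: str                     # accountable individual, per the framework
    purpose: str
    training_data_sources: list = field(default_factory=list)
    impact_level: str = "unassessed"   # e.g. low / medium / high
    last_audit: str = ""               # date of most recent independent audit

    def needs_impact_assessment(self):
        """Flag entries that have not yet been through an impact assessment."""
        return self.impact_level == "unassessed"
```

Even a registry this simple makes two of the practices above actionable: it names an accountable owner for each algorithm, and it makes unassessed algorithms queryable rather than invisible.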

Contacts

Keith Strier, EY Global & EY Americas Advisory AI Leader, +1 949 547 5758, keith.strier@ey.com
Cathy Cobey, EY Global Trusted AI Advisory Leader, +1 416 941 1806, cathy.r.cobey@ca.ey.com
Jeanne Boillet, EY Global Assurance Innovation Leader, +33 1 46 93 62 24, jeanne.boillet@fr.ey.com

EY | Assurance | Tax | Transactions | Advisory

About EY
EY is a global leader in assurance, tax, transaction and advisory services. The insights and quality services we deliver help build trust and confidence in the capital markets and in economies the world over. We develop outstanding leaders who team to deliver on our promises to all of our stakeholders. In so doing, we play a critical role in building a better working world for our people, for our clients and for our communities.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. For more information about our organization, please visit ey.com.

© 2018 EYGM Limited. All Rights Reserved. EYG no. 03880-183Gbl. BMC Agency GA 1008437. ED None.

In line with EY's commitment to minimize its impact on the environment, this document has been printed on paper with a high recycled content. This material has been prepared for general informational purposes only and is not intended to be relied upon as accounting, tax or other professional advice. Please refer to your advisors for specific advice.

ey.com