How Explainability is Driving the Future of Artificial Intelligence A Kyndi White Paper

The term "black box" has long been used in science and engineering to denote technology systems and devices that function without divulging their inner workings. The inputs and outputs of a black box system may be visible, but the actual implementation of the technology is opaque, hidden from understanding or justification. The black box concept has been exploited by everyone from Silicon Valley start-ups to Wall Street investment firms, usually in their efforts to protect intellectual property and maintain competitiveness. "We've developed this powerful new algorithm to generate awesome results and returns for you, but don't ask us how it works or why. Just trust us."

But "just trust us" is not cutting it anymore as new technologies such as artificial intelligence (AI) seep into virtually every facet of life. As AI becomes an increasingly essential part of how organizations of all types and sizes operate, there is a growing recognition that the old black box approach used by technology companies (including AI providers) is not sufficient or appropriate. The fact is, many companies doing business in highly regulated sectors, as well as governmental entities that operate under constant oversight scrutiny, need to be able to explain the hows and whys of AI-generated results. In many cases, the law mandates this level of openness and accountability.

A November 2017 commentary in the Wall Street Journal outlined the growing concerns about the AI "black box":

"Everyone wants to know: Will artificial intelligence doom mankind or save the world? But this is the wrong question. In the near future, the biggest challenge to human control and acceptance of artificial intelligence is the technology's complexity and opacity, not its potential to turn against us like HAL in 2001: A Space Odyssey. This black box problem arises from the trait that makes artificial intelligence so powerful: its ability to learn and improve from experience without explicit instructions."

The MIT Technology Review recently published an article on the same topic, highlighting the growing demand for AI solutions whose results are explainable and auditable. The article quotes an executive from a leading financial company who requires explainability in his AI solutions as a matter of regulatory compliance:

"Adam Wenchel, vice president of machine learning and data innovation at Capital One, says the company would like to use deep learning for all sorts of functions, including deciding who is granted a credit card. But it cannot do that, because the law requires companies to explain the reason for any such decision to a prospective customer. Late last year Capital One created a research team, led by Wenchel, dedicated to finding ways of making these computer techniques more explainable."

By creating explainable AI solutions, Kyndi is also helping to mitigate the human bias that can arise in the process of extracting knowledge and answers from data.

Ryan Welsh, Founder and CEO of Kyndi, a Silicon Valley-based AI solutions company, believes that the technology industry must step up its efforts to embrace explainable AI and make its results more explainable and auditable. Kyndi is building the first Explainable AI platform for government, financial services, and healthcare.

"Our mission is to build Explainable AI products and solutions that help to optimize human cognitive performance. A cornerstone of that mission is never to operate as a black box," said Welsh. "Explainable AI means that the system can justify its reasoning. Kyndi's product exists because Deep Learning is a black box and cannot be used in regulated industries where organizations are required to explain the reasons for any advice or decision."

The Wall Street Journal's commentary weighed in on the value of creating AI that is both accountable and explainable:

"A better solution is to make artificial intelligence accountable. The concepts of accountability and transparency are sometimes conflated, but the former does not involve disclosure of a system's inner workings. Instead, accountability should include explainability, confidence measures, procedural regularity, and responsibility. Explainability ensures that nontechnical reasons can be given for why an artificial-intelligence model reached a particular decision. Confidence measures communicate the certainty that a given decision is accurate. Procedural regularity means the artificial-intelligence system's decision-making process is applied in the same manner every time. And responsibility ensures individuals have easily accessible avenues for disputing decisions that adversely affect them."
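The Journal's four elements map naturally onto a concrete data record. Here is a minimal sketch in Python, with entirely hypothetical class and field names (neither the Journal nor any vendor prescribes this structure), of a decision record that carries an explanation, a confidence measure, a fixed model version for procedural regularity, and an appeal channel for responsibility:

```python
# Illustrative only: a minimal record of what an "accountable" AI decision
# might need to capture, following the four elements named above. All names
# here are hypothetical, not part of any real product or API.
from dataclasses import dataclass


@dataclass
class AccountableDecision:
    decision: str         # the outcome, e.g. "credit_card_denied"
    explanation: str      # nontechnical reason a layperson can read
    confidence: float     # certainty that the decision is accurate (0.0-1.0)
    model_version: str    # a fixed version implies the same process every time
    appeal_channel: str   # where the affected person can dispute the decision


def render_adverse_action_notice(d: AccountableDecision) -> str:
    """Format the pieces a regulator might expect an applicant to receive."""
    return (
        f"Decision: {d.decision}\n"
        f"Reason: {d.explanation}\n"
        f"Confidence: {d.confidence:.0%}\n"
        f"Decided by model {d.model_version} (applied uniformly to all applicants)\n"
        f"To dispute this decision, contact: {d.appeal_channel}"
    )


if __name__ == "__main__":
    print(render_adverse_action_notice(AccountableDecision(
        decision="credit_card_denied",
        explanation="Debt-to-income ratio above the approval threshold.",
        confidence=0.92,
        model_version="underwriting-v4.1",
        appeal_channel="appeals@example.com",
    )))
```

The point of the sketch is simply that each of the four elements becomes a required field, so a decision cannot be recorded without them.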

US Government Advancing Explainable AI Through Major DARPA Project

The US Department of Defense (DOD) is pushing Explainable AI because it cannot invest in technology black boxes based solely on the promise of "trust us." The DOD's Defense Advanced Research Projects Agency (DARPA) has responded to the growing need for greater explainability in AI by launching a major Explainable AI research project. Here is how DARPA describes the rationale for its groundbreaking Explainable AI program:

"Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machine's current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI, especially explainable machine learning, will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.

The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models."
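One common research pattern behind that first aim, offered here purely as a generic illustration rather than DARPA's or Kyndi's actual technique, is to train a small, human-readable surrogate model to mimic an opaque one, then measure both the accuracy given up and how faithfully the surrogate tracks the original. A sketch using scikit-learn on synthetic data:

```python
# A generic illustration (not DARPA's method) of the trade-off XAI targets:
# a shallow decision tree is trained to mimic an opaque model, and we measure
# how much predictive accuracy the readable model gives up. Requires
# scikit-learn; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Opaque, high-performing model: accurate but hard to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Explainable surrogate: a depth-3 tree trained to imitate the black box's
# predictions rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black-box accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
print("surrogate accuracy:", accuracy_score(y_test, surrogate.predict(X_test)))
# "Fidelity": how often the readable model agrees with the opaque one.
print("surrogate fidelity:", accuracy_score(black_box.predict(X_test),
                                            surrogate.predict(X_test)))
print(export_text(surrogate))  # the rule set a human can actually read
```

The gap between the two accuracy figures is the price of readability; XAI's stated aim is to shrink that gap rather than accept it.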

Explainable AI Initiatives On the Rise Worldwide

A recent Wired article examined how government entities across the US and around the world have come to the same conclusion as DARPA. They have realized that the old AI black box is neither appropriate nor, in many cases, legal, and that AI results need to be explainable and justifiable. The Wired story, "AI Experts Want to End 'Black Box' Algorithms in Government," reported on the broad range of Explainable AI initiatives that are now cropping up around the world:

"On Sunday the UK government released a review that examined how to grow the country's AI industry. It includes a recommendation that the UK's data regulator develop a framework for explaining decisions made by AI systems. On Monday New York's City Council debated a bill that would require city agencies to publish the source code of algorithms used to target individuals with services, penalties, or police resources. On Tuesday a European Commission working group on data protection released draft guidelines on automated decision making, including that people should have the right to challenge such decisions. The group's report cautioned that automated decision-making 'can pose significant risks for individuals' rights and freedoms which require appropriate safeguards.' Its guidance will feed into a sweeping new data protection law due to come into force in 2018, known as the GDPR."

Trust and Regulatory Compliance Driving Growing Demand for Explainable AI

To realize AI's full potential, trust is crucial. Trust comes from understanding and being able to justify the reasoning behind an AI system's conclusions and results.

Kyndi believes that Explainable AI achieves the level of trust that is so important for the accelerated growth and acceptance of AI. Crucially, it does so without the all too familiar black box approach.

For Kyndi, Explainable AI means that its software's reasoning is apparent to the user. This visibility allows users to have confidence in the system's outputs, be aware of any uncertainties, anticipate how the software will work in the future, and know how to improve the system. Such knowledge is essential to confident analysis and decision making.

Explainable AI is also necessary to provide a natural feedback mechanism so that users can tailor the results to their needs. Because users know why the system produced specific outputs, they also know how to make the software smarter. Using a process called calibration, Kyndi's customers can teach the software to produce better results in the future. Explainable AI thus becomes the foundation for ongoing iteration and improvement between human and computer.

Kyndi's novel approach to AI, which unifies probabilistic and logical methods, was built with explainability as a fundamental requirement. A critical function of the software is to answer questions, recognize similarities, and find analogies rapidly. These features enable Kyndi to build models that are made up of a series of questions, for which the software attempts to generate answers from the data provided by customers. Kyndi's solutions justify their reasoning by pointing to specific instances in user data and highlighting the relevant words and phrases. This auditability lets government and enterprise users confidently assess the results when applying them to further analysis or to immediate decisions. All of this information is readily available through Kyndi's user-friendly interface.

Kyndi's Explainable AI software is especially relevant to regulated sectors (government, financial services, and healthcare) where organizations are required to explain the reason for any decision. Because Kyndi's software logs every step of its reasoning process, users can transform regulated business functions with AI. And they will always do so with the knowledge that Kyndi's AI system allows them to justify their decisions when necessary.
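To make that pattern concrete, here is a deliberately toy sketch, with hypothetical names throughout (Kyndi has not published its implementation, and real systems are far more sophisticated), of an answer that points back to a passage in the user's own data, highlights the overlapping words, reports a confidence score, and logs each step for later audit:

```python
# Toy illustration of the explanation pattern described above: the answer
# cites a specific passage, highlights the supporting words, attaches a
# confidence score, and records every step in an audit log. All names are
# hypothetical; this is not Kyndi's algorithm.
import string

audit_log = []


def _tokens(text: str) -> set[str]:
    """Lowercase, punctuation-stripped word set."""
    return {w.strip(string.punctuation) for w in text.lower().split()}


def answer_with_evidence(question: str, documents: list[str]) -> dict:
    audit_log.append(f"question received: {question!r}")
    q_terms = _tokens(question)
    best_doc, best_overlap = documents[0], set()
    for doc in documents:
        overlap = q_terms & _tokens(doc)
        audit_log.append(f"scored {doc[:40]!r}... overlap={len(overlap)}")
        if len(overlap) > len(best_overlap):
            best_doc, best_overlap = doc, overlap
    confidence = len(best_overlap) / max(len(q_terms), 1)
    audit_log.append(f"selected passage with confidence {confidence:.2f}")
    # Highlight the words the answer is grounded in, so a reviewer can
    # check the justification against the source text.
    highlighted = " ".join(
        f"**{w}**" if w.lower().strip(string.punctuation) in best_overlap else w
        for w in best_doc.split()
    )
    return {"evidence": highlighted, "confidence": confidence}


print(answer_with_evidence(
    "Which sectors require explainable decisions?",
    ["Regulated sectors such as finance require explainable decisions.",
     "Black box models are popular in advertising."],
))
print("\n".join(audit_log))
```

Even in this toy form, the output is auditable in the sense the paper describes: a reviewer can see which passage was cited, which words supported it, how certain the system was, and every scoring step that led there.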

Underscoring its Explainable AI Leadership, Kyndi Named to AI 100 for 2018

In recognition of its leadership and innovation in Explainable AI, Kyndi was recently named to the prestigious AI 100 for 2018. Sponsored by CB Insights, the Second Annual AI 100 honors a select group of promising private companies working on groundbreaking artificial intelligence technology. Kyndi and the other AI companies selected for this year's AI 100 were culled from a group of more than 1,000 technology firms.

Kyndi Founder and CEO Ryan Welsh commented on Kyndi's naming to the 2018 AI 100: "Being named to CB Insights' AI 100 is an incredible honor. It is a major industry recognition, and I think it underscores the importance of moving past black box machine learning towards Explainable AI products that have auditable reasoning capabilities. Explainability is especially crucial for critical organizations that are required to explain the reason for any decision."

Here is how CB Insights summed up Kyndi's achievements in its recent AI 100 news release: "Founded in 2014, Kyndi transforms business processes by offering auditable AI products. Its novel approach to AI, which unifies probabilistic and logical methods, enables organizations to analyze massive amounts of data to create actionable knowledge significantly faster and without having to sacrifice explainability."

Kyndi's Explainable AI Platform supports the following solutions: Intelligence, Defense, Compliance (i.e., for financial services and healthcare), and Research.

Explainability is the Future of AI, Right Now

Explainability is at the core of Kyndi's breakthrough AI products and solutions. Explainability allows users to have confidence in the AI system's outputs, be aware of any uncertainties, anticipate how the software will work in the future, and know how to improve the system. Such knowledge is essential to confident analysis and decision making. It's what gives Kyndi's customers a strong competitive edge.

For more information on Kyndi's Explainable AI products and solutions, visit www.kyndi.com or call (650) 437-7440.

About Kyndi

Kyndi is an artificial intelligence company that is building the first Explainable AI platform for government, financial services, and healthcare. Kyndi transforms business processes by offering auditable AI systems. Our product exists because critical organizations cannot use black box machine learning when they are required to explain the reason for any decision. Based in Silicon Valley, Kyndi is backed by leading venture investors.