
Ethics Guideline for the Intelligent Information Society

April 2018
Digital Culture Forum

CONTENTS

1. Background and Rationale
2. Purpose and Strategies
3. Definition of Terms
4. Common Principles
5. Guidelines
6. Policy Tasks
Annex: Guidelines Categorized by Stakeholder

1. Background and Rationale

The fourth industrial revolution is expected to fundamentally transform social, economic and industrial structures, the labor environment and the way individuals live. In particular, the technologies emerging in this era, such as artificial intelligence (AI), are expected to bring not only significant benefits but also negative changes to the social, economic and technological dimensions of our lives. As machine-based algorithms are projected to have profound effects on human judgment and behavior, the existing system of norms and values will also be fundamentally transformed. In addition, rising concerns about the side effects of applying intelligent information technologies in industry and society have sparked normative discussions on the scope of stakeholders' rights and responsibilities. A public discussion of ethical standards is therefore needed to minimize social resistance and disorder surrounding the universal use of intelligent information technologies and services.

A wide range of ethical guidelines for the use of intelligent information technologies already exist at home and abroad, mostly developed by scientists or legal experts. It is now necessary to shape ethical responses through collaboration between the public and private sectors, including the government. While most existing guidelines focus on reinforcing the responsibilities of developers and providers in the process of technology development, guidelines, systems and educational programs are also urgently needed to prevent harm from the misuse of such technologies and services. Korea's current policies likewise still concentrate on ethical standards for preventing side effects while promoting the digital industry; a more comprehensive approach is now required, encompassing detailed standards for each stakeholder in the intelligent information society.

2. Purpose and Strategies

The Ethics Guideline for the Intelligent Information Society aims to reinforce the ethics of responsibility in developing and providing intelligent information technologies and services, and to prevent their misuse by users, ultimately to achieve a human-oriented intelligent information society.

The Guidelines shall follow the precautionary principle in order to protect the social system from the potential risks of intelligent information technologies and to achieve a human-oriented intelligent information society. To this end, the Guidelines shall define ethical standards that mitigate the risks and side effects of these technologies and serve to expand human welfare, rights and liberty. By also setting out detailed behavioral standards where necessary, the Guidelines will contribute to creating a self-regulatory environment for the related industries.

These standards shall not hinder the growth of R&D and industry or impose an unfair burden on developers or providers. The Guidelines shall also contribute to facilitating the participation of citizens and users and to reinforcing their rights and capacities. Under the principles set out in the Guidelines, intelligent information technologies shall harmonize with humans in terms of moral values and ethical principles, ultimately helping humans.

3. Definition of Terms

"Intelligent information society" refers to a society that creates value in all areas of industry, the economy, society and culture, and public administration, and that moves forward through intelligent informatization.

"Intelligent information technologies" refers to one of the following technologies, any combination of them, or any technology that uses one or a combination of them:
- Technologies that perform learning, inference and decision-making through electronic means
- Technologies that process data between objects or between an object and a human, or technologies that allow the use, control or management of an object (M2M)
- Cloud computing technology and other technologies for data collection, analysis and processing
- Mobile or fixed-mobile technologies for the hyper-connected intelligent network

"Intelligent information services" refers to:
- Services defined in Article 2-6 of the Telecommunications Business Act, or acts of using such services to provide information or mediate the provision of information
- Services using intelligent information technologies
- Other services that enable intelligent informatization

"Developers" refers to persons who study, design and develop products or services using intelligent information technologies. Developers include researchers who develop new intelligent information technologies through academic research and professionals who are commissioned or hired by providers to develop intelligent information technologies and realize them in specific products.

"Providers" refers to those who supply products or services that use intelligent information technologies to the market or as public services. Providers include secondary providers who use and process intelligent information technologies or services to provide their own products or services.

"Users" refers to persons who use the products and services developed on the basis of intelligent information technologies. Users include those who directly use the products or services in their work or daily lives (direct users), those who consume services incorporating such technologies without being aware of it (indirect users), and all citizens living in an environment influenced by intelligent information technologies.

4. Common Principles

Selecting the common principles

In the previous information society, the ethics of the traditional industrial society, the ethics of the information society (Internet ethics) and the ethics of software developers existed side by side. The intelligent information society, however, has seen the introduction of new technologies such as smart, autonomous AI, amplifying existing adverse effects and giving rise to new ones.

The complexity of intelligent information technologies, including their contribution to universal welfare and social change as well as their self-learning and evolving features, calls for ethical principles that ensure publicness, accountability, controllability and transparency.

Common principles (PACT)

P - Publicness
Intelligent information technologies shall be helpful to as many people as possible, and the economic prosperity they bring shall be shared widely for the benefit of all mankind. (Related concepts: fairness, elimination of discrimination, and ensuring accessibility)

A - Accountability
There must be a clear distribution of responsibility for any incident related to intelligent information technologies and services, and each stakeholder must perform the given social duties, such as sharing safety information and protecting user rights. (Related concepts: responsibility, implementation of ethical procedures, and risk prevention)

C - Controllability
Measures for human control of intelligent information technologies and services, and preparatory measures against malfunction, must be established, with users' right of option guaranteed to the maximum extent. (Related concepts: possibility of control, risk management, and user autonomy)

T - Transparency
The opinions of users, consumers and citizens must be reflected as much as possible in decision-making on technology development, service design and product planning. Information on potential risks at the utilization stage of the technologies and services shall be openly shared, and the entire process of handling personal information must be carried out appropriately. (Related concepts: explainability, sharing risk information, and user/citizen engagement)

5. Guidelines

(1) Publicness

[Guidelines for developers]

P1. Elimination of socially discriminatory elements in technology development
Developers must eliminate any element of discrimination by gender, race, religion, region or ethnicity from the entire process of developing intelligent information technologies and services, and prioritize the development of technologies that contribute to the universal welfare of mankind. (A minimal illustrative check is sketched after the provider guidelines below.)

P2. Ensuring accessibility to protect the socially disadvantaged
Developers must guarantee accessibility for socially vulnerable and disadvantaged groups who are likely to be neglected or alienated in the intelligent information society.

P3. Ensuring the publicness of intelligent information technologies
Developers shall make efforts to develop intelligent information technologies with public features so that they can be used to solve social issues.

[Guidelines for providers]

P4. Supplying products that meet the public interest
The planning, distribution and utilization of products must lead to positive results serving the public interest.

P5. Order placement based on good intentions
Providers must place orders for intelligent information technology and service development in good faith, and maintain a continued interest in solving social issues through the use of such technologies.

P6. Balance between commercial profits and public contributions
Providers shall consider the social impact of intelligent information technologies and services and make it a rule to strike a balance between commercial (private) profits and public contributions.
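The following sketch is not part of the Guideline; it is only one hedged illustration of how a development team might routinely check for the kind of discriminatory element P1 addresses. It uses plain Python with hypothetical group labels, decisions and a threshold chosen purely for illustration, comparing positive-outcome rates of an automated decision across groups and flagging large gaps for human review.

# Illustrative only: a minimal disparate-impact style check a developer might run
# during development (hypothetical data, threshold and group labels).

from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group_label, decision) pairs with decision in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, decision in records:
        counts[group][0] += int(decision)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def flag_disparity(records, max_gap=0.2):
    """Return groups whose positive rate falls more than max_gap below the best-served group."""
    rates = positive_rate_by_group(records)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best - r > max_gap}

if __name__ == "__main__":
    # Hypothetical model decisions tagged with a protected attribute.
    sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
              ("group_b", 0), ("group_b", 0), ("group_b", 1)]
    print(positive_rate_by_group(sample))  # approx. 0.67 for group_a, 0.33 for group_b
    print(flag_disparity(sample))          # group_b flagged for further human review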

[Guidelines for users]

P7. Prohibition of use with malicious intent
Users shall not use intelligent information technologies and services for malicious purposes, and shall ensure that their use of products does not violate others' freedom to use them.

P8. Participation in improving products for the public good
Users shall participate by providing fair reviews of intelligent information technologies and services and by suggesting improvements.

P9. Compliance with consumer behavior principles at all times
Users shall routinely follow the principles of consumer behavior when purchasing or using intelligent information technology products (services), in particular by checking the terms and conditions or user manuals.

(2) Accountability

[Guidelines for developers]

A1. Sharing of responsibilities
Developers must share responsibilities and results throughout the entire life cycle of a product, from development to utilization.

A2. Continuous participation in information sharing and technology refreshes
Developers must develop intelligent information technologies and services in a fair and faithful manner based on the requirements of the ordering parties, and continue to take part in information exchange among developers and in technology refreshes.

A3. Meeting quality certification standards
Developers shall make efforts to develop products that meet domestic and international quality assurance standards.

A4. Research and development based on faithful implementation of ethical procedures
Developers shall faithfully follow the ethical procedures pertaining to research and development for the safer development of intelligent information technologies and services.

[Guidelines for providers]

A5. Responsibility sharing and establishment of compensation and liability principles
Providers must share responsibility for any social harm resulting from the distribution, expansion and utilization of intelligent information technology products (services), and establish fair and reasonable compensation and liability principles in preparation for malfunctions or incidents.

A6. Ensuring user rights
Providers must understand user rights, do their best to guarantee them, and provide the information and education necessary for the proper utilization of products.

A7. Defining the conditions and scope of machine-based decision-making in intelligent information services
Providers must define strict conditions and methodologies for the machine-based decision-making delegated by humans in intelligent information services.

A8. Participation in public discussions for risk prevention
Providers must maintain a continued interest in the impact of intelligent information technologies and services on humans and society, and actively participate in public discussions to develop measures against risks.

[Guidelines for users]

A9. Understanding users' ethical responsibilities
Users must be aware of the impact of their product (service) use on others' rights or safety, and understand their responsibilities.

A10. Right to raise liability issues
Users shall maintain an interest in how intelligent information technologies and services operate and what impact they have, and may raise liability issues with developers and providers regarding any violation of user rights or safety incident.

A11. Right to request the sharing of safety information and its institutionalization
Understanding the impact of their product use on social customs and culture, users may request the sharing of safety information and its institutionalization.

A12. Compliance with guidelines when changing, renewing or disposing of products
Users shall pay close attention to, and follow, the given guidelines when changing, renewing or disposing of any product (service) so as to prevent side effects.

(3) Controllability

[Guidelines for developers]

C1. Comprehensive review of exceptional cases
Developers must perform a fundamental and comprehensive review of the issues that may arise not only in ordinary cases but also in exceptional cases.

C2. Implementation of continuous quality management
Developers must continuously perform quality management on the products they have developed in order to maintain human controllability of the technologies and services.

C3. Development of technical control systems
To ensure safety and prevent incidents arising from technical malfunctions or risks in AI algorithms, developers must build, from the early stage of development, a technical control system that can unconditionally bring the operation to a stop. (A minimal illustrative sketch of such a control mechanism follows at the end of this section.)

[Guidelines for providers]

C4. Risk control in the process of product distribution
Providers shall identify the risk elements that may arise in the process of product distribution, conduct thorough prior verification, and develop measures through which humans can reject the choices made by machines.

C5. Establishment of measures for safety verification and control
Providers must actively engage in discussions on standardizing the procedures and criteria for safety verification of intelligent information technology products (services), and establish safety and control measures regarding the self-replication or self-improvement of systems.

C6. Ensuring users' right of option
Providers must guarantee, to the maximum extent, users' right to choose the intelligent information technology products (services) they want to use.

[Guidelines for users]

C7. Reinforcement of competence to use intelligent information technologies
Users shall make efforts to obtain relevant information and keep learning so that they can understand the properties of intelligent information technologies and exercise proactive control over them.

C8. Prohibition of arbitrary operation
Users shall not operate intelligent information technology products (services) arbitrarily beyond the permitted limits, so that the results of product (service) utilization remain predictable.
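The following sketch is likewise not part of the Guideline; it is a minimal, hedged illustration of the kind of technical control system C3 calls for and of the human rejection of machine choices described in C4. All names and the decision logic are hypothetical; the example simply wraps an automated decision function in plain Python so that a human operator can halt operation unconditionally and reject individual machine proposals.

# Illustrative only: a minimal human-controllable wrapper around an automated
# decision component (all names and the decision logic are hypothetical).

import threading

class ControlledDecisionSystem:
    def __init__(self, decide):
        self._decide = decide              # the automated decision function
        self._stopped = threading.Event()  # unconditional stop switch (cf. C3)

    def emergency_stop(self):
        """Human operator halts all automated operation, unconditionally."""
        self._stopped.set()

    def propose(self, case):
        """Return a machine proposal, or None if the system has been stopped."""
        if self._stopped.is_set():
            return None
        return self._decide(case)

    def decide_with_human_review(self, case, human_approves):
        """A human may reject the machine's choice (cf. C4); rejection falls back to human handling."""
        proposal = self.propose(case)
        if proposal is None or not human_approves(proposal):
            return ("handled_by_human", case)
        return ("approved", proposal)

if __name__ == "__main__":
    system = ControlledDecisionSystem(decide=lambda case: {"case": case, "action": "approve"})
    print(system.decide_with_human_review("case-001", human_approves=lambda p: True))
    system.emergency_stop()
    print(system.decide_with_human_review("case-002", human_approves=lambda p: True))  # handled by human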

(4) Transparency

[Guidelines for developers]

T1. Provision of necessary data in emergency situations
In emergency situations requiring collaboration among stakeholders, developers must share the necessary data and cooperate with other stakeholders to find solutions.

T2. Prohibition of developing concealed functions
Developers must not develop any concealed function unknown to the ordering parties, in order to prevent infringements of users' privacy and safety risks in advance.

T3. Proactive risk prediction and information sharing with providers
Developers shall make efforts to predict, from both broad and narrow perspectives, the various consequences and side effects that the operation or use of a particular product (service) may cause, and share the results with providers when necessary.

[Guidelines for providers]

T4. Sharing risk information with users
Upon recognizing any identified or potential risk in the distribution and utilization of a product (service), providers must notify users and the general public and share the relevant information.

T5. Prohibition of unjust utilization of user information
Providers must not utilize the information of users or third parties in an unjust manner.

T6. Application of social impact assessment results
Providers must assess in advance the negative impact of a product (service) on any area directly linked to the quality of human life, and apply the results to technology development and product (service) design.

[Guidelines for users]

T7. Right to request explanations
Users may request that developers or providers explain how a particular algorithm has been developed for a specific product (service).

T8. Duties as monitors of personal information use
Users shall be aware that intelligent information technologies collect and process a wide range of personal information, and shall examine the process to make sure personal information is handled appropriately.

T9. Duty of sharing consumer information
Users must pay attention to abnormalities and side effects of product (service) use, and share the experiences and information obtained as consumers in a responsible and appropriate manner.

6. Policy Tasks

The following policy tasks are to be carried out in the future:

Developing a healthy research culture and building domestic and international cooperation networks
- (Creating a healthy research culture) A culture of cooperation, trust and transparency shall be built among researchers and developers working on intelligent information technologies.
- (Linking technical development to policies) There should be constructive and sound exchanges among researchers, developers, providers and policy makers.
- (Global cooperation) As the ethical issues surrounding intelligent information technologies and services have global reach, greater efforts shall be made to establish international norms in cooperation with international organizations, multinational businesses and global NGOs.

Developing laws and regulations to facilitate development for the public good and prevent risks
- (Establishing and operating a governance framework) It is necessary to establish the (tentatively named) Ethics Commission for Intelligent Information Service Users to ensure that decision-making on technology development, service design and product planning allows users to engage while preserving the public good.
- (Providing policy incentives) Support shall be given to policies that encourage technologies useful and beneficial to humans to be developed and diffused first.
- (Establishing a scheme for social impact assessment) A scheme should be established to assess the social impact of technologies and products in order to prevent risks in advance.
- (Establishing laws and regulations for user rights protection) Laws and regulations should be established to define the responsibilities and duties of developers and providers and to protect user rights.

Encouraging the self-controlled use of intelligent information technologies and services through policies that prevent misuse and build users' competence
- (Nationwide campaigns and public relations activities) Nationwide campaigns and public relations activities should be carried out to prevent the negative impacts of the social penetration of intelligent information technologies, for instance the sharing of illegal and harmful information becoming smarter and its distribution channels becoming more diversified.
- (Promoting self-controlled use of technologies and services through users' competence development) Users should be given support, such as guidelines for understanding the features of intelligent information technologies and services and other training programs, so that they can strengthen their right of option and their capacity for the self-controlled use of such technologies and services.

ANNEX. Guidelines Categorized by Stakeholder

The guidelines developed under the four common principles (PACT) can be reorganized from the perspectives of developers, providers and users for each application stage, as follows.

Developers (13 guidelines)
- Demand analysis (2): P3. Ensuring the publicness of intelligent information technologies (P); A1. Sharing of responsibilities (A)
- Product development (9): A4. Research and development based on faithful implementation of ethical procedures (A); P1. Elimination of socially discriminatory elements in technology development (P); P2. Ensuring accessibility to protect the socially disadvantaged (P); A3. Meeting quality certification standards (A); C3. Development of technical control systems (C); C1. Comprehensive review of exceptional cases (C); T2. Prohibition of developing concealed functions (T); T3. Proactive risk prediction and information sharing with providers (T); A2. Continuous participation in information sharing and technology refreshes (A)
- Utilization support (2): C2. Implementation of continuous quality management (C); T1. Provision of necessary data in emergency situations (T)

Providers (13 guidelines)
- Demand analysis (6): P4. Supplying products that meet the public interest (P); P6. Balance between commercial profits and public contributions (P); A7. Defining the conditions and scope of machine-based decision-making in intelligent information services (A); C5. Establishment of measures for safety verification and control (C); C6. Ensuring users' right of option (C); T6. Application of social impact assessment results (T)
- Product development (1): P5. Order placement based on good intentions (P)
- Provision and distribution (3): A5. Responsibility sharing and establishment of compensation and liability principles (A); A6. Ensuring user rights (A); C4. Risk control in the process of product distribution (C)
- Utilization support (3): T4. Sharing risk information with users (T); T5. Prohibition of unjust utilization of user information (T); A8. Participation in public discussions for risk prevention (A)

Users (12 guidelines)
- Demand analysis (1): C7. Reinforcement of competence to use intelligent information technologies (C)
- Provision and distribution (3): P9. Compliance with consumer behavior principles at all times (P); A9. Understanding users' ethical responsibilities (A); T7. Right to request explanations (T)
- Utilization support (8): P7. Prohibition of use with malicious intent (P); C8. Prohibition of arbitrary operation (C); A12. Compliance with guidelines when changing, renewing or disposing of products (A); T9. Duty of sharing consumer information (T); P8. Participation in improving products for the public good (P); A10. Right to raise liability issues (A); A11. Right to request the sharing of safety information and its institutionalization (A); T8. Duties as monitors of personal information use (T)

(P) Publicness; (A) Accountability; (C) Controllability; (T) Transparency