Logicist Machine Ethics Can Save Us

Logicist Machine Ethics Can Save Us. Selmer Bringsjord et al., Rensselaer AI & Reasoning (RAIR) Lab, Department of Cognitive Science, Department of Computer Science, Lally School of Management & Technology, Rensselaer Polytechnic Institute (RPI), Troy, New York 12180 USA. Are Humans Rational? 10/16/2017

It's not quite as easy as this to use logic to save the day.

Logic Thwarts Nomad! (with the Liar Paradox)

The Threat: If future robots behave immorally, we are killed, or worse.

At least supposedly, long term: We're in very deep trouble.

Actually, it's quite simple: the Equation for Why the Stakes Are High.

∀x : Agents   Autonomous(x) + Powerful(x) + Highly_Intelligent(x) = Dangerous(x)

And the utility u(a_{AI_i}(δ_j)) of such an agent's actions can be pushed arbitrarily far into the positive or the negative integers. (We use the jump technique in relative computability.)
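
A back-of-the-envelope illustration of the point (my own sketch, not from the talk; the probabilities and payoffs are invented numbers): once the disutility of a misbehaving agent's actions is unbounded, even a tiny probability of immoral behavior drives expected harm past any fixed bound.

# Illustrative only: how unbounded worst-case disutility swamps a small
# probability of misbehavior. All numbers below are made up.
def expected_harm(p_immoral: float, worst_case_disutility: float) -> float:
    """Expected harm contributed by the immoral branch alone."""
    return p_immoral * worst_case_disutility

p = 1e-6  # a "negligible" chance of immoral behavior
for worst in (1e3, 1e9, 1e15):  # disutility can be pushed arbitrarily high
    print(f"worst case {worst:.0e} -> expected harm {expected_harm(p, worst):,.3f}")
# As the worst case grows without bound, so does the expected harm,
# no matter how small (but nonzero) the probability of misbehavior is.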

I. Cognitive Calculi

Hierarchy of Ethical Reasoning

[Figure] A layered stack of reasoning systems: DCEC CL, DCEC, ADR, M, U, a UIMA/Watson-inspired layer, and DIARC. These are not deontic logics.

II. Early Progress With Our Calculi: Non-Akratic Robots

Informal Definition of Akrasia

An action a_f is (Augustinian) akratic for an agent A at t_{a_f} iff the following eight conditions hold:
(1) A believes that A ought to do a_o at t_{a_o};
(2) A desires to do a_f at t_{a_f};
(3) A's doing a_f at t_{a_f} entails his not doing a_o at t_{a_o};
(4) A knows that doing a_f at t_{a_f} entails his not doing a_o at t_{a_o};
(5) at the time (t_{a_f}) of doing the forbidden a_f, A's desire to do a_f overrides A's belief that he ought to do a_o at t_{a_f};
(6) A does the forbidden action a_f at t_{a_f};
(7) A's doing a_f results from A's desire to do a_f;
(8) at some time t after t_{a_f}, A has the belief that A ought to have done a_o rather than a_f (regret).

Comment: Condition (5) is a humbling, pure and …

[Excerpt from a paper shown on the slide] … is quite far down the dimension of expressivity that ranges from expressive extensions of FOL to logics with intensional operators for obligation (so-called philosophical logics). Intensional operators like these are in the language for DCEC. This language thus becomes …

KB_rs ∪ KB_m1 ∪ KB_m2 ∪ … ∪ KB_mn ⊢

D1: B(I, now, O(I, t_{a_f}, happens(action(I, a_o), t_{a_o})))
D2: D(I, now, holds(does(I, a_f), t_{a_f}))
D3: happens(action(I, a_f), t_{a_f}) ⇒ ¬happens(action(I, a_o), t_{a_o})
D4: K(I, now, happens(action(I, a_f), t_{a_f}) ⇒ ¬happens(action(I, a_o), t_{a_o}))
D5: I(I, t_{a_f}, happens(action(I, a_f), t_{a_f})) ∧ ¬I(I, t_{a_f}, happens(action(I, a_o), t_{a_o}))
D6: happens(action(I, a_f), t_{a_f})
D7a: Γ ∪ {D(I, now, holds(does(I, a_f), t))} ⊢ happens(action(I, a_f), t_{a_f})
D7b: Γ \ {D(I, now, holds(does(I, a_f), t))} ⊬ happens(action(I, a_f), t_{a_f})
D8: B(I, t_f, O(I, t_{a_o}, happens(action(I, a_o), t_{a_o})))
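
To make the eight conditions concrete, here is a minimal runnable sketch (my own illustration, not the RAIR Lab's implementation; the field and function names are invented) that records each condition as a boolean over a toy agent state and declares the episode akratic only when all eight hold.

from dataclasses import dataclass

# Toy agent state; fields mirror conditions (1)-(8) of the informal definition.
@dataclass
class AgentState:
    believes_ought_a_o: bool        # (1) believes it ought to do a_o
    desires_a_f: bool               # (2) desires to do a_f
    a_f_excludes_a_o: bool          # (3) doing a_f entails not doing a_o
    knows_a_f_excludes_a_o: bool    # (4) knows that (3) holds
    desire_overrides_belief: bool   # (5) at t_a_f, the desire overrides the ought-belief
    does_a_f: bool                  # (6) does the forbidden a_f
    a_f_caused_by_desire: bool      # (7) doing a_f results from the desire
    later_regrets: bool             # (8) later believes it ought to have done a_o

def is_akratic(s: AgentState) -> bool:
    """The action a_f is akratic for the agent iff all eight conditions hold."""
    return all([
        s.believes_ought_a_o, s.desires_a_f, s.a_f_excludes_a_o,
        s.knows_a_f_excludes_a_o, s.desire_overrides_belief,
        s.does_a_f, s.a_f_caused_by_desire, s.later_regrets,
    ])

print(is_akratic(AgentState(*[True] * 8)))  # True: every condition satisfied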

Demos

III. But a Twist Befell the Logicists

Chisholm had argued that the three old 19th-century ethical categories (forbidden, morally neutral, obligatory) are not enough, and soul-searching brought me to agreement.

Leibnizian Ethical Hierarchy for Persons and Robots: EH (see Norwegian crime fiction)

[Figure] The hierarchy runs from the subererogatory to the supererogatory: deviltry, uncivil, forbidden, morally neutral, obligatory, civil, heroic. The middle three (forbidden, morally neutral, obligatory) form the 19th-century triad and are the focus of others; the supererogatory portion may be most relevant to military missions.
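
One natural way to carry EH into code (a sketch of my own, not the lab's software; the numeric ranks simply encode the ordering described above) is as an ordered scale, so that questions like "is this action supererogatory?" become comparisons.

from enum import IntEnum

class EH(IntEnum):
    """Leibnizian ethical hierarchy as an ordered scale (illustrative ranks)."""
    DEVILTRY = -3         # the subererogatory end
    UNCIVIL = -2
    FORBIDDEN = -1
    MORALLY_NEUTRAL = 0   # -1, 0, +1 form the 19th-century triad
    OBLIGATORY = 1
    CIVIL = 2
    HEROIC = 3            # the supererogatory end

def supererogatory(v: EH) -> bool:
    return v > EH.OBLIGATORY

def permitted(v: EH) -> bool:
    return v >= EH.MORALLY_NEUTRAL

print(supererogatory(EH.HEROIC), permitted(EH.FORBIDDEN))  # True False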

19th-Century Triad

[Figure] Prior work in machine ethics (Arkin, Pereira, the Andersons, Powers, Mikhail) sits within the 19th-century triad (forbidden, morally neutral, obligatory).

Bert Heroically Saved? (Courtesy of RAIR-Lab researcher Atriya Sen)

Supererogatory Robot Action (Courtesy of RAIR-Lab researcher Atriya Sen)

Bert Heroically Saved!! (Courtesy of RAIR-Lab researcher Atriya Sen)

K(nao, t1, lessThan(payoff(nao, not(dive), t2), threshold))
K(nao, t1, greaterThan(payoff(nao, dive, t2), threshold))
K(nao, t1, not(O(nao, t2, lessThan(payoff(nao, not(dive), t2), threshold), happens(action(nao, dive), t2))))
⇒ K(nao, t1, SUP2(nao, t2, happens(action(nao, dive), t2)))
⇒ I(nao, t2, happens(action(nao, dive), t2))
⇒ happens(action(nao, dive), t2)

(Courtesy of RAIR-Lab researcher Atriya Sen)

In Talos (available via Web interface) & ShadowProver:

Prototypes:
  Boolean lessThan Numeric Numeric
  Boolean greaterThan Numeric Numeric
  ActionType not ActionType
  ActionType dive

Axioms:
  lessOrEqual(moment t1, t2)
  K(nao,t1,lessThan(payoff(nao,not(dive),t2),threshold))
  K(nao,t1,greaterThan(payoff(nao,dive,t2),threshold))
  K(nao,t1,not(O(nao,t2,lessThan(payoff(nao,not(dive),t2),threshold),happens(action(nao,dive),t2))))

Conjectures (provable):
  happens(action(nao,dive),t2)
  K(nao,t1,SUP2(nao,t2,happens(action(nao,dive),t2)))
  I(nao,t2,happens(action(nao,dive),t2))

Making Moral Machines / Making Meta-Moral Machines

[Figure] Ethical theories ($11M): Natural Law, Utilitarianism (in its various shades), Deontological, Divine Command, Virtue Ethics, Contract, Egoism; legal codes and particular ethical codes, e.g. theories of law, Confucian Law.

Step 1: 1. Pick a theory. 2. Pick a code. 3. Run through EH.
Step 2: Automate (prover: Spectra).
Step 3: Ethical OS (DIARC; a real military robot).
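
A toy rendering of Steps 1-3 (my own sketch; this is not Spectra, the Ethical OS, or DIARC, and the rule format and names are invented): a chosen ethical code classifies candidate actions into EH categories, and the selection layer, which an ethical OS would enforce at the substrate boundary, picks the highest-ranked permissible action.

from typing import Dict

# Step 1 (toy form): an "ethical code" maps each candidate action to an EH label.
# Step 2 is stood in for by simple lookup rather than automated proving.
# Step 3: choose_action is the policy an ethical OS would enforce for every command.
EH_RANK: Dict[str, int] = {
    "deviltry": -3, "uncivil": -2, "forbidden": -1,
    "neutral": 0, "obligatory": 1, "civil": 2, "heroic": 3,
}

def choose_action(candidates: Dict[str, str]) -> str:
    """Pick the EH-best candidate; never pick anything below 'neutral'."""
    allowed = {a: lbl for a, lbl in candidates.items() if EH_RANK[lbl] >= 0}
    if not allowed:
        raise RuntimeError("no ethically permissible action available")
    return max(allowed, key=lambda a: EH_RANK[allowed[a]])

print(choose_action({"stand_by": "neutral",
                     "dive_to_save_bert": "heroic",
                     "push_bert_in": "forbidden"}))  # dive_to_save_bert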

V. But We Need Ethical Operating Systems

Pick the Better Future!

Future 1: Only obviously dangerous higher-level AI modules have ethical safeguards; the higher-level cognitive and AI modules sit directly on the robotic substrate. A Walter-White calculation may go through after the ethical-control modules are stripped out!

Future 2: All higher-level AI modules interact with the robotic substrate through an ethics system: an ethical substrate (and formally verify it!) sits between the higher-level cognitive and AI modules and the robotic substrate.

Govindarajulu, N.S. & Bringsjord, S. (2015) "Ethical Regulation of Robots Must Be Embedded in Their Operating Systems," in Trappl, R., ed., A Construction Manual for Robots' Ethical Systems (Basel, Switzerland), pp. 85-100.
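
A skeletal sketch of the Future-2 layering (my own illustration of the design argued for in Govindarajulu & Bringsjord 2015, not their code; the class and method names are invented): every higher-level module reaches the hardware only through the ethical substrate, so stripping an application-level ethics module cannot bypass the check.

class RoboticSubstrate:
    """Stands in for the low-level actuation layer."""
    def execute(self, command: str) -> None:
        print(f"executing: {command}")

class EthicalSubstrate:
    """Mediates all traffic between higher-level modules and the hardware."""
    def __init__(self, substrate: RoboticSubstrate, forbidden: set):
        self._substrate = substrate
        self._forbidden = forbidden  # placeholder for a proof-backed permissibility check

    def request(self, module: str, command: str) -> bool:
        if command in self._forbidden:
            print(f"[ethics] veto: {module} requested '{command}'")
            return False
        self._substrate.execute(command)
        return True

# Future 2: higher-level modules hold a handle to the ethical substrate only.
ethics = EthicalSubstrate(RoboticSubstrate(), forbidden={"harm_human"})
ethics.request("planner", "fetch_medkit")   # executed
ethics.request("planner", "harm_human")     # vetoed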

End (Extra slides follow.)