The tiny changes that can cause AI to fail

Machines still have a long way to go before they learn like humans do, and that's a potential danger to privacy, safety and more.

By Aviva Hope Rutkin, 11 April 2017

The year is 2022. You're riding along in a self-driving car on a routine trip through the city. The car comes to a stop sign it's passed a hundred times before, but this time, it blows right through it. To you, the stop sign looks exactly the same as any other. But to the car, it looks like something entirely different. Minutes earlier, unbeknownst to either you or the machine, a scam artist stuck a small sticker onto the sign: unnoticeable to the human eye, inescapable to the technology.

In other words? The tiny sticker smacked on the sign is enough for the car to see the stop sign as something completely different from a stop sign. It may sound far-fetched. But a growing field of research proves that artificial intelligence can be fooled in more or less the same way, seeing one thing where humans would see something else entirely. As machine learning algorithms increasingly find their way into our roads, our finances and our healthcare system, computer scientists hope to learn more about how to defend them against these "adversarial" attacks before someone tries to bamboozle them for real.

(Image: Artificial intelligence fuels our everyday lives in increasingly inextricable ways, from self-driving cars to household appliances that self-activate. Credit: Getty Images)

"It's something that's a growing concern in the machine learning and AI community, especially because these algorithms are being used more and more," says Daniel Lowd, assistant professor of computer and information science at the University of Oregon. "If spam gets through or a few emails get blocked, it's not the end of the world. On the other hand, if you're relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher."

Whether a smart machine malfunctions, or is hacked, hinges on the very different way that machine learning algorithms "see" the world. In this way, to a machine, a panda could look like a gibbon, or a school bus could read as an ostrich. In one experiment, researchers from France and Switzerland showed how such perturbations could cause a computer to mistake a squirrel for a grey fox, or a coffee pot for a macaw.

How can this be? Think of a child learning to recognise numbers. As she looks at each one in turn, she starts to pick up on certain common characteristics: ones are tall and slender, sixes and nines contain one big loop while eights have two, and so on. Once she's seen enough examples, she can quickly recognise new digits as fours or eights or threes, even if, thanks to the font or the handwriting, they don't look exactly like any other four or eight or three she's ever seen before.

Machine learning algorithms learn to read the world through a somewhat similar process. Scientists will feed a computer hundreds or thousands of (usually labelled) examples of whatever it is they'd like the computer to detect. As the machine sifts through the data (this is a number, this is not, this is a number, this is not) it starts to pick up on features that give the answer away. Soon, it's able to look at a picture and declare, "This is a five!" with high accuracy. In this way, both human children and computers alike can learn to recognise a huge array of objects, from numbers to cats to boats to individual human faces.

But, unlike a human child, the computer isn't paying attention to high-level details like a cat's furry ears or the number four's distinctive angular shape. It's not considering the whole picture. Instead, it's likely looking at the individual pixels of the picture for the fastest way to tell objects apart. If the vast majority of number ones have a black pixel in one particular spot and a couple of white pixels in another particular spot, then the machine may make a call after checking only that handful of pixels.

Now, think back to the stop sign again. With an imperceptible tweak to the pixels of the image, or what experts call "perturbations", the computer is fooled into thinking that the stop sign is something it isn't.
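To make that pixel-level shortcut concrete, here is a minimal sketch in Python. It illustrates the general idea rather than reproducing any of the research mentioned here: the model choice (a plain logistic regression on scikit-learn's small digits dataset) and the step sizes are my own assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train an ordinary digit classifier on labelled examples.
X, y = load_digits(return_X_y=True)
X = X / 16.0                            # scale pixel values to [0, 1]
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                # an image the model gets right
true_label = int(clf.predict([x])[0])

# Fast-gradient-style perturbation: nudge every pixel a small step in
# the direction that favours some other class over the true one. For a
# linear model, that direction is just the difference of weight vectors.
target = (true_label + 1) % 10
direction = clf.coef_[target] - clf.coef_[true_label]

for eps in (0.05, 0.1, 0.2, 0.3):       # try progressively larger nudges
    x_adv = np.clip(x + eps * np.sign(direction), 0, 1)
    new_label = int(clf.predict([x_adv])[0])
    if new_label != true_label:
        print(f"eps={eps}: the model now sees a {new_label}, not a {true_label}")
        break
```

Because this toy model is linear, the best direction to push the pixels can be read straight off its weights; for deep networks, attackers compute an analogous direction from the network's gradient, which is the idea behind the "fast gradient sign" attacks in the research literature.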

Similar research from the Evolving Artificial Intelligence Laboratory at the University of Wyoming and Cornell University has produced a bounty of optical illusions for artificial intelligence. These psychedelic images of abstract patterns and colours look like nothing much to humans, but are rapidly recognised by the computer as snakes or rifles. They suggest how AI can look at something and be way off base as to what the object actually is.

This weakness is common across all types of machine learning algorithms. "One would expect every algorithm has a chink in the armour," says Yevgeniy Vorobeychik, assistant professor of computer science and computer engineering at Vanderbilt University. "We live in a really complicated multidimensional world, and algorithms, by their nature, are only focused on a relatively small portion of it." Vorobeychik is very confident that, if these vulnerabilities exist, someone will figure out how to exploit them. Someone likely already has.

Consider spam filters, automated programmes that weed out any dodgy-looking emails. Spammers can try to scale over the wall by tweaking the spelling of words ("Viagra" to "Vi@gra") or by appending a list of "good words" typically found in legitimate emails: words like, according to one algorithm, "glad", "me" or "yup". Meanwhile, spammers could try to drown out words that often pop up in illegitimate emails, like "claim", "mobile" or "won". (A toy version of this attack is sketched below.)

What might this allow scammers to one day pull off? That self-driving car hoodwinked by a stop sign sticker is a classic scenario that's been floated by experts in the field. Adversarial data might help slip porn past safe-content filters. Others might try to boost the numbers on a cheque. Or hackers could tweak the code of malicious software just enough to slip undetected past digital security.

Troublemakers can figure out how to create adversarial data if they have a copy of the machine learning algorithm they want to fool. But that's not necessary for sneaking through the algorithm's doors. They can simply brute-force their attack, throwing slightly different versions of an email or image or whatever it is against the wall until one gets through. Over time, this could even be used to generate a new model entirely, one that learns what the good guys are looking for and how to produce data that fools them.
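The good-word trick is simple enough to demonstrate end to end. Below is a deliberately tiny sketch, assuming a toy naive Bayes filter trained on six made-up emails (not any real spam product): appending words the filter has learned to associate with legitimate mail dilutes the spam evidence and flips its verdict.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Six made-up training emails: 0 = legitimate, 1 = spam.
ham = ["glad we could meet", "yup see you then", "thanks for the report"]
spam = ["you won a prize claim now", "cheap pills claim your offer",
        "won a free mobile claim today"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(ham + spam, [0, 0, 0, 1, 1, 1])

msg = "claim your prize now"
print(spam_filter.predict([msg])[0])    # 1 -- flagged as spam

# Drown out the spam evidence with "good words" from legitimate mail.
print(spam_filter.predict([msg + " glad yup thanks see you then"])[0])  # 0
```

A real filter is trained on millions of messages rather than six, but the underlying arithmetic is the same: each appended good word tilts the filter's probability estimate a little further toward "legitimate".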

(Image: Autonomous vehicles and surgical robots put a lot on the line, so modern machines leave little room for error. Credit: Getty Images)

"People have been manipulating machine learning systems since they were first introduced," says Patrick McDaniel, professor of computer science and engineering at Pennsylvania State University. "If people are using these techniques in the wild, we might not know it."

Scammers might not be the only ones to make hay while the sun shines. Adversarial approaches could come in handy for people hoping to avoid the X-ray eyes of modern technology. "If you're some political dissident inside a repressive regime and you want to be able to conduct activities without being targeted, being able to avoid automated surveillance techniques based on machine learning would be a positive use," says Lowd.

In one project, published in October, researchers at Carnegie Mellon University built a pair of glasses that can subtly mislead a facial recognition system, making the computer confuse actress Reese Witherspoon for Russell Crowe. It sounds playful, but such technology could be handy for someone desperate to avoid censorship by those in power.

In the meantime, what's an algorithm to do? "The only way to completely avoid this is to have a perfect model that is right all the time," says Lowd. Even if we could build artificial intelligence that bested humans, the world would still contain ambiguous cases where the right answer wasn't readily apparent.

Machine learning algorithms are usually scored by their accuracy. A programme that recognises chairs 99% of the time is obviously better than one that only hits the mark six times out of 10. But some experts now argue that we should also measure how well an algorithm can handle an attack: the tougher, the better.

Another solution might be for experts to put the programmes through their paces. Create your own example attacks in the lab based on what you think perpetrators might do, then show them to the machine learning algorithm. This could help it become more resilient over time, provided, of course, that the test attacks match the type that will be tried in the real world. (A minimal version of this idea is sketched at the end of this article.)

McDaniel suggests we consider leaving humans in the loop when we can, providing some sort of external verification that the algorithm's guesses are correct. Some intelligent assistants, like Facebook's M, have humans double-check and soup up their answers; others have suggested that human checks could be useful in sensitive applications such as court judgments.

"Machine learning systems are a tool to do reasoning. We need to be smart and rational about what we give them and what they tell us," he says. "We shouldn't treat them as perfect oracles of truth."
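As a closing illustration, here is what that put-it-through-its-paces defence can look like in miniature. This is my own sketch, reusing the linear digit model and attack direction from the earlier example; real systems would face stronger models and far stronger attacks.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X = X / 16.0

def attack(clf, X, y, eps=0.2):
    """Push every image toward the next class, as in the earlier sketch."""
    targets = (y + 1) % 10
    direction = clf.coef_[targets] - clf.coef_[y]
    return np.clip(X + eps * np.sign(direction), 0, 1)

clf = LogisticRegression(max_iter=1000).fit(X, y)
X_adv = attack(clf, X, y)
print("accuracy on attacked images:", clf.score(X_adv, y))

# The defence: retrain on the clean data plus correctly-labelled
# attacked copies, then re-test against the same attacks.
hardened = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y]))
print("hardened model on those attacks:", hardened.score(X_adv, y))
```

The hardened model does better here only because the test attacks are exactly the ones it trained against; as the article notes, a real attacker would adapt, so a fair evaluation would craft fresh attacks against the hardened model.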