Why did it do that? The (im?)possibility of deciphering actions of AI.


Description

When we see computers performing difficult tasks, we often assume they are "thinking" about the problem much as a human would. In reality, AI systems often solve problems in ways that are foreign to human intelligence. This leads to problems, particularly when we feel that the action or decision is wrong, and yet struggle for a basis to challenge it or a way to change it in future cases. This talk will describe why it is difficult to fully understand the reasons behind an action or decision made using modern machine learning, what we can do to better understand AI, and why we should – and shouldn't – care.

Location

Stewart 279

Start Date

11-6-2018 11:30 AM
