Episode 6: Understanding AI Decisions

With the advent of deep learning, AI systems have become increasingly difficult to pull apart to understand why a given decision was made. For example, think about a cat classifier. If you want to figure out what it was about an image that caused a deep neural network to classify it as a cat, you're going to have a very difficult time tracing the thousands of pixels in the image through thousands more nodes, connections, and nonlinear functions to figure out what was critical in pushing the decision toward "cat." Most researchers treat the networks they build and train as black boxes. But does that matter? We can see that deep learning works, so does it matter that we don't understand why it works? We may not care much for a cat classifier, but do we care when we're making medical decisions with life-and-death consequences? What about business decisions with millions of dollars on the line? Do we need to know why a decision was made in these cases?
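
One common way researchers try to peek inside this kind of black box is a gradient-based saliency map: ask how much the "cat" score would change if each pixel changed a little, and the pixels with the largest gradients are the ones that mattered most to the decision. The sketch below is only an illustration of that idea, not anything specific from the episode; it assumes a pretrained torchvision ResNet, a random tensor standing in for a real cat photo, and ImageNet class index 281 ("tabby cat").

```python
import torch
import torchvision.models as models

# Pretrained ImageNet classifier (downloads weights on first run).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a preprocessed cat photo: 1 image, 3 color channels, 224x224 pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)            # scores for the 1000 ImageNet classes
cat_class = 281                  # assumed label index for "tabby cat"
scores[0, cat_class].backward()  # gradient of the cat score w.r.t. every input pixel

# A large gradient magnitude means a tiny change to that pixel moves the
# cat score a lot, i.e. that pixel was important to the decision.
saliency = image.grad.abs().max(dim=1).values  # collapse the color channels
print(saliency.shape)            # torch.Size([1, 224, 224])
```

With a real photo, plotting the saliency tensor as a heatmap typically highlights the regions (ears, whiskers, fur texture) that pushed the network toward "cat," which is exactly the kind of partial answer to "why this decision?" the episode is about.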

Full Episode Here

Show notes

iTunes

Twitter @artlyintelly

Facebook

artificiallyintelligent1@gmail.com
