Tech Policy Podcast

385: AI Snake Oil

Episode Summary

Sayash Kapoor (Princeton) discusses the incoherence of precise p(doom) predictions and the pervasiveness of AI “snake oil.” Check out his and Arvind Narayanan’s new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Topics include:

- What’s a prediction, really?
- p(doom): your guess is as good as anyone’s
- Freakishly chaotic creatures (us, that is)
- AI can’t predict the impact of AI
- Gaming AI with invisible ink
- Life is luck—let’s act like it
- Superintelligence (us, that is)
- The bitter lesson
- AI danger: sweat the small stuff

Links:

- AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (https://tinyurl.com/4v3byma9)
- AI Existential Risk Probabilities Are Too Unreliable to Inform Policy (https://tinyurl.com/fdrcu5s6)
- AI Snake Oil (Substack) (https://tinyurl.com/2chwfrka)
