Metadata#
- Author(s): Emily Bender, Alex Hanna
- Number of pages: 288
- Year published: 2025
- Year read: 2025
Review#
As Weapons of Math Destruction was to the 2010s and basic machine learning, so this book is to the 2020s and “AI”.
So: I loved this book. It set my head straight on a lot of things I intuitively knew - as someone with enough expertise to understand how the “AI” sausages are made - but was struggling to articulate. And hype is strong! We are social apes! Like - yes - natural language processing (NLP) is amazing and gratifying and weird. (Text is such gnarly data!) But NLP has been eerily amazing for a long while now (I have felt the icy shivers for at least 10 years! tf-idf?! whaaat!?), and these stochastic parrots are eerily amazing too. But! Do they deserve all this hype? Where by “hype” the authors specifically mean: enormous venture capital investments, enormous carbon emissions and the re-jiggering of our energy infrastructure, and - perhaps worst of all? - figures of authority waving their arms at a vaguely-defined but definitely civilization-altering “Artificial General Intelligence (AGI)” that is always just beyond the horizon.
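(A quick aside, to give a taste of that pre-LLM magic - this is my own toy sketch, not from the book: tf-idf scores a word highly when it’s frequent in one document but rare across the corpus, and that counting trick alone powered eerily good search and ranking for decades.)

```python
import math
from collections import Counter

# Toy corpus for a from-scratch tf-idf: a word scores highly when it is
# frequent in one document but rare across the whole corpus.
docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "stochastic parrots mimic language",
]
tokenized = [d.split() for d in docs]

def tf_idf(term, doc_tokens, corpus):
    tf = Counter(doc_tokens)[term] / len(doc_tokens)   # term frequency in this doc
    df = sum(1 for d in corpus if term in d)           # docs containing the term
    return tf * math.log(len(corpus) / df)             # assumes term occurs somewhere

# "cat" shows up in two docs, so it scores lower than "parrots",
# which is distinctive to a single doc.
print(tf_idf("cat", tokenized[0], tokenized))      # ~0.068
print(tf_idf("parrots", tokenized[2], tokenized))  # ~0.275
```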
The authors argue - aggressively, spicily, wonderfully - that NO! This is all bananas! And I am 100% here for it.
First, they give one of the best plain-English primers on neural networks, n-grams, and embeddings. It’s one chapter, and it’s only semi-technical (intended for non-technical audiences), but it covers, imo, the main ideas in a very clear, comprehensive way. So bravi, there! They also offer refreshing clarity on defining “AI” - a term that is currently being abused in everyday conversation, but that actually covers several distinct fields in machine learning/comp sci: large language models (LLMs), OCR, computer vision, blah blah, I am tired of linking.
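(If you want the flavor of the n-gram idea their primer covers - again, my own toy sketch, not theirs - a bigram “language model” is literally just counting which word follows which, then sampling. The ancestral stochastic parrot:)

```python
import random
from collections import defaultdict, Counter

# Toy bigram "language model": count which word follows which, then sample.
# No understanding anywhere - just counts.
text = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def babble(word, n=6):
    out = [word]
    for _ in range(n):
        nexts = follows[out[-1]]
        if not nexts:
            break
        # Sample the next word in proportion to how often it followed.
        out.append(random.choices(list(nexts), weights=nexts.values())[0])
    return " ".join(out)

print(babble("the"))  # e.g. "the cat sat on the mat and"
```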
Rather than prognosticating about the future (and, indeed, notice how much AI hype is about the very near future… it’s just over the horizon, people!), they trace the history of AI (leveling some shots at Minsky and Hinton, wowza), the history of the Luddites, and the CURRENT practices: how LLMs are trained, how they are used RIGHT NOW, and how they are talked about. There is a lot about labor (outsourced content moderation is horrible indeed; your boss being sold AI to “boost productivity”, aka layoffs), training data bias (duh), and basically plugging the holes in our social safety net with word-prediction machines. All of this was stuff I knew, but they structured it in a clear and helpful way.
The one thing I did NOT know - and it blew my mind - was the linguistics and theory-of-mind stuff (Emily Bender is a linguistics prof at U of Washington). Basically, language involves a lot of “guessing what the other person is thinking/trying to say”. That’s why you can’t teach your baby Italian via TV (believe me, I’ve tried). It’s the interaction that matters. The social learning. Because LLMs are so good at sounding human, our brains naturally start to “fill in the blanks” about what they’re “thinking/trying to say”. This is also why people DO NOT ascribe cognition to “AI artists” when they look at those (frankly very tacky) DALL-E, Midjourney, genAI art outputs. No one thinks an “AI” was trying to “express its consciousness” - we see it as an obviously computer-generated, automated mish-mash of training inputs. But LANGUAGE. Our ape brains get real weird there. Hence all the flailing around “omg AGIIIII”.
Anyway, I loved this so much. Should be required reading for everyone in tech.