Word of the day: AI Hallucination

Recently, via this Ask Historians Reddit thread, I discovered the term "hallucination" in the context of artificial intelligence.

I’ve known about this concept for a while; neural-net machine translation will often produce “better English” at the cost of… yanno… accuracy. But all anyone ever sees is how clear and not-clunky the English sentence is! They ooh and aah over how magical the new tech is, but the problem of “did this actually translate CORRECTLY?” hasn’t gone away with the new technology; it’s as present as it ever was. And the pretty English outputs make us more likely to trust the imperfect tech, while the old “your purple cabbage grandmother” outputs gave us an appropriately healthy distrust of the machine.

That’s a suitable analogy for the rest of AI. It can be pretty. It’s not bad. But don’t think of it as the same kind of reliable as human-produced content; it’s not even the same kind of unreliable as human-produced content. Its errors come out fluent and confident, so the instincts we’ve built for spotting human mistakes don’t catch them. That’s the part that worries me most.

April 2023: Found this TikTok about AI-generated spaghetti photos and how they’re likely shaped by biased training data (the only spaghetti-eating photos going into the engine are probably of toddlers making a cute mess). As she says, “AI is an Ask The Audience robot, and the Audience is the general Internet-using public.”