Position

Keep It Hyperreal

Ron Mueck, Mask II, 2001–02. A hyperrealistic sculpture of a sleeping human face, vastly oversized, resting on a gallery plinth. Photo: Jack1956 via Wikimedia Commons

And the signifieds butt heads

With the signifiers

And we all fall down slack-jawed

To marvel at words!

When across the sky-sheet the

Impossible birds, in a steady,

Illiterate movement homewards.

—Joanna Newsom, “This Side of the Blue”

In his 1981 book Simulacra and Simulation, Jean Baudrillard claimed that the map no longer follows the territory. Instead, the map produces it. (George Soros called the same mechanism “reflexivity,” and made a few billion dollars off it.)

A large language model starts with every map ever drawn and produces new maps from the patterns. The output is fluent but unanchored. Baudrillard described this condition forty years ago, but he could not have anticipated how AI would accelerate it.

AI-generated content is increasingly mixed into the training data for the next generation of models, which produce content that feeds the generation after that. Researchers have shown that this recursive loop degrades output quality over time—rare patterns vanish, variance narrows, and the models converge on a flattened approximation of their original training data. They call this model collapse. Baudrillard might have recognized something familiar: a simulation that no longer needs reality as a referent, sustaining itself on its own outputs.

Baudrillard described four stages in the life of a representation: first it reflects reality, then distorts it, then masks the absence of it, then has nothing to do with it at all. He called that fourth stage the hyperreal. The hyperreal is neither true nor false, because there is no underlying truth to compare it to.

A model trained on the output of models is a machine for manufacturing the hyperreal. Each generation looks plausible, but is less tethered to anything that happened. Given enough cycles, the corpus refers only to itself.
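The dynamic is easy to see in a toy simulation (an illustration of the idea, not the published model-collapse experiments): repeatedly fit a Gaussian to a small corpus, then throw the corpus away and regenerate it by sampling from the fit. Estimation noise compounds across generations, rare values stop being sampled and so stop being fitted, and the spread of the data collapses.

```python
import random
import statistics

def next_generation(corpus, n=20):
    # "Train" on the current corpus: fit a Gaussian by estimating mean and spread.
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    # "Generate" the next corpus by sampling only from the fitted model.
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
# Base reality: the one corpus actually drawn from the territory.
corpus = [random.gauss(0.0, 1.0) for _ in range(20)]

spreads = [statistics.stdev(corpus)]
for _ in range(1000):
    corpus = next_generation(corpus)
    spreads.append(statistics.stdev(corpus))

# Each round loses a little variance to sampling noise and never gets it back,
# so the corpus narrows toward a flat approximation of its starting point.
print(f"spread of the original data:   {spreads[0]:.4f}")
print(f"spread after 1000 generations: {spreads[-1]:.2e}")
```

A tiny corpus makes the collapse fast; a larger one only delays it, because without fresh data from outside the loop the drift runs in one direction.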

Ask someone what they believe, and the question doesn't access a belief already sitting there; it creates one. Anyone who has objected "That's not quite what I meant!" knows this.

Before AI, it was easy to live with this slippage because the feedback loop was short: say something wrong, and someone who was actually there corrected you. Now the loop is open-ended.

Maps of maps of maps. You can generate a thousand slides about public sentiment without a single person in the loop. We must build the constraint back in. We must make it easy both to ask real people what they think and to watch what they actually do.

The risk is that we stop touching base reality because the maps have gotten so cheap and so fluent that they feel like enough.

Do you see the birds?

