Position
Wake Up

Andy Hall’s “Building the Truth Machine” is one of the sharpest things written about prediction markets this year. His diagnosis is correct: the political markets on Kalshi and Polymarket are ghost towns. Only 1.3% are liquid enough to trust. The platforms fragment what little liquidity exists. The resolution rules are a mess — ask anyone who bet on whether Cardi B “danced” at the Super Bowl.
Hall wants to fix this. So do we. But his blueprint assumes the financial market is the right engine for a truth machine. We think the engine itself is the problem.
Money talks. It also lies.
The prediction market thesis is elegant: people don’t lie with their money. Kalshi’s CEO says it plainly. It’s a compelling line. It’s also incomplete.
Money doesn’t eliminate bias. It introduces new ones.
Who shows up. The people trading political contracts on Polymarket are not a representative sample of anything. They skew young, male, crypto-native, and extremely online. Hall’s hypothetical New York business owner—the one trying to forecast Mamdani’s tax hike—is not on Polymarket. She’s running her business. The signal in her head never reaches the market.
Manipulation. Hall documents this himself: most political markets are so thin that moving the price five percentage points costs less than dinner at Carbone. When CNN broadcasts these prices to millions, the incentive to move the number can outweigh the incentive to inform it. That’s not a bug to be patched. It’s a structural feature of routing information through a financial instrument.
The contract itself. Prediction markets need binary questions with clean resolution dates. Will the bill pass? Yes or no. But the questions that matter most resist this format. How intensely do voters care about immigration? How will tariff fears actually change purchasing behavior? What does the electorate really prioritize versus what it tells pollsters? You can’t write a contract for any of this.
Stop asking people what they think. Watch what they do.
The deepest problem with prediction markets isn’t liquidity or fragmentation. It’s that they still rely on what people say—they just add a price tag. A trader on Kalshi is still expressing an opinion. He might have money behind it, but it’s still a stated preference filtered through his particular worldview, information diet, and incentive structure.
We collect opinions too. But we don’t stop there.
Here’s what we actually do: we run massive, ongoing, live in-market tests. We put carefully designed stimuli—messages, offers, framings, provocations—in front of real people in the real market, in real time. Then we watch what they do. Not what they say they’d do. Not what they’d bet on. What they actually do when confronted with a choice.
Click or scroll past. Engage or ignore. Convert or bounce. Share or suppress. These are revealed preferences at scale—thousands of concurrent experiments generating behavioral data that no survey, no prediction market, and no AI model can hallucinate into existence.
This is the layer that makes a truth machine actually work. Surveys tell you what people claim to believe. Prediction markets tell you what a self-selected group of traders will wager. Behavioral observation tells you what’s real. We run all three—continuous surveys to generate hypotheses, live behavioral experiments to test them, and a validation layer that compares the two—but it’s the behavioral layer that settles the argument.
No thin markets. No selection bias. No binary constraints. Just cold, hard observational data at scale.
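The behavioral layer described above can be pictured as an event log aggregated per stimulus: every impression records which message was shown and what the person did next. The sketch below is a toy illustration of that idea; the names, event types, and counts are all hypothetical, not any real pipeline.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    """Illustrative behavioral outcomes: what a person did when shown a stimulus."""
    CLICK = "click"
    SCROLL_PAST = "scroll_past"
    CONVERT = "convert"
    SHARE = "share"

@dataclass(frozen=True)
class Event:
    stimulus_id: str  # which message or framing was shown (hypothetical IDs)
    action: Action

def engagement_rates(events: list[Event]) -> dict[str, float]:
    """Share of impressions per stimulus that led to any action beyond scrolling past."""
    shown: Counter[str] = Counter()
    engaged: Counter[str] = Counter()
    for e in events:
        shown[e.stimulus_id] += 1
        if e.action is not Action.SCROLL_PAST:
            engaged[e.stimulus_id] += 1
    return {s: engaged[s] / n for s, n in shown.items()}

# Hypothetical event log for two competing framings
events = [
    Event("framing-a", Action.CLICK),
    Event("framing-a", Action.SCROLL_PAST),
    Event("framing-b", Action.CONVERT),
    Event("framing-b", Action.SCROLL_PAST),
    Event("framing-b", Action.SCROLL_PAST),
]
print(engagement_rates(events))  # → {'framing-a': 0.5, 'framing-b': 0.3333...}
```

The point of the structure is that the unit of data is an observed action, not a stated opinion; everything downstream (which framing wins, how deep support runs) is computed from what people did.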
This applies to politics right now
Take Hall’s Mamdani example. A prediction market asks: will the tax hike pass? Yes or no.
That’s the least interesting question. What actually matters: How do voters feel about it? Is support deep or shallow? Does it survive the argument that companies will leave? What do people say they’ll do versus what they actually do when presented with competing framings of the policy?
We can answer this. We design stimuli that mirror the real arguments—the tax-the-rich framing, the companies-will-flee framing, the fiscal-responsibility framing—put them in front of real New Yorkers, and observe how behavior shifts. Not hypothetically. In the market. Right now.
No prediction market can touch this. The format doesn’t allow it.
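The comparison of framings described above is, at its core, a classic two-arm experiment: show each framing to a comparable audience and test whether the conversion rates differ. Here is a minimal sketch using a standard two-proportion z-test; the arm names and counts are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Arm:
    """One framing of the message, with hypothetical impression/conversion counts."""
    name: str
    impressions: int
    conversions: int

    @property
    def rate(self) -> float:
        return self.conversions / self.impressions

def two_proportion_z(a: Arm, b: Arm) -> float:
    """Z-statistic for the difference in conversion rates, using a pooled estimate."""
    p_pool = (a.conversions + b.conversions) / (a.impressions + b.impressions)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / a.impressions + 1 / b.impressions))
    return (a.rate - b.rate) / se

# Invented numbers for two framings of the same policy argument
tax_the_rich = Arm("tax-the-rich", impressions=10_000, conversions=420)
firms_flee = Arm("companies-will-flee", impressions=10_000, conversions=350)

z = two_proportion_z(tax_the_rich, firms_flee)
print(f"{tax_the_rich.name}: {tax_the_rich.rate:.1%} vs "
      f"{firms_flee.name}: {firms_flee.rate:.1%}, z = {z:.2f}")
```

A |z| above roughly 1.96 indicates a difference unlikely to be noise at the 5% level. The real machinery involves thousands of concurrent arms and continuous monitoring, but the logic is this: the verdict comes from observed behavior, not from a contract price.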
The same logic applies to indices like the Michigan Consumer Sentiment Index. Published monthly. Backward-looking. Built entirely on what people said they felt about the economy four weeks ago. Not whether those feelings predicted a single real-world action. We think there’s a better way.
Reality check
Hall proposes four fixes for prediction markets: better questions, subsidized liquidity, AI traders, standardized rules. Good ideas, all of them. But they’re patches on a system that never touches the real world.
Prediction markets are a derivative—a financial abstraction layered on top of human opinion. Surveys are an abstraction too. Synthetic models, including ours, are abstractions. Consumer sentiment indices are abstractions. Derivatives of derivatives, all the way down.
Abstractions are useful. We use them every day. They speed thinking, surface patterns, sharpen hypotheses. But simulations alone are the pod. And so is any derivative market that mistakes its own internal coherence for contact with reality. Comfortable. Self-reinforcing. And untethered.
What breaks you out is behavior. Real people making real choices observed at scale, in the market, right now. That’s the signal that calibrates everything else—that tells you which models to trust and which ones have been lying to you. Economists track revealed preference. Doctors run trials. Investors watch the tape. Every serious discipline eventually demands the same thing: show me what happened in the real world.
The truth machine Hall describes is worth building. But you don’t build it by stacking better abstractions on top of old ones. You build it by insisting, stubbornly, at every layer, that the model answer to observed reality.
It’s time to wake up.