Generative R&D and the "Say–Do Gap"

Generative R&D uses live experiments to measure where stated intent breaks from real behavior. Image generated by ChatGPT.

Market research has always struggled with a basic problem: it measures what people say, not what they do.

For years, that problem was tolerable. Surveys were slow, expensive, and imperfect, but they were often the only practical way to gather signal at scale. The industry learned to compensate. It used large sample sizes, weighting schemes, calibration methods, and increasingly fine segmentation to bridge the gap between stated preferences and actual behavior.

The system worked well enough if you did not look too closely. But the gap remained.

Most recent "innovation" continues to ignore this structural problem. End-to-end platforms streamline execution. AI accelerates analysis. Synthetic personas repackage survey responses as behavioral models. The work moves faster and looks more sophisticated, but it starts with the same flawed input: what people say they will do.

We have built a faster engine when the real problem is that we are headed in the wrong direction.

Adding Behavior to the Model

We built Generative R&D to introduce observed behavior into the research process.

Generative R&D runs live market experiments. It does not replace surveys, conjoint analysis, or qualitative research. It tests whether their findings hold up when people face real choices. By default, our platform uses Generative R&D as part of a unified Bayesian workflow to experimentally validate priors — allowing you to reason probabilistically about hard-to-measure questions.
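To make "experimentally validate priors" concrete, here is a minimal sketch of one such Bayesian update. All names and numbers are hypothetical illustrations, not the platform's actual model: a survey-derived belief is encoded as a Beta prior, and observed in-market choices update it via a standard Beta-Binomial posterior.

```python
# Sketch: updating a survey-based prior with observed in-market behavior.
# The prior encoding and all counts below are hypothetical.

def beta_posterior(prior_successes: float, prior_failures: float,
                   observed_successes: int, observed_trials: int):
    """Beta-Binomial conjugate update: returns the posterior (alpha, beta)."""
    alpha = prior_successes + observed_successes
    beta = prior_failures + (observed_trials - observed_successes)
    return alpha, beta

# Suppose a survey suggests ~60% of customers would accept a sustainability
# premium; encode that as a Beta(12, 8) prior (mean 0.60, modest confidence).
# A live experiment then observes 90 acceptances out of 400 real choices.
alpha, beta = beta_posterior(12, 8, observed_successes=90, observed_trials=400)

posterior_mean = alpha / (alpha + beta)  # behavior pulls the estimate down
print(f"posterior acceptance rate: {posterior_mean:.3f}")
```

The point of the sketch is the direction of the update: when stated preference (the prior) and observed behavior (the data) disagree, the posterior quantifies how far, rather than forcing a choice between them.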

Teams use Generative R&D alone when behavior is the primary question. More often, they use it to validate existing work before conclusions solidify.

How Generative R&D Works

1. Start With a Real Tradeoff

Each study begins with a decision that already appears in conventional research:

  • Will customers accept a price premium for sustainability?
  • Do patients choose speed or provider quality when booking care?
  • Will buyers prefer automation or configurability in enterprise software?
  • Does delivery time outweigh price for repeat e-commerce customers?

Generative R&D tests these questions with real behavior, at scale, using real people rather than professional survey panelists.

2. Translate Assumptions Into a Test Matrix

Each decision breaks into explicit attributes: price, speed, quality, durability, convenience. These dimensions form a test matrix of competing value propositions.

This mirrors the logic of conjoint analysis, but the test runs in the market itself.
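The matrix construction itself is simple to sketch. The attribute names and levels below are hypothetical; the point is that every combination of levels becomes one competing value proposition to test:

```python
# Sketch: expanding explicit attributes into a test matrix of value
# propositions. Attribute names and levels are hypothetical examples.
from itertools import product

attributes = {
    "price":      ["budget", "premium"],
    "speed":      ["standard", "express"],
    "durability": ["standard", "rugged"],
}

# Each cell of the matrix is one value proposition (one ad/landing variant).
test_matrix = [dict(zip(attributes, levels))
               for levels in product(*attributes.values())]

print(len(test_matrix))  # 2 * 2 * 2 = 8 propositions
```

As in conjoint analysis, the full factorial grows quickly with attributes and levels; in practice a fractional subset of cells would be fielded.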

3. Run Guided, In-Market Funnels

Generative R&D generates digital ads and landing pages for different permutations of the matrix.

The ads are not deceptive. They do not promote fake products. They guide people through decisions they are already considering. The guide is AI-generated, but the information is real.

A mountain bike study, for example, might lead to a page organized around:

  • Ultra-light mountain bikes
  • Affordable mountain bikes
  • Durable mountain bikes
  • Compact mountain bikes

Each section isolates a value dimension that would otherwise be bundled into a survey question.

4. Observe What People Do

The system measures which messages attract attention, which paths users follow, and where interest drops off.

The landing pages resemble surveys in structure but not in experience. Respondents navigate options rather than answer questions. This reduces observation bias and post-rationalization while preserving analytical clarity.
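Drop-off measurement reduces to stage-to-stage continuation rates. A minimal sketch, with hypothetical stage names and counts:

```python
# Sketch: locating where interest drops off in a guided funnel.
# Stage names and visitor counts are hypothetical.

funnel = [("ad_impression", 10_000),
          ("ad_click", 400),
          ("landing_section_view", 260),
          ("detail_view", 90)]

# Continuation rate between each adjacent pair of stages.
rates = [(stage, nxt, next_n / n)
         for (stage, n), (nxt, next_n) in zip(funnel, funnel[1:])]

for stage, nxt, rate in rates:
    print(f"{stage} -> {nxt}: {rate:.1%} continue, {1 - rate:.1%} drop off")
```

The stage with the steepest drop is where a value proposition loses people, which is exactly the signal a stated-preference survey cannot provide.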

Scale, Access, and Speed

Because Generative R&D runs as digital ads, it inherits all their targeting capabilities.

Experiments deploy globally, down to a country, city, county, ZIP code, or a latitude–longitude point with a defined radius. The system works anywhere there is demand and attention.

This approach reaches audiences that surveys cannot: high-income earners and time-constrained professionals who skip panels but make real decisions online daily.

Generative R&D is faster and often cheaper than traditional surveys. More importantly, the data is fundamentally different. It is behavioral, not self-reported. It comes from people acting in context, not panelists gaming completion incentives.

How It Fits Into Existing Work

Generative R&D slots cleanly into existing research workflows.

Some teams use it early to shape hypotheses before running large surveys. Others use it late to validate whether stated preferences hold in practice. Many run it in parallel to catch where narrative and behavior split.

It is a market probe, a validation layer, or both.

Making the Gap Measurable

Market research will keep relying on stated preference. Those tools work and everyone knows how to use them.

What has been missing is a clean way to see where preferences break down.

Generative R&D provides that. It shows which attributes pull attention, which tradeoffs people accept, and which claims collapse when cost or effort enters the frame.

The gap between what people say and what they do has always existed. Now it is measurable.

What are you waiting for?