Incrementality testing in marketing: What you need to know

Updated on April 13, 2025

Incrementality testing in marketing has long been called the gold standard for measuring true impact. But does that still hold up in 2025?

The question is especially relevant because the digital reality is continuously shifting. Consumer journeys are fragmented, third-party cookies are disappearing, and walled gardens limit visibility. 

Can brands still rely on slow, expensive, and often flawed incrementality tests to make critical budget decisions? And if incrementality testing is no longer the best solution, then what is? Let’s find out.

Key highlights:

  • Incrementality testing in marketing is a slow, drawn-out process for understanding the impact of your marketing efforts.
  • For modern marketers, the drawbacks of incrementality tests often outweigh the benefits.
  • Marketers need cross-channel media analysis with an MMM platform like Keen to understand the true incremental lift of their efforts.

What is incrementality testing?

Incrementality testing is a method for measuring whether your marketing actually drives results. It works by running an A/B test that splits your audience into two groups:

  • Test group: Exposed to your ads or marketing efforts.
  • Control group: Not exposed, to establish the “natural” (baseline) conversion rate.

By comparing the conversion rates between the two, you can estimate the incremental volume—the sales or engagement directly caused by marketing.
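As a rough illustration, the comparison above can be sketched in a few lines. All the numbers here are hypothetical, purely to show the arithmetic:

```python
# Minimal sketch of estimating incremental lift from an A/B split.
# The audience sizes and conversion counts are made-up examples.

def incremental_volume(test_conversions, test_size,
                       control_conversions, control_size):
    """Estimate the conversions attributable to marketing exposure."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size  # the "natural" baseline
    lift = test_rate - control_rate                    # incremental conversion rate
    return lift * test_size                            # incremental conversions

# Example: 5% conversion in the exposed group vs. a 3% baseline
volume = incremental_volume(5_000, 100_000, 3_000, 100_000)
print(round(volume))  # 2000 incremental conversions
```

In practice you would also run a significance test on the rate difference before acting on it, since small lifts on small samples are often just noise.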

Read more: What is incrementality in marketing?

How does incrementality testing benefit marketers?

At a high level, incrementality in marketing helps you understand whether your campaigns are truly driving results. Unlike traditional attribution models—which simply assign credit to touchpoints—incrementality tests attempt to answer a more fundamental question:

Would this conversion have happened without my marketing campaign?

If the answer is no, then your campaign created incremental lift, meaning it had a direct impact on revenue, sign-ups, or other business metrics like incremental ROAS.

But here’s the catch: Running these tests properly is complex, expensive, and slow. Even when done right, the results aren’t always reliable. A research paper hosted on Cornell University’s arXiv shows that incrementality estimates can be wildly inaccurate.

Read more: How to measure your incremental media 

Different ways to run marketing incrementality tests 

Marketers have different ways to measure incrementality, each with its own methodology and best use cases. The four most common approaches are:

1. Geo holdout testing

Geo holdout tests split geographic regions into test and control groups. Ads are shown in one location while another similar market is kept ad-free. By comparing performance, marketers attempt to measure the campaign’s true impact. 

Commonly used for: Regional advertising and TV campaigns

2. Audience-based testing (platform-run tests)

Many ad platforms, such as Facebook and Google, allow marketers to create exposed and control groups within their ecosystems. A portion of the audience is served ads, while another similar group is withheld from seeing them. The results are only applicable within the platform running the test.

Commonly used for: Digital and retail media measurement

3. Time-based holdout testing

Instead of splitting an audience, the time-based holdout approach pauses campaigns for a set period and then restarts them to measure the difference in conversion rates. 

Commonly used for: Evaluating flighting advertising schedules for paid search, social, and display ads
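The readout of a time-based holdout boils down to comparing performance in the paused window against the active window. A hedged sketch with invented daily conversion counts:

```python
# Hypothetical time-based holdout readout: average daily conversions
# while the campaign was paused vs. while it ran. Real analyses must
# also control for seasonality and other time-varying factors, since
# the two windows are, by construction, different periods.
paused_days = [110, 105, 98, 112, 101]   # conversions per day, campaign off
active_days = [150, 142, 160, 155, 148]  # conversions per day, campaign on

baseline = sum(paused_days) / len(paused_days)
exposed = sum(active_days) / len(active_days)
print(f"estimated daily lift: {exposed - baseline:.0f} conversions")  # 46
```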

4. Synthetic control methods (machine learning models)

This advanced approach creates a virtual control group by predicting what performance would have been without the incremental spend. By comparing actual results against these projections, marketers estimate the added lift. 

Commonly used for: Areas where holdouts aren’t feasible, such as for national campaigns or high-spend media channels
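A rough sketch of the synthetic-control idea, assuming you have weekly sales for the treated market and several untreated “donor” markets. Here the weights are fit on the pre-campaign period with ordinary least squares on simulated data; production implementations typically constrain the weights to be non-negative and sum to one:

```python
# Hedged sketch of a synthetic-control estimate on simulated data.
import numpy as np

rng = np.random.default_rng(0)
weeks_pre, weeks_post = 20, 8

# Simulated data: three donor markets, plus a treated market that
# historically tracks a weighted blend of them.
donors = rng.normal(100, 5, size=(weeks_pre + weeks_post, 3))
treated = donors @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1, weeks_pre + weeks_post)
treated[weeks_pre:] += 12  # the campaign adds lift in the post period

# Fit blend weights on the pre-campaign period only
w, *_ = np.linalg.lstsq(donors[:weeks_pre], treated[:weeks_pre], rcond=None)

# Counterfactual: what the treated market "would have done" without the campaign
counterfactual = donors[weeks_pre:] @ w
lift = (treated[weeks_pre:] - counterfactual).mean()
print(f"estimated weekly lift: {lift:.1f}")  # close to the true lift of 12
```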

Drawbacks of incrementality modeling

While incrementality testing offers a more scientific approach to data-driven marketing than traditional attribution models, it’s far from perfect. In theory, it helps prove causality. But in reality, these tests often introduce their own biases, operational challenges, and limitations.

As Jesse Math, VP of Strategic Partnerships at Keen, says: 

“There is a difference between incrementality and causality. Just because you saw a lift in the test group doesn’t mean it was actually the media that drove that lift.”

The problem is that marketers often confuse incrementality with causality. They take incrementality tests at face value without evaluating drawbacks such as:

  1. Bias in platform-based tests
  2. Influence of external factors
  3. Disruption in marketing execution
  4. Disregard for long-term brand growth metrics
  5. Impracticality of slow and expensive tests

Here’s how these problems impact your marketing efforts:

1. Platform bias skews incremental lift analysis

Many marketers rely on platform-based incrementality tests (for example, Facebook’s Conversion Lift and Google’s Brand Lift) because they’re easy to set up. However, these tests often produce inflated and misleading results due to built-in biases in audience selection. 

In fact, an academic paper by Braun and Schwartz shows that platforms like Google and Meta don’t create truly randomized test and control groups. Instead, they assign:

  • High-intent users (who are most likely to convert) to the exposed group.
  • Low-intent users (who were less likely to convert anyway) to the control group.

This artificially increases incremental lift and makes ads seem more effective than they actually are. Here’s a figure from the academic paper that shows the bias—every colored dot represents a user, and the circle captures the test group for an ad:

Diagram showing skewed platform incrementality testing

Bottom line: Platform-run tests often serve the platform’s interests rather than providing objective results. Marketers relying on these tests may end up overinvesting in channels that aren’t actually driving true incremental impact.

2. External factors impact incrementality tests

Geo-based incrementality testing assumes that comparable geographic regions will behave similarly, but in reality, markets are never identical. Results can be skewed by:

  • Economic differences: One region might experience a local recession or an industry boom that skews test results.
  • Competitor actions: A competitor launching a major sale in the control region could distort the findings.
  • Cultural differences: Consumer behavior varies by location (say, urban vs. rural shopping patterns).
  • External disruptions: Weather, local events, sports victories, or political shifts can all influence consumer buying behavior independently of your marketing.

But these uncontrollable factors don’t just impact geo-based tests—they also introduce bias in statistical models used for incrementality analysis. A research study found that traditional observational methods inflated conversion lift estimates by nearly 6x, making campaigns seem far more effective than they actually were.

Bottom line: You can’t fully control external factors, which leads to flawed incrementality models. As a result, you risk misallocating budgets and overestimating the true impact of your campaigns.

3. Incrementality measurement disrupts marketing execution

Most incrementality tests require pausing campaigns, restricting audiences, or withholding ads in key markets—all of which interrupt your marketing mix strategy and may hurt business performance.

What pausing campaigns during the testing phase looks like:

  • Geo-based tests require pausing spend in certain markets, which may cause short-term revenue losses.
  • Time-based holdouts (pausing campaigns for weeks/months) can slow momentum and hurt long-term brand growth.
  • Audience holdouts limit reach—by not showing ads to a portion of your target audience, you might miss valuable conversions.

Bottom line: Most marketers can’t afford to pause ads or markets for weeks or months. Incrementality testing often creates more problems than insights.

4. Incrementality analysis focuses on short-term impact, not long-term brand growth

Incrementality tests measure whether an ad drives immediate conversions, but they don’t account for long-term marketing effects like:

  • Brand awareness ROI: If your ads increase searches for your brand, how do you measure that impact?
  • Word-of-mouth and referrals: If someone sees your ad but purchases later due to a friend’s recommendation, does the test count it?
  • Lifetime value (LTV): Incrementality tests rarely measure repeat purchases or long-term customer engagement.

Bottom line: Incrementality testing is transactional—it focuses on immediate lift, ignoring the brand-equity effects that influence future conversions. If a brand shifts budget away from awareness-driving campaigns, it may harm long-term revenue growth.

5. Slow and expensive tests are impractical at scale

Incrementality tests take weeks or months to execute and require significant investment to gather enough data. For example:

  • Running geo tests requires maintaining ad spend in multiple regions (often with zero immediate ROI).
  • Tests often take 4-12 weeks to produce statistically significant results, delaying marketing decisions.
  • For unified marketing measurement, you’d have to run dozens of tests per year, which isn’t feasible for most businesses.

Solve incrementality marketing measurement challenges with Keen

Incrementality testing in marketing was designed for a different era, when digital advertising was less complex. The challenges look completely different today. So, you need a privacy-first, cross-channel, real-time, and scalable marketing measurement solution.

Keen’s MMM platform delivers exactly this. With results based on $8 billion in media spend data, Keen helps you:

  • Measure impact without pausing campaigns with our patent-pending marketing elasticity engine (MEE)
  • Quantify halo effects across channels
  • Get accurate, independent insights with no platform bias
  • Adapt to market changes instantly with continuously updated models 

But what if your business is currently heavily reliant on incremental tests? 

With our partnerships, we can support your testing needs. And since Keen is an interoperable system, you can override our model priors with your test findings.

Request a demo to see how Keen can help your incrementality measurement.

Ready to transform your marketing strategy?