Google Just Made MMM Free. That Is Not the Hard Part.

Updated on February 26, 2026

Google releasing a no-code Scenario Planning interface for Meridian validates something Keen has been saying for 15 years: forward-looking decisions matter more than historical report cards. The most important question in marketing is not “what happened.” It is “what should we do next.” The fact that Google is investing in planning on top of measurement is the right instinct.

But actionability requires trust in the model. And trust in the model requires good priors. That is where this gets hard. Anyone can build a model. The question is how you know the model has been built right. For you. And right now, the path Google prescribes to get there introduces more friction than it removes.

This Is a Validation of Planning, Not Just Modeling

The Adweek piece frames this launch around making Meridian’s MMM insights more accessible. That is true, but the bigger signal is subtler. Google’s Harikesh Nair described the tool as transforming the conversation from looking back at what happened to planning for what is next. That framing matters more than the model underneath it.

For years, the industry has treated measurement as the end product. Build a model. Produce a report. Present the findings. Debate the findings. Repeat. What Google is signaling with Scenario Planning is that measurement without a path to action is incomplete. And they are right.

Keen has operated on this principle since the beginning. Measurement is an input, not an output. The output is a decision: what to invest in, when, at what level, and with what expected return. That is the shift the industry needs. The question is whether the infrastructure behind this tool can actually deliver it.

On the Bayesian Point: Yes, We Agree

Separate from the planning question, the fact that Meridian uses Bayesian inference is a validation of the modeling approach Keen has advocated for over 15 years. Meridian and its predecessor, LightweightMMM, both use Bayesian frameworks. The industry is converging on this methodology because it allows you to blend prior knowledge with observed data, quantify uncertainty, and update as new information comes in. Keen runs a multiplicative Bayesian econometric model across hundreds of brands. We believe this is the right foundation.

But the model type alone does not solve the problem. Bayesian models are only as good as the data they are trained on. That training data is what calibrates the model for a specific brand. Without robust training data, a model struggles with collinearity: it cannot cleanly separate the performance of channels that are often flighted together. Get that wrong, and every recommendation downstream is compromised.
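
To make the collinearity point concrete, here is a minimal sketch in Python. The data, the benchmark priors, and the simple additive linear model are all illustrative assumptions (Keen's model is multiplicative, and this is not Meridian's implementation). Two channels are flighted almost identically; an ordinary least-squares fit struggles to split credit between them, while a Bayesian fit with informative priors stabilizes the split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two channels flighted almost identically, so their weekly spend is
# nearly collinear.
weeks = 104
tv = rng.gamma(shape=2.0, scale=50.0, size=weeks)
digital = 0.8 * tv + rng.normal(0.0, 5.0, size=weeks)
X = np.column_stack([tv, digital])

# "True" incremental sales per dollar, unknown to the modeler.
true_beta = np.array([2.0, 1.0])
sales = X @ true_beta + rng.normal(0.0, 200.0, size=weeks)

# Ordinary least squares: with near-collinear columns, the split of
# credit between the two channels is unstable.
beta_ols, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Bayesian linear regression with informative Gaussian priors (think
# cross-brand benchmarks). The posterior mean is a precision-weighted
# blend of the prior and the data, which stabilizes the split.
sigma2 = 200.0 ** 2                    # assumed observation noise variance
prior_mean = np.array([1.8, 1.2])      # benchmark prior per channel
prior_cov = np.diag([0.5, 0.5]) ** 2   # confidence in those benchmarks
prior_prec = np.linalg.inv(prior_cov)

post_prec = X.T @ X / sigma2 + prior_prec
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ sales / sigma2 + prior_prec @ prior_mean)

print("OLS estimates:       ", np.round(beta_ols, 2))
print("Posterior estimates: ", np.round(post_mean, 2))
print("Posterior std devs:  ", np.round(np.sqrt(np.diag(post_cov)), 2))
```

The point of the sketch is not the exact numbers. It is that when two channels move together, the data alone cannot tell the model how to divide credit, and the quality of the priors decides whether the answer is sensible.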

The Actionability Problem: Priors and the Data Gap

Here is where the promise of actionability runs into the reality of the data problem. Every Bayesian model needs priors. These are the starting assumptions about how each marketing channel performs before your brand-specific data starts updating the picture. The quality of those priors determines whether the model produces something useful or something dangerously misleading.

Meridian’s own documentation is transparent about this challenge. It states that the ROI measured by an experiment never aligns perfectly with the ROI measured by MMM, because experiments are always tied to specific conditions such as the time window, geographic regions, and campaign settings. Translating experiment results into an MMM prior, Google notes, involves an additional layer of uncertainty beyond the experiment’s standard error alone. That is a candid and important admission.
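
One common way to handle that extra layer, sketched below with made-up numbers, is to widen the prior beyond the experiment's standard error with an explicit transportability term. This is an illustration, not a procedure Google prescribes, and the transport value is an assumption.

```python
import math

# Hypothetical geo-experiment readout for one channel (made-up numbers).
experiment_roi = 1.40   # point estimate from the holdout test
experiment_se = 0.25    # standard error from the experiment alone

# Additional uncertainty from transporting that result to the MMM's time
# window, geography, and campaign settings. This value is an assumption
# for illustration, not something Google or Meridian publishes.
transport_sd = 0.35

# The prior carries both sources of uncertainty, not just the
# experiment's standard error.
prior_mean = experiment_roi
prior_sd = math.sqrt(experiment_se ** 2 + transport_sd ** 2)

print(f"Prior on channel ROI: mean {prior_mean:.2f}, sd {prior_sd:.2f}")
```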

So Google’s prescribed path to better priors is to run experiments. Lots of them. But experiments are costly, time-consuming, and complex. They require holdout regions, clean test designs, and enough statistical power to produce reliable results. Even if your data were ready, which is already a major roadblock, the need for ongoing experimentation introduces another layer of delay before you get to a trustworthy output. That is not adding speed to value. That is adding a new bottleneck.

And here is the part that does not get discussed enough: we are finding that many brands running experiments are running bad experiments. Poorly designed holdout tests. Confounded results. Time windows that do not align with modeling periods. Those flawed results get fed into priors, and now the model is confidently wrong. You have not reduced uncertainty. You have laundered bad data into a framework that looks rigorous. Garbage in, garbage out, wrapped in Bayesian math.

The Data Problem Is Not Going Away Overnight

The IAB’s State of Data 2026 report, released just weeks ago, surveyed over 400 senior decision-makers and found that 60% to 75% of buy-side users say their current measurement approaches fall short on rigor, timeliness, trust, and efficiency. Not a single respondent said their MMM covers all paid media channels. The report explicitly noted that teams waste time stitching together fragmented data instead of generating insights. 77% of marketers concede that gaming is underrepresented in their models. About half say commerce media and the creator economy are overlooked. 41% believe CTV is getting missed. 

This is the environment into which Google is launching a self-serve planning tool. The data infrastructure underneath most brands is not ready. The taxonomy is inconsistent. The naming conventions across platforms do not match. Retail media exports are mislabeled. Trade spend sits in formats no model can ingest.

Good priors are what create value out of poor data. They are the bridge between where brands are today and where they need to be. If you do not have good priors, the data problem does not just persist. It compounds. Every channel interaction the model cannot properly resolve, every flighting pattern it cannot untangle, every collinearity problem it cannot parse, flows directly into flawed planning recommendations. A scenario planner built on bad priors is not actionable. It is a random number generator with a nice interface.

Why Priors at Scale Change the Game

This is where 15 years of operating as an MMM platform creates a structural advantage. Keen’s Marketing Elasticity Engine is built on $45 billion in reconciled media activations across hundreds of brands. Those are not generic industry benchmarks. They are purpose-built, living priors derived from actual brand performance data, continuously updated as new activations flow through the platform.

What that means in practice: when a brand comes to Keen and has never spent a dollar on TikTok, or is launching a new product with zero historical data, or wants to understand how retail media interacts with linear TV, Keen can provide a calibrated starting point on day one. No experiments required to get started. No six-month waiting period. The brand’s own data then updates those priors over time, creating a model that gets more precise with every planning cycle.
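
As a rough picture of that updating loop, consider a toy Normal-Normal update. The numbers are illustrative only, and this is not the Marketing Elasticity Engine: the brand starts from a pooled cross-brand prior on a channel's ROI, and each planning cycle's observed results tighten the estimate.

```python
import math
import random

random.seed(1)

# Pooled cross-brand prior for a channel this brand has never used.
# The numbers are illustrative, not Keen benchmarks.
prior_mean, prior_sd = 1.5, 0.6
obs_sd = 0.8        # noise on each planning cycle's observed ROI read
true_roi = 1.1      # the brand's actual ROI, unknown to the model

mean, var = prior_mean, prior_sd ** 2
for cycle in range(1, 7):
    observed = random.gauss(true_roi, obs_sd)
    # Conjugate Normal update: a precision-weighted blend of what the
    # model believed before the cycle and what the cycle's data showed.
    post_var = 1.0 / (1.0 / var + 1.0 / obs_sd ** 2)
    mean = post_var * (mean / var + observed / obs_sd ** 2)
    var = post_var
    print(f"cycle {cycle}: ROI estimate {mean:.2f} +/- {math.sqrt(var):.2f}")
```

The specifics differ in a real system, but the shape is the same: the uncertainty shrinks every cycle, which is why a calibrated starting point matters so much more than a flat, uninformative one.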

We are forecasting brand outcomes within a four percent margin of error on a rolling 52-week basis, without the need for complex experimentation. That is the difference between a model that is theoretically available and one that is practically useful. One gives you a framework. The other gives you a forecast you can defend in a finance meeting.

Planning Without Accountability Is Still Just a Sandbox

We agree with the move toward actionability. Wholeheartedly. But scenario planning without a managed service layer, without a purpose-built interface that non-technical marketers can operate in weekly decision cycles, and without a closed loop that reconciles plan versus actual, is still just a sandbox. It is a place to explore, not a system that runs a business.

Keen’s platform produces weekly buying plans, forecasts probability to goal, and reconciles what was planned against what actually happened. When there is variance, Keen pinpoints whether it came from execution differences, environmental shifts, or assumption errors. That accountability loop is what turns measurement into an operating system.

Our 2026 Benchmarks Report, covering $42 billion in media spend across 400 brands, found that overall flighted marginal ROI sits below $1, but when spend is optimized through proper planning and timing, that number jumps to $5.84. Linear TV showed the most dramatic improvement, moving from $1.26 flighted to $6.64 optimized. Those gains do not come from better models. They come from better decisions enabled by a system that connects measurement to action.

The Model Was Never the Bottleneck. The Decision Was.

Nearly half of U.S. marketers plan to increase MMM investment over the next year, according to EMARKETER and TransUnion. The demand is real. But 75% of buy-side leaders still say their measurement approaches underperform, according to the IAB. That is not a model availability problem. It is a data quality problem, a prior quality problem, and a decision science problem.

Google making Meridian more accessible is genuinely positive for the ecosystem. It raises awareness. It signals that forward-looking planning matters more than backward-looking scorecards. We have been saying that for 15 years, and it is encouraging to see the largest player in digital advertising agree.

What it does not do is solve the hard problems: getting data into shape, providing priors that actually reflect a brand’s reality rather than biased assumptions, building a user experience that works in weekly operating rhythms, and creating the closed loop that turns measurement into better outcomes.

Anyone can build a model. The question is whether that model can help you make the next decision. That is the work.
