If the plan doesn’t work, why does measurement keep pretending it does?
Retail planning creates a sense of control that doesn’t really exist.
Targets and budget levels are set with good intent and significant knowledge, but the moment activity goes live the market starts moving. Demand shifts, competitors reposition, pricing changes and AI-led campaigns respond to signals that weren’t in play when the plan was signed off. The further you get from that sign-off point, the less the plan reflects what is actually happening.
The issue isn’t that planning is wrong; it’s that too much of what happens next is built on the assumption that the plan is still right.
What tends to happen next will be familiar to everyone working in retail and performance media. Performance drifts from expectation, and measurement steps in to explain what’s happening.
But measurement isn’t really being used to understand what’s happening in the market. It’s being used to defend the plan that’s been built.
That isn’t a criticism of the people doing the work, it’s a consequence of how measurement has been set up. The tools, frameworks and reporting structures most teams rely on were designed for a world where media behaved in more predictable cycles, where the inputs into media gave a sense of control, a feeling that something was being done to meet the plan. That world has gone, but the way performance is measured hasn’t caught up.
So teams do what they’ve been trained to do. They interrogate the data, look for patterns and try to explain performance in a way that gives the business confidence. The problem is that the system they’re working with allows multiple interpretations of the same reality.
Marketing sees efficiency and platform performance. Finance looks for contribution to revenue and profit. Trading is focused on sales, demand, stock and pricing. Each view is valid, but they don’t always reconcile. More often than not, each of those teams can find something in the data that supports their position, which is exactly the problem. Everyone can be partially right, but no one can be decisively confident about what to do next.
So the conversation stays stuck in explaining performance, because that’s what the current approach to measurement is designed to do.
You can see the limitations of that approach in the tools themselves.
- Attribution can show how value is assigned across channels within a journey, but it doesn’t tell you what would have happened if that investment hadn’t been there (there’s a short illustration of that gap after this list).
- Correlation between metrics can suggest a relationship, but it does not prove cause and effect.
- Post-campaign analysis can describe what happened, but it arrives too late to influence the outcome.
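To make that first limitation concrete, here is a deliberately simple sketch. The numbers and the holdout setup are hypothetical, invented purely for illustration rather than drawn from Upp.ai or any platform’s reporting. The point is the gap between what an attribution report credits to a channel and what a holdout comparison suggests was actually incremental.

```python
# Hypothetical numbers: a paid channel is paused in a matched set of holdout
# regions for two weeks, while comparable test regions keep running as planned.

attributed_revenue = 180_000  # what the platform's attribution report credits to the channel
test_revenue = 410_000        # revenue in regions where the channel stayed live
holdout_revenue = 260_000     # revenue in matched regions where it was paused
spend = 120_000               # media spend in the test regions over the period

# The counterfactual question: what would have happened without the investment?
incremental_revenue = test_revenue - holdout_revenue

print(f"Attributed revenue:  £{attributed_revenue:,}")   # £180,000
print(f"Incremental revenue: £{incremental_revenue:,}")  # £150,000
print(f"Attributed ROAS:     {attributed_revenue / spend:.2f}")   # 1.50
print(f"Incremental ROAS:    {incremental_revenue / spend:.2f}")  # 1.25

# Same activity, two different answers. Attribution assigns credit within the
# journeys it can see; only the holdout comparison speaks to the counterfactual.
```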
And in certain moments, this actively leads teams in the wrong direction. Take post-Christmas into January. Most people are paid early in December, which leaves a long stretch before the next payday. What the data shows, year after year, is that performance in weeks two and three of January drops off quite sharply from an efficiency point of view. On the surface, it looks like demand has weakened, but it hasn't.
People are still browsing, still showing intent, but they are not converting in the same way because they simply don’t have the money in their pockets. Instead, demand builds quietly, and retailers see a sharp uptick as payday arrives, often stronger than expected.
If you are only looking at performance in those middle weeks, the conclusion is usually that something isn’t working. Budgets are pulled back, targets are tightened and activity becomes more cautious. It’s a rational response to what the data is showing at that moment, and it’s exactly how teams have been trained to respond for years.
By the time demand converts in payday week, performance looks strong again, but a significant opportunity has already been missed. Why? Because AI-led campaigns like PMax or Advantage+ are responding to those same signals. If they are constrained during those lower-efficiency weeks, they never fully see the demand that is building underneath. When payday comes, performance improves, but it rarely reflects the full potential of what was there.
Measurement is describing what is happening in the moment, but not what is building underneath it. And decisions are being made on that incomplete view.
That’s where the questions start to change.
- Is the media genuinely creating demand, or simply capturing it?
- If we increase investment here, what happens to margin, not just revenue?
- Where are we already saturated, and where is there still room to grow?
- We’re in a promo period and we just need to get spend away, so why isn’t this working?
Those questions don’t sit neatly within a single report, and they can’t be answered by looking backwards alone. They require measurement to connect what was assumed at the planning stage, what decisions are being made while activity is live and what the business actually experiences as a result.
Without that connection, measurement naturally ends up reinforcing the plan rather than challenging it.
You see it in behaviour: when confidence is low, decisions slow down. Investment is held back until there is more certainty, even when the signals suggest there is an opportunity. Teams spend more time explaining performance than acting on it.
This doesn’t happen because teams don’t understand performance. It happens because the tools and frameworks they’ve been given were designed for a different way of operating.
What’s missing is not more data. It’s confidence in what the data is telling the business to do, and that requires a shift in how measurement is used and how the wider business is aligned behind it.
Less focus on reporting what has happened, and more focus on building evidence that can guide decisions while activity is still live. Evidence that can be understood in the same way across marketing, trading and finance. Evidence that answers a simple but difficult question: what is media actually contributing here, and what should we do next?
In practice, that means combining different approaches rather than relying on one.
Platform signals can help guide optimisation, but they need to be tested. Experiments can provide confidence in incremental impact, but they can’t answer everything, and they are a snapshot in time. Broader models can explain how demand behaves over longer periods, but they need grounding in what is happening day to day. No single method gives you the answer. But used together, they start to build something far more useful than a report. They build confidence.
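As a rough illustration of what using these methods together could look like, here is a minimal sketch. The figures, the sources and the weighting scheme are assumptions made up for this example, not a description of how Upp.ai calibrates anything; the point is only that the different reads get reconciled into a single number the business can act on, rather than each team quoting its own.

```python
# A minimal, hypothetical sketch of triangulating three reads on the same channel.
# The inputs and weights below are invented for illustration only.

platform_roas = 4.2    # what the ad platform reports (tends to over-credit)
experiment_roas = 2.1  # incremental ROAS from a recent geo holdout test
model_roas = 2.6       # longer-run estimate from a demand / marketing-mix model

# One simple (assumed) way to combine them: let the experiment anchor the
# always-on signals, weighted by how much confidence each source has earned.
weights = {"experiment": 0.5, "model": 0.3, "platform": 0.2}

calibrated_roas = (
    weights["experiment"] * experiment_roas
    + weights["model"] * model_roas
    + weights["platform"] * platform_roas
)

print(f"Platform view:   {platform_roas:.1f}")    # 4.2
print(f"Calibrated view: {calibrated_roas:.1f}")  # ~2.7

# The weighting scheme itself is not the point. The point is that no single
# number is treated as the truth, and live decisions are steered by a
# calibrated figure rather than the most flattering one.
```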
That is the direction we are moving in at Upp.ai.
Not as another reporting layer, and not as a different take on attribution, but as something closer to a measurement capability that helps the business make decisions with confidence while activity is still live.
The role is to connect what was assumed at the planning stage, the decisions being made as campaigns run and the outcomes the business actually sees. Without that connection, each part of the process operates in isolation, and so does media reporting. Plans get set, campaigns optimise against platform signals, and performance is reviewed afterwards, but there is no consistent way of understanding how those pieces relate to each other.
By bringing those elements together, the focus shifts. Measurement stops being something that explains performance and starts becoming something that helps the business act on it. It gives teams a shared view of what is happening and a clearer sense of what to do next, not just within marketing but across trading, finance and the wider organisation.
This is where we see this all heading. Less reliance on fragmented reporting and more emphasis on building a measurement layer that sits closer to the core of how retail businesses operate.
It stops being a way to explain why something happened and becomes a way to decide what to do next. Once that shift happens, the plan itself starts to change; it becomes something that is tested and adjusted continuously based on what the market is actually doing rather than something that needs to be defended when reality doesn’t match it.
