An engineer by training, I’ve always been attracted to problem spaces that offer a feedback loop. Got something right? Observe a metric go up. Want to know which approach your customers like better? Run an A/B test, look at the numbers, clearly see the best path forward. You can make a lot of product and business decisions this way.
This approach served me well – until that memorable day: the day I became the one responsible for acquiring new customers. A hundred years of marketing wisdom tells us that multiple exposures are required to get someone to buy from a brand for the first time. And yet, there’s so little science in determining the optimal media mix to actually drive sales.
A beginner trying to grasp this strange world, I started asking questions:
- Are our billboards working to drive sales? Answers from billboard vendors include “gazillion people pass by this billboard every day!” and “we can even track their eye movements!” Wait, that’s not my question. My question is “of yesterday’s sales, how many were influenced by a billboard?” Don’t know? Oh-kay…
- Are our video ads working? Consumers are watching so much TV, cord-cutting is barely happening, and conventional wisdom is that TV is perfect for driving sales. But what concrete evidence do we (a huge retailer!) have to prove that a specific sale was heavily influenced by TV? None.
- Are our display ads working? Again, conventional wisdom suggests that display is a view-based channel – that is, consumers don’t click on ads, they simply glance at them and are subconsciously influenced by them. Oh wait, isn’t that a convenient explanation for display ads not actually driving any measurable results?..
- Are our influencer marketing efforts working? Many retailers give each influencer a unique code, thus attributing some sales directly. But are those directly measurable sales giving enough credit? This approach doesn’t account for the halo effect – the actual awareness the influencers are generating! It must be 5x! 10x! Who can actually tell?..
How does a pragmatic growth leader – the one with the goal to drive usage and orders – deal with such a plethora of FUD? Every single argument above is a perfect way for someone peddling ad inventory to trick you into just blindly buying more of it.
Enter the world of marketing attribution – or, as I like to call it, snake oil sales. Attribution vendors say: we can track a consumer across offline and online scenarios!.. we can probabilistically determine how influenced a consumer was by each of these channels!.. and then, oh magic, we can tell you: which channels you should double down on!..
The siren song is so tempting, except that:
- We don’t know how many touches it took for a consumer to convert;
- We don’t know when the first touch was;
- We often don’t know where the consumer actually lives and works, whether they have a TV, or whether one of their friends bought your products;
- We still can’t reliably track consumers across devices;
- Google and Facebook remain, for all intents and purposes, walled gardens.
If this isn’t enough for you to say “no” to the temptation, and you think that each of these unknowns can be thoughtfully approximated with a probability distribution… Consider that the errors multiply as more and more assumptions are stacked on top of each other. A chain of individually reasonable assumptions becomes a complete guessing game – even if you’re a huge business with millions of customers.
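To make that compounding concrete, here’s a toy illustration (my own numbers, purely hypothetical): if each modeling assumption in an attribution chain is off by up to 20%, the worst-case error of the combined estimate grows multiplicatively, not additively.

```python
# Toy illustration of error compounding across stacked assumptions.
# Assumes each assumption can be off by up to 20% (a made-up figure).
per_step_error = 0.20

for n in range(1, 6):
    # Worst case: every assumption errs in the same direction,
    # so the biases multiply: (1 + e)^n - 1.
    worst_case = (1 + per_step_error) ** n - 1
    print(f"{n} assumption(s): estimate can be up to {worst_case:.0%} off")
```

Five stacked “reasonable” assumptions can leave the final number roughly 150% off in the worst case – which is the guessing game described above.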
What’s a pragmatist to do, then? Operate on gut exclusively? Take media consumption data from a geographic area and extrapolate that your potential customers are just like everyone else, so you should use all those “advised” channels? Or worse, go on institutional wisdom – “TV is great,” or “YouTube is awesome,” or “Everyone is winning with Instagram”?
Don’t give up. There’s a way to bring some data into this seemingly intractable problem. When you can’t do real A/B testing, quasi-experimentation to the rescue!
Imagine you have two very similar stores – one in city A, one in city B. Your task is to find out whether TV ads work. Inundate city A with TV ads. Predict what store A’s sales trajectory would have been (using store B as a “control”); observe the change after TV ads are introduced; stop the TV ads; with the drop, observe any latent effects that hopefully dissipate, bringing store A back to a trajectory similar to store B’s.
Do this several times – with multiple pairs of stores, at different times of the year. Each time you’re doing this, you’re reducing the error in your calculations. Use more than one “control” store. At the end, you’ll be able to tell if TV ads (the ones you’re testing – that specific creative, timing, channel set – not all TV ads!) are a useful tactic for your business.
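The core calculation behind one such test can be sketched in a few lines. This is a minimal, hypothetical version (my own toy numbers, not Grubhub’s actual methodology): use the pre-campaign relationship between the two stores to predict store A’s counterfactual sales during the campaign, then compare against what actually happened.

```python
# Quasi-experiment sketch: estimate TV-ad lift in test city A
# using city B as a control. All numbers below are made up.

pre_a  = [100, 102,  98, 101]   # weekly sales, store A, before the campaign
pre_b  = [ 90,  93,  88,  92]   # weekly sales, store B (control), same weeks
camp_a = [118, 122, 120]        # store A during the TV campaign
camp_b = [ 91,  90,  94]        # store B during the same campaign weeks

# How store A normally tracks store B, from the pre-period.
ratio = sum(pre_a) / sum(pre_b)

# Counterfactual: what store A would likely have sold without TV ads.
counterfactual_a = [ratio * s for s in camp_b]

lift = sum(camp_a) - sum(counterfactual_a)
lift_pct = lift / sum(counterfactual_a)
print(f"Estimated incremental sales: {lift:.0f} ({lift_pct:.0%} lift)")
```

Repeating this across multiple store pairs and seasons, and averaging the lift estimates, is what shrinks the error bars; a single pair tells you very little on its own.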
“Easier said than done, Alex” – you’d note, and you’d be right. There’s a lot of opportunity for error here. “Plastering” the city with TV ads is expensive. External factors can interfere – what if your competitor does a big ad push at the same time?.. There can be interaction effects – what if you simultaneously did a big sale in both stores, and TV ads were only effective at the time of such a promotion?..
All of these concerns stand, of course. And yet this approach – we call it “sister markets” at Grubhub – is both statistically sound and quite rigorous. Give it a shot… and let me know how it works for you.