‘Half my advertising is wasted, I just don’t know which half.’


‘Half the money I spend on advertising is wasted;
the trouble is I don’t know which half.’

John Wanamaker’s famous quote from the late 1800s has since been repeated by a number of other people. And, without wanting to cause controversy, it may be something a lot of people are currently saying about their marketing spend.

Which half exactly?

Judging by a recent report from Adweek and another from iMedia Connection, it would appear that at least half of your display advertising is wasted. According to the reports, at least half of all click-throughs are ‘not real’, and impressions can be somewhat suspect too.

There are many reasons why they might not be real, but in another report featured on gigaom.com, Trademob claims that a staggering 40% of all mobile clicks are either accidental presses or fraud. So the bottom line might well be that 50% of your spend is a complete waste.

And if you check your numbers, you’ll probably agree: your analytics will typically only record somewhere between 30% and 60% of the click-throughs reported by an ad server.

A 50% difference?

In the analytics industry we all expect a little divergence between platforms. Numbers never match.

There are various counting methodologies across the two industries (ad serving and analytics), the primary difference being server-side vs. client-side measurement.

Analytics counts client-side, so you need to make allowances for analytics code placement, non-JS browsers, code blockers, slow-loading pages, mobiles, fat fingers and so on. But, in all honesty, I still struggle to believe that 50% of clicks experience these problems. Particularly when you have multiple counting technologies running on the page (say, Google Analytics and SiteCatalyst, each reporting page views, visits, entries, instances and clicks) and the two platforms agree (wow, you don’t hear that often). Surely they can’t all be wrong.

We polled a number of our clients and the result was pretty much the same: ad servers report X, analytics reports roughly half of X. And it didn’t really vary between clients, either.

Which number do you go with?

If you go with the reported clicks then, sure, your media metrics look a little better: click-through rates are average, CPC is reportable, it’s all justifiable to your boss, and you avoid the ugly elephant sitting in the corner of the room. But isn’t there something wrong with this strategy? Did the people behind those reported clicks really end up seeing your content?

On the flip side, if you go with your analytics data, which more accurately represents the browsers (a.k.a. humans) that actually see your content, then your media metrics look worse. You begin to wonder about the spend, the value returned and the reported numbers, and as soon as your boss sees those figures, well, now you’re in a bad place with a 5,000lb elephant stomping around the room.

In a CPC (cost-per-click) model, you’re probably paying for clicks that aren’t human. And we all know that display has some pretty low click-through rates anyway, as well as some pretty horrendous bounce rates. Add to this the 50% of clicks that probably weren’t really human, and what does that do to your numbers…

I’ll let you do the math and draw your own conclusions on that one.
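If you’d like a starting point for that math, here’s a rough sketch. Every figure below is invented for illustration; none of them come from the reports above, so substitute your own spend, click counts and bounce rate.

```python
# Hypothetical campaign figures -- illustrative only, substitute your own.
spend = 10_000.00          # total display spend, in your currency
reported_clicks = 20_000   # clicks reported by the ad server
human_share = 0.50         # assumed portion of clicks that are real humans
bounce_rate = 0.70         # assumed bounce rate for display traffic

reported_cpc = spend / reported_clicks              # looks fine: 0.50
human_clicks = reported_clicks * human_share        # 10,000 real clicks
effective_cpc = spend / human_clicks                # actually 1.00
engaged_visits = human_clicks * (1 - bounce_rate)   # ~3,000 visits that stayed
cost_per_engaged_visit = spend / engaged_visits     # ~3.33

print(f"Reported CPC: {reported_cpc:.2f}")
print(f"Effective CPC (human clicks only): {effective_cpc:.2f}")
print(f"Cost per engaged (non-bounced) visit: {cost_per_engaged_visit:.2f}")
```

With those assumptions, the cost of a visit from someone who actually engaged is more than six times the CPC your reports show.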

So what can you do?

In terms of the actual numbers reported, unfortunately there’s not really much you can do.

One way to gauge how big the problem is for you is to go a little old school and trawl through your web server log files to see how many requests contain the query parameter appended by the ad server. That gives you at least a small insight into clicks that were made but weren’t reported (perhaps because the analytics code didn’t fire, or the request came from a non-JavaScript browser), versus clicks that never made it to your site (perhaps the back button was pressed, the window was closed, or it was a bot request).
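As a sketch of that log trawl, the snippet below counts requests whose query string carries an ad-server parameter. Both the parameter name (`utm_source` here) and the log layout (Apache/Nginx combined log format) are assumptions; swap in whatever your own ad server appends and whatever format your server writes.

```python
# Sketch: count access-log requests carrying an ad-server click parameter,
# to compare against the clicks your ad server and analytics each report.
import re
from urllib.parse import urlparse, parse_qs

AD_PARAM = "utm_source"  # hypothetical: use the parameter your ad server appends

def count_ad_clicks(log_lines):
    """Count requests whose query string contains AD_PARAM."""
    # Combined-log-format request field looks like: "GET /path?query HTTP/1.1"
    request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')
    hits = 0
    for line in log_lines:
        m = request_re.search(line)
        if not m:
            continue
        query = urlparse(m.group(1)).query
        if AD_PARAM in parse_qs(query):
            hits += 1
    return hits

sample = [
    '1.2.3.4 - - [10/Oct/2013:13:55:36 +0000] "GET /landing?utm_source=displaynet HTTP/1.1" 200 512',
    '5.6.7.8 - - [10/Oct/2013:13:55:40 +0000] "GET /about HTTP/1.1" 200 1024',
]
print(count_ad_clicks(sample))  # -> 1
```

In practice you’d stream the real log file line by line and perhaps also filter out known bot user agents before counting.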

You can also start to leverage view-through metrics if you’ve hooked up analytics to your ad-serving technology – but I’d recommend doing this within your analytics platform so that the data connects at the visitor level. You can then compare conversion rates for visitors who saw an ad versus those who didn’t.
Is the conversion percentage better or worse? If it’s worse, what can you do to improve it? Can you alter the message to be more brand-focused vs. offer-focused, and does it make a difference? Once visitors get to your site, after seeing the ad elsewhere, can you reinforce the conversation? Now we’re into personalisation…
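That exposed-vs-unexposed comparison can be sketched in a few lines, assuming you can join ad exposure to analytics data at the visitor level. The visitor records below are invented purely for illustration.

```python
# Sketch: conversion rate for visitors who saw a display ad vs. those who
# didn't. Records are invented; real ones come from visitor-level analytics.
visitors = [
    {"id": "v1", "saw_ad": True,  "converted": True},
    {"id": "v2", "saw_ad": True,  "converted": False},
    {"id": "v3", "saw_ad": False, "converted": False},
    {"id": "v4", "saw_ad": False, "converted": True},
    {"id": "v5", "saw_ad": True,  "converted": True},
    {"id": "v6", "saw_ad": False, "converted": False},
]

def conversion_rate(records, saw_ad):
    """Share of visitors in the exposed (or unexposed) group who converted."""
    group = [r for r in records if r["saw_ad"] == saw_ad]
    if not group:
        return 0.0
    return sum(r["converted"] for r in group) / len(group)

exposed = conversion_rate(visitors, saw_ad=True)
unexposed = conversion_rate(visitors, saw_ad=False)
print(f"Saw ad: {exposed:.1%}, did not see ad: {unexposed:.1%}")
```

If the exposed group converts better, the ad is doing influence work that click reporting never shows you; if it converts worse, that’s your cue to revisit the messaging.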

You could look at impressions vs. view-through vs. assisted conversions, broken down by network or ad creative. This will help show which ads are performing well and which aren’t, and from there you can begin the discussion about messaging and placement improvements.

Attribution may help

Understanding how display ‘influences’ a conversion seems a better play than pure click reporting. Conversions typically happen across multiple touchpoints and multiple visits, so understanding just how much display influences them is more useful than fighting the ‘why are we 50% lower?’ question. And, going back to the quality discussion, it places the emphasis on optimisation rather than pure click-through metrics, which is a much better strategy in the long run.
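To make the idea of ‘influence’ concrete, here is a minimal sketch of one simple attribution model (linear multi-touch, where each conversion’s value is split evenly across every channel on the journey). This is just one possible model, not a recommendation, and the journeys below are invented.

```python
# Sketch of linear multi-touch attribution: split each conversion's value
# evenly across every channel touched on the journey to conversion.
from collections import defaultdict

def linear_attribution(journeys):
    """journeys: list of (channel_path, conversion_value) tuples."""
    credit = defaultdict(float)
    for path, value in journeys:
        share = value / len(path)  # equal credit per touchpoint
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Invented journeys: display often opens the journey, search/direct close it.
journeys = [
    (["display", "search", "direct"], 90.0),
    (["search", "direct"], 60.0),
    (["display", "direct"], 40.0),
]
print(linear_attribution(journeys))
# display gets 30 + 20 = 50; search 30 + 30 = 60; direct 30 + 30 + 20 = 80
```

Last-click reporting would have given display zero credit here, even though it touched two of the three converting journeys – which is exactly the blind spot attribution is meant to fix.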

It shouldn’t be about the click

So if you’re really looking to change digital behaviours, you should be looking to answer two questions:

  1. How can you improve the quality of traffic, both to and on your site, through offsite testing and onsite optimisation?
  2. How can you leverage attribution to better understand the conversion journey, and re-engage offsite with visitors during their multi-visit, multi-touch conversion journey?

Chasing the “click” probably isn’t the right strategy.

