Measuring Mobile Ad Performance: Why We Need to Borrow a Metric From TV

August 25, 2015

By all accounts, mobile advertising is the wave of the future. Mobile ad spend is projected to increase 430 percent between 2013 and 2016, when it’s expected to surpass $100 billion worldwide, according to eMarketer. By 2019, that spend is expected to surge to an estimated $200 billion. This is record-breaking growth, and it’s no wonder we’re seeing this level of investment now that mobile usage has surpassed desktop.


It’s exciting to be in mobile today. You couldn’t ask for more users, more eyeballs.


Yet there are still a few unresolved issues. Those spending big dollars on mobile — advertisers, publishers driving installs, and brands and agencies — have a few key indicators they want to meet, including a reasonable level of accuracy, optimization, reach, and of course, performance. The mission is to deliver the right content at the right moment, closely matched to the intent of the person viewing the ad.




But whether this happens or not is an absolute mystery on mobile today. We may know someone installed an app, but they may never open that app again. We can measure clicks, but this has led to fraud and has also prevented brands from feeling confident that a “click” is the result they want from a campaign.


To loosen the bottleneck, we need to look at the yardstick we are using. We need to measure accuracy. We need to measure optimization. We need to measure reach. And we need to measure performance. The common denominator across all of these KPIs is how we measure both the audience and the campaign’s performance.


There are three predominant methods for measuring audiences on mobile today, as covered in Personagraph’s Q2 2015 Mobile Data Report, and each has its limits.


CPI: Cost per install. This measurement is unique to mobile and was created for mobile publishers who doubled as advertisers. These publishers needed installs for their own apps, so user acquisition dictated a new form of campaign measurement. It’s also the easiest (and first) way of measuring effectiveness in mobile video, although it limits the pool of advertisers to those who want to drive app installs.


CPI is also higher risk for the publisher, who only gets paid when an install occurs. As a result, the user experience often suffers: the user sees the same ad over and over again, with the ad exchange favoring whatever means necessary to procure the install.


Weaknesses: Not every advertiser wants to drive an app install. Plus, the user experience suffers badly from the repetition of ads.
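
To make the publisher-side risk concrete, here is a minimal sketch in Python; the $2.00 payout and the install rates are hypothetical, not figures from the report. Under CPI, the publisher’s effective revenue per thousand impressions rises and falls entirely with how often impressions convert into installs.

```python
# Illustrative sketch (hypothetical numbers): under CPI, publisher revenue
# depends entirely on how often impressions convert into installs.

def effective_ecpm_under_cpi(cpi_payout: float, install_rate: float) -> float:
    """Revenue per 1,000 impressions when the publisher is only paid per install."""
    installs_per_thousand = 1000 * install_rate
    return installs_per_thousand * cpi_payout

# A $2.00 CPI campaign at a 0.5% install rate...
print(effective_ecpm_under_cpi(2.00, 0.005))   # -> 10.0 (a $10.00 eCPM)
# ...but if only 0.1% of impressions convert, revenue drops five-fold.
print(effective_ecpm_under_cpi(2.00, 0.001))   # -> 2.0 (a $2.00 eCPM)
```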




CPC: Cost per click came from desktop and originated in search, where the main function of the ad was to lead the user to a website or product page to enter the conversion funnel and close the purchase. On desktop, CPC works best when tied to targeted keywords, relying on search terms to narrow relevance and qualify who is clicking on the ad. Display ads on desktop naturally became display ads on mobile, although the smaller screen and the user’s context no longer match the desktop behaviors display was originally designed for.


Weaknesses: Clicks have never been a good measure of media outside of search. Mobile amplifies this problem, with many erroneous clicks being attributed to media performance. We’ve seen a rise in fraud from bots, especially on open real-time bidding markets and other programmatic exchanges. Plus, CPC does not address video, which is where the mobile market is headed: video currently accounts for 12.8 percent of impressions but 55 percent of revenue.
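
As a rough illustration of how erroneous clicks distort the metric (all numbers here are hypothetical), the cost per genuine click climbs quickly once bot and accidental clicks are counted against the campaign:

```python
# Hypothetical illustration: fraudulent or accidental clicks inflate measured
# performance and raise the true cost per genuine click.

def cost_per_genuine_click(cpc: float, clicks: int, bogus_fraction: float) -> float:
    """What the advertiser really pays per legitimate click."""
    genuine_clicks = clicks * (1 - bogus_fraction)
    return (cpc * clicks) / genuine_clicks

# At a $0.50 CPC with 40% of clicks coming from bots or fat-finger taps,
# the advertiser is effectively paying about $0.83 per genuine click.
print(round(cost_per_genuine_click(0.50, 10_000, 0.40), 2))   # -> 0.83
```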


CPCV: Cost per completed view. This measurement is better suited to video, especially when used with hybrid mediation algorithms, because it ties payment to a completed view of the ad. In theory, the right content, the right moment, and the intent of the person viewing the ad have all been achieved, because the video ad was watched to completion.


Weaknesses: Factors such as whether the ad is skippable or non-skippable, and rewarded or non-rewarded, play into why it was completed. CPCV is also not (yet) equipped for cross-device measurement. There have been variations on this form, such as CPMV (cost per 1,000 views); however, CPMV does not take into account whether the video ad was completed.
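
The distinction matters in practice. A minimal sketch, using hypothetical rates, shows how CPCV charges only for completions while CPMV charges for views whether or not anyone finished the ad:

```python
from typing import Optional

def campaign_cost(views: int, completion_rate: float,
                  cpcv: Optional[float] = None,
                  cpmv: Optional[float] = None) -> float:
    """Cost of a video campaign under either pricing model (illustrative only)."""
    if cpcv is not None:                  # pay per completed view
        return views * completion_rate * cpcv
    if cpmv is not None:                  # pay per 1,000 views, completed or not
        return (views / 1000) * cpmv
    raise ValueError("supply either cpcv or cpmv")

# 100,000 video views at a 60% completion rate:
print(campaign_cost(100_000, 0.60, cpcv=0.02))   # -> 1200.0 (pays for 60,000 completions)
print(campaign_cost(100_000, 0.60, cpmv=15.0))   # -> 1500.0 (blind to completion)
```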


Given the shortcomings of these common measurement methods, it’s no wonder we are seeing a push toward new forms of measurement. Interestingly enough, the most recent form of measurement to emerge for mobile isn’t new at all. It comes from an environment where advertisers have been consistently measuring audiences for quite a while now: television.


GRP: Gross Rating Point is calculated as the percentage of the target market reached multiplied by exposure frequency. There are innate benefits to using the GRP: First, having originated in television, its strength is measuring elusive eyeballs on video ads; comparing mobile video to television, you can see why CPI doesn’t fit (there’s nothing to install). Second, advertisers are comfortable with this measurement system. It makes sense to invite the majority share of ad spend (which, at 42 percent in the United States, is television) into the mobile conversation by speaking a familiar language about how ads are measured. Last, but definitely not least, the gross rating point is well suited to cross-device measurement because it takes exposure frequency into account. This last point may be the clincher for why the three metrics above will lose effectiveness over the next few years.
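
A quick worked example of the formula (the audience figures here are hypothetical): reaching 30 percent of a target market an average of four times each, across any combination of devices, yields 120 GRPs.

```python
# GRP = percent of the target market reached x average exposure frequency.
# A minimal sketch; the audience numbers below are illustrative.

def gross_rating_points(people_reached: int, target_market_size: int,
                        avg_frequency: float) -> float:
    """Gross rating points for a campaign against a defined target market."""
    reach_pct = 100 * people_reached / target_market_size
    return reach_pct * avg_frequency

# Reaching 30 million people out of a 100 million-person target market,
# an average of 4 times each across phones, tablets, and TV:
print(gross_rating_points(30_000_000, 100_000_000, 4))   # -> 120.0 GRPs
```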


Weaknesses: Because the GRP is a navigational metric, it measures how you approach your audience and how the budget is spent, rather than whether your ad was actually viewed and what action (if any) was taken.


The next chapter in measurement will be driven by people-based metrics and behaviors. Who is watching these ads is what advertisers need to know; completion rate – including CPCV – is not enough information to determine performance. Meanwhile, installs are singular in purpose, excluding most brands, and clicks are troublesome at best. GRP may or may not be the correct answer; however, it is a move in the right direction for mobile video ads. Today, most targeting and optimization is at the app level, not the people level, and this has resulted in inefficient media spend.


Click here to download our Q2 2015 Mobile Data Report.
