Google’s recent introduction of AMP and Facebook’s Instant Articles (both of which claim to deliver news articles up to 10x faster than the ordinary mobile web) are a wake-up call to newspapers and other publishers around the world.
Both of these initiatives have been introduced in response to the increasingly sluggish loading speeds of mobile news sites. Put simply, they work by stripping out excess code, trackers, tags and beacons (and adding a bit of caching magic) to foreground text and graphic news content without any of the other bloat found on publishers’ own sites.
In each case, the stripped-back page is served directly by Google or by Facebook, posing a worst-case scenario for news publishers: if they don’t address the issue of increasingly glacial site performance, it’s conceivable that they could forfeit their right to a platform entirely.
Why are news sites getting slower?
As traditional income sources continue to fall through the floor, newspapers have had to become increasingly creative about how they generate revenue. One of the ways they are doing this is through sophisticated advertising technology: specifically, by using tags, trackers and beacons to harvest behavioural data from around the web and using it to display more targeted, more valuable advertising placements.
A wide array of ad-tech startups have sprung up to meet this demand. The downside is that these innumerable platforms and networks, each with proprietary tags and trackers firing on every pageview, have a profound effect on page loading speeds.
Things aren’t getting any better, either.
This survey from July 2015 found that, across the homepages of 20 major US and European publishers, some 500 different external snippets of JavaScript were loaded. Nine months later, that figure has risen to almost 700.
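The survey’s exact methodology isn’t detailed here, but as a rough illustration (the selector logic below is my own assumption, not the survey’s), a snippet like this, pasted into a browser console on a publisher homepage, counts the externally hosted scripts the page pulls in:

```javascript
// Rough sketch only: count <script src="..."> tags served from third-party hosts.
// This is an illustrative approximation, not the survey's actual methodology.
const externalScripts = Array.from(document.querySelectorAll('script[src]'))
  .map((el) => new URL(el.src, location.href))
  .filter((url) => url.hostname !== location.hostname);

const thirdPartyHosts = new Set(externalScripts.map((url) => url.hostname));

console.log(`${externalScripts.length} external scripts from ${thirdPartyHosts.size} third-party hosts`);
```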
Do these hundreds of lines of externally loaded JavaScript code impact how fast a page loads? By how much? And what impact does this have on user engagement and retention?
This is what my colleagues and I set out to understand.
Quantifying the impact of page loading speed
Like many publishers around the world, The Telegraph — a 161-year-old broadsheet British newspaper — is focusing on providing excellent digital products and experiences.
In early 2014 the business established a brand new Product and UX team and, soon after, engaged the services of Easter Island Heads to help set up an in-house A/B testing program with Optimizely.
Site performance was an issue that cropped up continually as we began to define a testing backlog in meetings with stakeholders from across the business.
As they told it, the advertising team continually asked for new tracking scripts to be added to the site to help raise CPMs in the face of ambitious and unrelenting revenue targets. In response, the engineering team would talk about ‘performance budgeting’ and other measures to try to curb the slowing of pages, but found that in reality there wasn’t much they could do.
The main reason for this was metrics, or the lack thereof. Beyond a general feeling that slowing the site down wasn’t such a fantastic idea, the engineering team had no tangible metrics to back up the argument.
The ad team, by contrast, could show that the new trackers were generating revenue; a fairly tough metric to argue with, despite the fact that almost everyone in the business felt that slowing down the site could have a profound long-term impact on user engagement and retention.
With the blessing of The Telegraph’s Head of Product, Alex Watson, developer Stephen Giles and I set out to design an A/B test that simulated the loading of external tags. We would artificially slow the site down, measure the impact on overall user engagement and retention, and model the relationship between site speed and overall revenue.
Making things worse to make things better
At this stage, The Telegraph was adding somewhere between one and five new tracking tags to the site every month.
Stephen devised a range of custom Ad Block Pro scripts to selectively remove some of these trackers to help us understand the impact that they were having on page load performance.
We then painstakingly tested page load timings again and again, and found that each tag added somewhere between 1 and 5 seconds of latency on average.
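The article doesn’t record exactly how these timings were captured; a minimal sketch using the browser’s standard Navigation Timing API (my assumption, not necessarily the tooling the team used) looks like this:

```javascript
// Minimal sketch: log basic page load timings via the Navigation Timing API.
// Comparing runs with a given tag blocked vs. allowed gives a rough per-tag latency.
window.addEventListener('load', () => {
  setTimeout(() => { // wait a tick so loadEventEnd is populated
    const [nav] = performance.getEntriesByType('navigation');
    console.log(`DOM content loaded: ${Math.round(nav.domContentLoadedEventEnd)} ms`);
    console.log(`Full page load:     ${Math.round(nav.loadEventEnd)} ms`);
  }, 0);
});
```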
While it would be difficult to remove any existing tags from the site for the purposes of the test, we reasoned that we could look into the future to understand what effect the addition of new trackers would have on user engagement, as measured by total pageviews per variant.
How we slowed down the site (on purpose)
Latency was generated for each variant in Optimizely using a JavaScript method that delayed the ‘Document Ready’ event and made a call to an Amazon EC2 instance, which served additional code implementing the same delay.
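The Telegraph’s actual implementation isn’t published, but the general idea can be sketched as follows, assuming jQuery’s $.holdReady() to postpone the ready event and a hypothetical endpoint standing in for the EC2 instance:

```javascript
// Illustrative sketch only, not the Telegraph's production code.
// Assumes jQuery is present; 'delay.example.com' is a hypothetical stand-in
// for the EC2 instance serving a script that applies the same delay again.
var DELAY_MS = 4000; // tuned per variant (~4s, ~8s, ~16s, ~20s)

$.holdReady(true); // postpone jQuery's 'document ready' handlers

setTimeout(function () {
  $.getScript('https://delay.example.com/variant-a.js') // remote code adds its own delay
    .always(function () {
      $.holdReady(false); // release the page once the remote script has run (or failed)
    });
}, DELAY_MS);
```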
By tweaking and testing these variables, we were able to come up with four consistently proportional delays across four different variants:
- Variant A: ~4 seconds
- Variant B: ~8 seconds
- Variant C: ~16 seconds
- Variant D: ~20 seconds
On slower connections the delay was greater, and on faster connections it was smaller, but we found that the delays remained roughly proportional across variants.
What we discovered
We ran the test on just a tiny fraction of traffic, yet still tested a few hundred thousand visitors over a two-week period.
By integrating Optimizely with Adobe Analytics we were able to measure a wide range of engagement metrics (Number of Repeat Visits, Pages Per Session, etc.) and slice into the results by different user segments (subscribers vs. non-subscribers, for instance) to properly understand the wider picture.
A simple Optimizely goal measuring total pageviews per variant, however, gives a basic overview of what we found:
- Variant A (~4 seconds delay): -11.02% pageviews
- Variant B (~8 seconds delay): -17.52% pageviews
- Variant C (~16 seconds delay): -20.53% pageviews
- Variant D (~20 seconds delay): -44.19% pageviews
Predictably, the more we slowed the site down the less frequently users returned to the site and the fewer pages they viewed.
We were surprised to see that Telegraph readers tended to be fairly loyal and resilient in the face of significant page loading delays. This undoubtedly reflects the nature of the UK press, but users on other sites are unlikely to be so patient: a dyed-in-the-wool Telegraph reader won’t suddenly start reading The Guardian, even if they have to wait 16 seconds longer for a page to load.
You won’t believe what happened next! (ok, you might)
Using a metric developed by The Telegraph’s internal strategy team representing the monetary value of a pageview (based on the different revenue streams from advertising, affiliate partnerships, sponsored content and so on), we were able to model the overall revenue impact of each variant.
By doing so, we could paint an accurate picture of the cost to user engagement of any new on-site change that degrades page performance.
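The per-pageview value itself isn’t public, but the modelling arithmetic is simple enough to sketch with hypothetical figures (both constants below are invented for illustration):

```javascript
// Illustrative only: the per-pageview value and traffic volume are hypothetical.
const VALUE_PER_PAGEVIEW = 0.01;      // hypothetical: £0.01 of blended revenue per pageview
const MONTHLY_PAGEVIEWS = 250000000;  // hypothetical baseline traffic volume

// Pageview changes taken from the test results above.
const variants = [
  { name: 'A (~4s delay)',  pageviewChange: -0.1102 },
  { name: 'B (~8s delay)',  pageviewChange: -0.1752 },
  { name: 'C (~16s delay)', pageviewChange: -0.2053 },
  { name: 'D (~20s delay)', pageviewChange: -0.4419 },
];

for (const v of variants) {
  const lostRevenue = MONTHLY_PAGEVIEWS * Math.abs(v.pageviewChange) * VALUE_PER_PAGEVIEW;
  console.log(`Variant ${v.name}: ~£${Math.round(lostRevenue).toLocaleString()} lost per month`);
}
```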
While there’s still a long way to go, the results of this test provided key data to help frame the debate around site performance and, for now at least, have armed the UK’s oldest broadsheet newspaper with the data it needs to best serve readers and advertisers alike.