Now playing: ‘The Email Test That Goes Wrong’

From little things, big mistakes grow: How to ensure your email testing avoids the small errors that can make the results useless.

Email and testing go together like peanut butter and chocolate. What is your email audience anyway but a microcosm of your customer base? Email tests give insights into your customers, which messages resonate with them, what motivates them to buy and even what turns them off. 

The catch (and there’s always a catch!) is that you must set up your tests correctly. When you do, you can be 95% certain, or more, that a winning test version did not win by chance or because of factors not accounted for in your test hypothesis. In other words, the result is statistically significant. 
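To make “95% certain” concrete: one common way to check whether the gap between two versions is real is a two-proportion z-test on the raw send and response counts. The sketch below is a minimal, self-contained Python example using hypothetical numbers, not figures from any test described in this article; most email and testing platforms run an equivalent calculation for you.

```python
from math import erf, sqrt

def two_proportion_z_test(successes_a, sends_a, successes_b, sends_b):
    """Two-sided z-test for a difference between two rates (opens, clicks, etc.)."""
    rate_a = successes_a / sends_a
    rate_b = successes_b / sends_b
    # Pooled rate under the null hypothesis that both versions perform the same
    pooled = (successes_a + successes_b) / (sends_a + sends_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return rate_a, rate_b, z, p_value

# Hypothetical numbers: 50,000 sends per version, 1,200 clicks on the control,
# 1,310 clicks on the variant.
rate_a, rate_b, z, p = two_proportion_z_test(1200, 50_000, 1310, 50_000)
print(f"Control {rate_a:.2%} vs. variant {rate_b:.2%} (z = {z:.2f}, p = {p:.4f})")
print("Significant at the 95% level" if p < 0.05 else "Not significant at the 95% level")
```

With these made-up numbers, a gap of roughly 0.2 percentage points comes out significant at the 95% level (p ≈ 0.03); with a much smaller list, the same gap usually would not.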

You can even apply those insights across channels or throughout your company, so everybody understands your customers better. After all, these are your actual opted-in customers, not just randos who visit your website and bounce away again.

But if you don’t follow good testing protocols, your test results can lead you far off the track, endangering not just a single email campaign but your entire email program.

My MarTech column, “7 common problems that derail A/B/n email testing success,” lists seven ways testing programs can go off the rails and how to keep that from happening. But I now realize that I didn’t account for another major problem, so I’ll amend it here:

8. Not following the hypothesis to the letter 

Testing without a hypothesis is my No. 1 error that could affect email testing success. This new problem is a different take on that error. You could follow my testing instructions to the letter to build a hypothesis, but if you don’t set up the tests to align with it, you can’t be sure the results will be authentic.

Why does it matter? The hypothesis sets the parameters for your test. Among other functions, it predicts what is likely to happen when you introduce a change in the variant message, and it specifies what that change (or those changes) will be. (You’ll find out in a minute why I emphasized that last phrase.)

If you don’t set up every part of your test according to the terms in your hypothesis, you can end up optimizing your campaigns on bogus results. 

Not following the hypothesis, take 1

Testing is an essential aspect of the work I do for my clients. Nothing gets changed, no campaign goes out until it is tested and, if possible, retested to be sure we can replicate the original results. Again, we’re trying to eliminate random elements to gain more reliable results.

Despite taking all the usual precautions, I ended up discarding a recent test because I discovered belatedly that a member of my client’s marketing team made a subtle change in the control email (the original email the company sent to customers). 

We hypothesized that combining a call-to-action button in a contrasting color with a second CTA button at the bottom of the promotional content would generate more clicks than the current combination — a button in the prevailing color and a text link, also in the prevailing color, at the bottom of the message. Our goal was to increase the number of people clicking the button to view a product.

The two buttons were to be identical in every other aspect, including size, button copy, and placement in the email message. The only difference would have been the color.

 

The results showed no significant difference between the two versions. We considered the hypothesis unproven and recommended not changing designs.

I was gobsmacked by the results, so I compared both emails and discovered that someone had failed to add a chevron (a caret pointing toward the right) to the variant. This subtle but visible directional cue can nudge the reader to click, and it appeared only on the control.

It might seem like a small oversight, but it introduced a random factor that could have prompted more people to click the control. 

Not following the hypothesis, take 2

Another test involved a message with multiple elements. This is part of my Holistic Testing Methodology, in which each element of a message — subject line, image and alt text, CTA, body copy — aligns with a message theme. This control message is tested as a unit against a variant whose elements align with a contrasting theme to determine which message is more effective. However, for this test, the control and variant messages used the same subject line.

We used this hypothesis: “We believe the email body copy that incorporates at least one element specifically tailored to appeal to each of the four personality types—spontaneous, methodical, humanistic, and competitive—will generate more web visits compared to the control version of the email, which lacks this personalized approach. 

“This increase in web visits is anticipated because the personalized content is likely to resonate more effectively with a broader range of recipients, engaging their specific interests and motivations, thereby enhancing the likelihood of them visiting the website and converting.”

Once again, however, in reviewing the control (general copy) and variant (copy targeted to shopper personalities), I discovered several errors that introduced randomness into the comparisons and invalidated the test. Here are a few:

1. The header was supposed to be personalized, with contextualization added, as in the control. However, the variant offered only personalization, with no context.

2. The variant was missing a key line of content intended to increase the product’s appeal.

3. The alt text describing the email’s hero image was different in Version B (the variant) and did not support the hypothesis. 

These differences figured in the campaign’s results. We can see that because the variant’s open rate was 3% lower than the control’s even though the subject lines were the same. All three factors were visible in the top area of the email, the part that often shows as preview text in the inbox, so they were key to motivating the recipient either to open or not.

3 takeaways for more accurate testing

1. Always compare apples to apples: As I noted in each example, all of these errors introduced random factors and resulted from not following the test hypothesis. This violates one of the golden rules of reliable testing in any endeavor, whether it’s email marketing, medical research or rocket science: Each version of the message must be identical except for the elements being tested.

Where do you find those elements specified? In the hypothesis.

2. Every element matters: Alt text often gets overlooked, but it can influence the decision to click or not. For this client, alt text could make or break a recipient’s interest in clicking to view the product being promoted. That’s because this company sends cold emails to prospects (in compliance with CAN-SPAM), many of whom block images in emails from senders not in their contact lists. 

Hence, they would not see a highly persuasive element — the personalized mock-up of a product with their names on it. The hypothesis did not specify differences in alt text between the two versions, and the variant version did not support the hypothesis, as it had omitted or changed elements that were in the control.

3. Check everything before you approve the test: Although I designed the test, the team wanted to manage the process, including email creation and deployment. I did not see the final versions until after the test was over. 

The team likely did not understand how the differences between the control and variant versions could invalidate their tests. (They do now!) Always do a strict QA check with images on and images off to ensure that all elements support the hypothesis. This was a lesson for me, too, to request proofs of all emails before they go out, even if the team wants to manage the process. 

Good tests are worth the effort

These look like monumental failures, but they are easy to avoid. The payoff is that the more you test, the easier it gets: you’ll make fewer rookie mistakes like these, and you’ll end up with data that could be your key to winning your executives over to email. 

Your mantra should be, “Follow the hypothesis!” Write it on a sticky note and post it on your computer. Add it to your computer screensaver or your phone’s lock screen. Put it anywhere to remind you. Just remember: Keep on testing!

 


About the author

Kath Pay

Contributor

Kath Pay is CEO at Holistic Email Marketing and the author of the award-winning Amazon #1 best-seller “Holistic Email Marketing: A practical philosophy to revolutionise your business and delight your customers.”
