It’s easy enough to use a survey to test out a website or campaign, but how do you know if the answers you get are truthful? Columnist Brian Massey helps you weed out the pretenders among your participants.
A/B testing delivers powerful data to a marketer, researcher or business executive. The data is powerful because it’s trustworthy behavioral data: the kind that can predict what makes more visitors convert.
But A/B testing can’t be used for every question you have. For any website or campaign, you’ve got a limited amount of traffic to run A/B tests on. They’re hard work to create, and if you’re not careful, they’ll give you the wrong answer.
You want to be very protective of your A/B testing time.
If you have lots of questions, and you can’t test them all, what do you do?
The answer is to find or generate some data that you can use to answer some of your questions or narrow down your design options. One way to do this is to ask some people to pretend they’re your customers and get their feedback.
Bring in the liars and the pretenders
Using surveys and focus groups to inform decisions is a time-honored tradition for advertisers and product developers. Traditionally, these were time-consuming and expensive to conduct. As a result, many organizations used them infrequently.
This is no longer the case. Today, we can present our design prototypes to a panel of individuals via the web and ask them questions about what they see. We have a variety of tests to choose from, each requiring that we ask the participant some questions.
The problem is that more often than not, these people are liars and pretenders. They don’t lie because they are evil; we’re just wired to lie. We don’t really know why we make the choices we do, and our brains are very good at making up reasons for our motivations.
They are pretenders because we ask them to be. They’re often not in our target audience, so we give them a scenario and hope they can put themselves into the shoes of our prospects.
Ask questions that will keep you from getting lied to
Here are just some of the user surveys you can run:
The Question Test: Ask the participant to look at a design and answer questions about it.
The Click Test: Set up a scenario for the visitor, and then ask her to click on the page. You can measure where she clicked and how long it took her to click.
The 5-second Test: You set up a scenario. Then each participant gets to see a design for 5 seconds (a long time in cognitive terms). Then they answer your questions about what they saw.
The Preference Test: You present participants with two or more designs. They select one based on your scenario. Then you get to ask them questions about the designs.
These surveys are typically timed, giving a quantitative result as your participants explore your designs.
Clearly, asking good questions is crucial to all of these. We must be sure to ask questions in a way that keeps us from getting lied to.
If possible, ask your prospects or customers
A/B testing data is valuable because the people we’re measuring are actual prospects for our offering. This isn’t true with participants in something like a preference test or 5-second test. They’re strangers that we’re asking to pretend to be our prospects and customers.
You can recruit from your website, asking a prospect if they’re interested in participating in a survey. But this sort of interruption may hurt your conversion rates and derail a buyer.
One technique we’ve used at our company is a “Thank you page” survey. This catches qualified participants after they’ve purchased.
Our favorite question is: “What almost kept you from buying today?”
We used an exit survey to refine the feature list for one of our clients. We did an A/B test based on what we learned and significantly increased sales of their high-end product.
Help them pretend
For many user surveys, you can set up the situation by composing a scenario. Give them simple and specific scenarios to help them put themselves in the shoes of one of your customers. Remember that they are civilians.
Instead of: “Your company is considering purchasing an HR management system.”
Say: “Your company has 500 employees and needs a system to manage hiring, paying and terminating them.”
You don’t have to sell them — just give them enough information for them to start pretending.
Ask behavioral questions, not their opinion
In our intense desire to get feedback, we often ask our participants to exceed their level of expertise. When we ask for their opinion on a design, we’re asking them to be a designer. When we ask them if our copy was helpful, we’re asking them to act as a copywriter. Few of them will be either, but they will think they are.
So they will provide lots of input, most of it useless, and then you get to read through it all to determine what’s legit.
Instead, ask them a question that they could answer if the design is doing its job. Ask them a question to which the answer is contained in your copy:
“Where would you click if you wanted to know if the product sold is HIPAA-compliant?”
“Where would you click to find the price?”
“How much is the product?”
Look for big differences
This data is not as reliable as A/B testing data. It violates several of the rules of good behavioral data.
- It is generally a small sample size — dozens of people versus hundreds or thousands.
- The participants know they are being tested, so they will act and feel differently. They aren’t “blind.”
- They’re usually not a prospect for your product or service.
- The participants are skewed toward people who like to be asked their opinion.
As a result, you’re looking for big, obvious results to give you guidance. If you’re asking 25 people if they prefer design A or design B, and 15 choose B, that means 60 percent of your panel preferred B, and 40 percent preferred A. That may seem like a big difference, but it’s not. There is a very good chance that if you tested another 25 participants, the results would be the opposite.
If you ran another 25 participants through the test and found that 35 of the 50 (70 percent) preferred B, you would have more confidence, both from the larger sample size and from the larger difference.
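To put rough numbers on that intuition, you can run an exact binomial test against a coin-flip null (“no real preference”). Here’s a minimal sketch in Python (my illustration, not something from the article) using the hypothetical 15-of-25 and 35-of-50 counts above:

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test against a 50/50 "no preference" null.

    With p = 0.5 the distribution is symmetric, so doubling the upper
    tail (valid when k > n/2) gives the two-sided p-value.
    """
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    return min(1.0, 2 * tail)

# 15 of 25 prefer design B (60 percent): easily explained by chance.
print(f"15/25 -> p = {binom_two_sided_p(15, 25):.2f}")   # ~0.42
# 35 of 50 prefer design B (70 percent): much harder to dismiss as chance.
print(f"35/50 -> p = {binom_two_sided_p(35, 50):.4f}")   # ~0.0066
```

A p-value near 0.42 means a 60/40 split among 25 people is the kind of result a fair coin produces routinely; the 70/30 split among 50 people is much harder to dismiss.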
User surveys are best for directional guidance. Use them to reduce your choices in preparation for a final A/B test.
Ask a qualifying question or two
You can decide which participants are more likely to give you informed answers by asking qualifying questions, such as:
“Have you ever worked for a company with an HR management system?”
“Do you own your own business?”
“Have you ever managed people?”
“Do you own a pet?”
Don’t make them too specific, or you’ll severely limit your participant list.
Blindfold your participants
When we run a preference test, we are asking the participants to make a value judgment based on their preference. This has bias written all over it. Remember, blind participants are good.
Instead of asking 50 participants their preference among two designs, show 25 one design and a different 25 the other design. Craft your questions to see which design helped them solve a problem or find an answer. You can still ask them their opinion of the page, but you’ll know which group gave the right answer and how long each took to find the answer.
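To judge whether one group’s success rate genuinely beat the other’s, a simple two-proportion z-test will do at these sample sizes. Below is a minimal sketch in Python; the 20-of-25 versus 13-of-25 counts are hypothetical, not the author’s data:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for a difference in task-success rates
    between two groups who each saw a different design."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 20 of 25 found the answer on design A, 13 of 25 on B.
z, p = two_proportion_z(20, 25, 13, 25)
print(f"z = {z:.2f}, p = {p:.3f}")   # roughly z = 2.09, p = 0.037
```

As with the preference numbers, treat this as directional guidance: with 25 people per group, only big gaps clear the bar.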
If you execute these user tests correctly, you can minimize the negative impact of a small, non-blind, poorly targeted, opinionated sample of participants. These surveys can significantly reduce the chances of launching a campaign or website that doesn’t perform.