AI-powered surveys: Hyped or helpful?

Some wonder if AI-powered surveys are over-hyped vaporware that legitimizes bad science. Here’s how they really work.

For marketers with an interest in research, it’s a good time to start talking about AI-facilitated online surveys. What are those, exactly? They’re surveys that use machine learning to engage with respondents (think of a chatbot) and to manage much of the back-end data work involved in fielding and reporting (think of pure drudgery). We have had a great experience with GroupSolver; other examples include Acebot, Wizer, Attuned (specific to HR) and Worthix. There are more out there, and probably even more by the time you read this.

The good news is that a conversation about AI-facilitated online surveys is well underway. The bad news is that it’s rife with exaggerated claims. By distinguishing between hype and genuine promise, it’s possible to set some realistic expectations and tap into the technology’s benefits without overinvesting your time and research dollars in false promises (which seems to happen a lot with AI).

As Dr. Melanie Mitchell puts it in “Artificial Intelligence Hits the Barrier of Meaning,” AI is outstanding at doing what it is told, but not at uncovering human meaning. If that’s true, what possible use can AI have for online surveys? There are five themes that need to be addressed.

Reduced customer fatigue

One misconception is that AI surveys reduce fatigue because traditional surveys are too long. Not quite. Surveys are only too long if they are poorly crafted, but that has nothing to do with how the instrument is administered. Where AI does help is in creating an experience that is very comfortable for the respondent because it looks and feels like a chat session. The informality helps respondents feel more at ease and is well-suited to a mobile screen. The possible downside is that responses are less likely to be detailed because people may be typing with their thumbs.

Open-ended questions

There are three advantages to how AI treats open-ended questions. First, the platform we used takes that all-important first pass at coding a thematic analysis of the data. When you review the findings, the machine will have already grouped responses by theme. If you are using grounded theory (i.e., looking for insights as you go), this can give you real momentum toward developing your insights.
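To make the idea of a machine "first pass" concrete, here is a minimal sketch of that kind of grouping, using nothing more than word overlap between answers. This is an illustration, not how GroupSolver or any named platform actually works; real tools use far richer language models, and the threshold and sample answers below are invented.

```python
def tokens(text):
    """Lowercase word set, used as a crude similarity signature."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word overlap between two answers, from 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def group_responses(responses, threshold=0.3):
    """Assign each response to the first existing group it resembles,
    or start a new group if nothing is similar enough."""
    groups = []  # each group is a list of similar responses
    for r in responses:
        for g in groups:
            if jaccard(r, g[0]) >= threshold:
                g.append(r)
                break
        else:
            groups.append([r])
    return groups

answers = [
    "The checkout process was too slow",
    "Checkout was slow and confusing",
    "I loved the product selection",
    "Loved the great product selection",
]
for g in group_responses(answers):
    print(g)
```

Even this toy version shows the payoff: the researcher starts from rough buckets instead of a flat pile of verbatims, which is exactly where the "first pass" saves time.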

Secondly, the AI facilitates the thematic analysis by getting each respondent to help with the coding process as part of the survey itself. After the respondent answers “XYZ,” the AI tells the respondent that other people have answered “ABC,” and then asks whether that also matches what the respondent meant. This process continues until respondents have not only given their own answers but have weighed in on the answers of other respondents (or on pre-seeded responses you want to test). The net result for the researcher is a pre-coded sentiment analysis you can work with immediately, without having to spend hours coding responses from scratch.
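The agree/disagree loop described above can be sketched in a few lines. This is a hypothetical simulation, not any vendor’s actual flow: `run_consensus_round`, the seeded statements and the respondent data are all invented, and the `ask` callback stands in for the chat interface.

```python
from collections import defaultdict

def run_consensus_round(new_answer, candidate_answers, ask):
    """Ask one respondent whether each candidate statement also fits
    what they meant. `ask` is a callback standing in for the chat UI;
    it returns True or False. Returns the endorsed candidates."""
    endorsed = set()
    for c in candidate_answers:
        if c != new_answer and ask(c):
            endorsed.add(c)
    return endorsed

# Simulated survey: pre-seeded statements plus each respondent's own
# answer and the statements they would agree with.
seeded = ["Price is too high", "Hard to find support"]
votes = defaultdict(int)  # statement -> endorsement count
respondents = [
    ("Costs more than competitors", {"Price is too high"}),
    ("Couldn't reach a human", {"Hard to find support"}),
    ("Way too expensive", {"Price is too high"}),
]
for answer, would_endorse in respondents:
    picked = run_consensus_round(answer, seeded, lambda c: c in would_endorse)
    for c in picked:
        votes[c] += 1

print(dict(votes))  # pre-coded counts, ready for analysis
```

The tallies that fall out of the loop are the “pre-coded sentiment analysis” the paragraph describes: each statement arrives with a count of respondents who endorsed it, so no manual coding pass is needed to start the analysis.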

The downside of this approach is that you will be combining both aided and unaided responses. This is useful if you need to get group consensus to generate insights, but it’s not going to work if you need completely independent feedback. Something like GroupSolver works best in cases where you otherwise might consider open-ended responses, interviews, focus groups, moderated message boards or similar instruments that lead to thematic or grounded theory analyses.

The third advantage of this approach over moderated qualitative methodologies is that the output gives you not only coded themes but also a gauge of their relative importance. This gives you a dimensional, psychographic view of the data, complete with levels of confidence, that can be helpful when you look for hidden insights and opportunities to drive communication or design interventions.

Surveys at the speed of change

There are claims that AI helps drive speed-to-insight and real-time integration with other data sources. That is the ultimate goal, but it’s still a long way off. The obstacle isn’t connecting more data pipelines; it’s that survey data and behavioral data do very different things. Data science tells us what is happening but not necessarily why, because it isn’t meant to uncover behavioral drivers. Unless we’re dealing with highly structured data (e.g., Net Promoter Score), we still need human intervention to make sure the two types of data are speaking the same language. That said, AI can create incredibly fast access to the kinds of quantitative and qualitative data that surveys often take time to uncover, which does bode well for increased speed to insight.

Cross-platform and self-learning ability

There is an idea out there that AI surveys can access ever-greater sources of data for ever-broader richness of insight. Yes and no. Yes, we can get the AI to learn from large pools of respondent input. But, once again, without two-factor human input (from the respondents themselves and from the researcher), the results are not to be trusted, because they run a real risk of missing underlying meaning.

Creates real-time, instant surveys automatically

The final claim we need to address is that AI surveys can be created nearly instantaneously, or even automatically. Some tools generate survey questions on the fly, based on how the AI interprets responses. It’s a risky proposition. It’s one thing to let respondents engage with each other’s input, but quite another to let them drive the actual questions you ask. An inexperienced researcher may substitute respondent-driven input for researcher insight. That said, if AI can take some drudgery out of developing the instrument, as well as the back-end coding, so much the better. “Trust but verify” is the way to go.

So Picasso’s quip may still hold true: “Computers are useless. They can only give you answers.” But now they can make finding the questions easier, too.

Summary

The good news is that AI can do what it’s meant to do – reduce drudgery. And here’s some more good news (for researchers): There will always be a need for human intervention when it comes to surveys, because AI can neither parse meaning from interactions nor substitute for research strategy. The AI approaches that succeed will be the ones that most effectively facilitate that human intervention in the right way, at the right time.


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.


About The Author

Dr. Mark Szabo runs Curious, Critical Mass’ in-house research studio. He combines his research expertise with over 20 years in digital communications at the agency and client level. His doctoral research explored new ways to apply design-thinking approaches to extremely complex challenges, which comes in handy pretty much every day at Critical Mass. He has a wide range of vertical experience, including the financial, insurance, legal, higher-education, fundraising, automotive, luxury and CPG sectors.
