“The latest technological push in employment testing has been toward mobile internet testing,” write researchers Danielle D. King, Ann Marie Ryan, Tracy Kantrowitz, Darrin Grelle, and Amanda Dainis in a recent study published in the International Journal of Selection and Assessment. The study, Mobile Internet Testing: An analysis of equivalence, individual differences, and reactions, is timely because a major focus in talent acquisition is the shift to mobile everything. Many talent acquisition leaders believe that everything from the ATS process to pre-hire assessments to onboarding should be mobile. This study highlights important points that must be considered in that shift.
Overview
Mobile devices have become an important part of our everyday lives, from communicating and socializing to getting news and business alerts. The trend is taking hold even in the world of talent acquisition. But while this may sound like uncharted territory, the study makes a strong argument that this isn’t the first time talent acquisition has seen this kind of change. “The threats of new technologies for ability testing are that measures change as they are transferred to another test medium,” the study notes. But just as pre-hire assessments evolved from paper-and-pencil to computerized testing, and from onsite (proctored) testing to web-based (unproctored) assessment, mobile assessment may simply be the next evolution in the hiring process, not a reinvention of it. Understanding the benefits and drawbacks of mobile internet testing is critical to ensuring fairness and performance, just as it was during those earlier shifts.
There are tradeoffs. The study notes that candidates will face challenges using mobile internet testing (MIT) versus personal computer internet testing (PCIT), including “limited screen size, limited input methods, and motor challenges in navigation that may lead to poorer performance on mobile devices.” On the other hand, candidates gain better access to hiring assessments, and may feel more confident taking a pre-hire assessment on a mobile device if it is their primary means of accessing the internet. Theoretically, MIT could also help “futureproof” the hiring assessment process.
The study focuses on three key purposes:
- “To examine the measurement equivalence of different types of tests when completed using a mobile handheld device versus on a PC.” In other words, if a candidate were to take a test on a mobile device and then the same test on a PC, would the scores vary, or would they be more or less the same?
- The study set out “to examine whether attitudes and other individual differences influenced responses and reactions to a mobile device,” noting that “it is useful to investigate whether mobile device efficiency, mobile device anxiety, and mobile device attitudes influence responding.” Put plainly, the second goal was to understand how test takers felt about actually using mobile devices. They may like the idea of MIT, but will they actually enjoy using it, or will it make them feel anxious and inefficient?
- The study aimed to “examine reactions to MIT; while one might expect positive responses to ‘anytime, anywhere’ access to testing, the contexts in which one typically uses mobile devices and the differential display characteristics may be more distracting and, therefore, reactions may not be all that positive.” So, once candidates had actually used MIT, would they want to use it again, or was it too far removed from how they prefer to use their mobile devices?
To investigate these questions, the researchers conducted the study with 306 university students. Of these students:
- 76.7% were female
- 28.9% were ethnic minorities
- The average age was 24
- 49% typically accessed the internet on a PC
- 96% owned an internet-capable mobile device
- 65.6% reported spending more than two hours each day on their mobile device
The study divided the participants into two groups, with one group taking an MIT and the other taking a PCIT. After completing their respective assessments, participants returned three weeks later to take the other version. Three tests were used: a customer service orientation test, cognitive ability tests, and situational judgment tests.
Results from the Study
Measurement Equivalence
The study found that MIT and PCIT were equivalent for the supervisory situational judgment test but not for the cognitive ability test.
Two related factors may explain why the cognitive ability test differed across formats while the supervisory test did not: item length and the type of question being asked. The study notes that “item length would affect transparency across mediums and that this would be the cause of non-equivalence on the longer assessments, rather than something about the constructs ability to be assessed per se.” In other words, cognitive ability questions tend to be long, so they take up more screen real estate, something that is at a premium on a mobile device. A test taker might miss part of a question, misinterpret it because of scrolling back and forth to read it in full, or overlook an available answer. Furthermore, because the question is cognitively demanding, the candidate may have a harder time answering it on a device less suited to such tasks than a PC. As the study puts it, “this means that a particular individual has a greater chance of getting an item wrong that he or she should have gotten correct due to the format the test is taken in and not a random influence.”
Attitudes Towards a Mobile Device
The study also found that anxiety related to mobile testing affected both test performance and reactions to the tests. It recommended offering a desktop alternative whenever possible, because candidates who feel uncertain about a mobile assessment may see their scores suffer.
Reactions to Mobile Internet Testing
The study found that test takers reported significantly higher ratings of test ease and chance to perform when using a PC compared to a mobile device. They believe they will perform better on a desktop and feel much more comfortable doing so. However, the study did note that “any differences between [mobile and desktop] might dissipate as individuals gain familiarity with assessments delivered on mobile devices.”
Observations from the Study
- Talent acquisition leaders should consider the possibility that mobile internet testing may be less predictive, which can translate into performance issues among new hires. Careful consideration of the trade-off between “anywhere, anytime” testing and the assessment model that delivers the best available quality of hire is warranted.
- As the study found, the type of assessment being delivered via mobile matters to its success; delivering an assessment via mobile does not by itself make it effective.
- Talent acquisition leaders should consider the trade-off between PC Internet Testing and Mobile Internet Testing as outlined in this study. Today, the best approach is to offer the candidate both options and let the candidate decide based on their preferences and views.
- Much like the shift from paper-and-pencil to computerized testing, improvements in user interface, assessment design, and technology will eventually make mobile-based testing equivalent to PC-based testing across all current types of assessment forms.