Social distancing and the advantages of unsupervised assessments

Online, unsupervised talent assessments are a staple of IOP practice throughout the world. Indeed, as we face new challenges such as the COVID-19 pandemic and the social distancing it necessitates, it is more important than ever to provide organizations with solutions that can be implemented remotely, without the need for face-to-face contact.

While we have discussed new technologies like virtual interviewing and their application in times of social distancing, today we examine a far more established application: online, unsupervised assessments.

Despite the wide acceptance of online, unsupervised assessments, some IO practitioners may still feel uncomfortable delivering assessments without some form of oversight or proctoring. In today's article, we critically examine common concerns about unsupervised, online assessments and ask whether they are warranted. We also discuss how such assessments can help talent professionals enhance their service to business stakeholders.

The case against unsupervised assessments: Equivalence and cheating

Since the advent of online assessments, IOPs have investigated the equivalence of proctored and unproctored testing. On the question of score and measurement differences between these two modes of delivery, the research literature is unambiguous: there are no significant differences.

For instance, Joubert and Kriek (2009) found remarkable similarity between supervised and unsupervised occupational personality measures in a high-stakes, selection-oriented project. Even in studies that have shown lower validity for certain unsupervised assessments, their greater reach and access to a larger talent pool increase the utility of the assessment process as a whole, often offsetting any validity disadvantage (see, for instance, Tippins, 2009).

Of course, one of the primary reasons consumers of assessments worry about the equivalence of different modes of test delivery is the issue of candidate cheating.

It is worth noting that cheating in employment-based assessments is a global phenomenon and appears to be independent of whether an assessment is supervised. The factors that do influence attempts at cheating lie mostly within the individual, particularly that person's estimate of the likelihood of being caught.

But supervision by itself in no way guarantees that a candidate motivated to cheat will not attempt to do so (Tippins, 2009).

And for anyone who has actually completed an online ability or personality assessment, the reality of attempting to cheat is quite different from its perceived convenience. Given the timed nature of ability assessments, for instance, most if not all attempts at cheating have only one realistic outcome: the candidate running out of time.

Concerns about the release of test items into the public domain are legitimate, but they are largely addressed through technologies such as item banking and the creation of equivalent forms of the same assessment. Such measures assume a well-developed and well-maintained technological infrastructure, however, which mandates a careful appraisal of an assessment provider's capacity to implement them. At TTS, for instance, a key consideration when taking on a new assessment provider is its capability to use item banking and similar strategies to counter the release of test items into the public domain.
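The item-banking idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the item bank, difficulty bands, and form length are all invented for this sketch): each candidate draws a different form, sampled evenly across difficulty bands, so forms remain comparable in overall difficulty while no single fixed set of items is ever exposed.

```python
import random

# Hypothetical item bank: 20 items per difficulty band.
# Real banks are far larger and calibrated psychometrically.
ITEM_BANK = {
    "easy":   [f"easy_item_{i}" for i in range(1, 21)],
    "medium": [f"medium_item_{i}" for i in range(1, 21)],
    "hard":   [f"hard_item_{i}" for i in range(1, 21)],
}

def assemble_form(items_per_band=5, seed=None):
    """Draw a parallel test form: the same number of items from each
    difficulty band, so all generated forms are comparable overall."""
    rng = random.Random(seed)
    form = []
    for band, items in ITEM_BANK.items():
        # Sample without replacement within each band.
        form.extend(rng.sample(items, items_per_band))
    rng.shuffle(form)
    return form

# Two candidates receive different forms of comparable difficulty.
form_a = assemble_form(seed=1)
form_b = assemble_form(seed=2)
```

Real item banks would of course rely on proper psychometric calibration (for example, item response theory parameters) rather than coarse difficulty bands; the point is simply that no two candidates need ever see the same fixed form, which limits what a leaked item set is worth.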

In conclusion, the question of cheating and equivalence is a complex one, but not without good, research-based answers. And when evaluating such research, it seems clear that concerns around equivalence of unsupervised, online assessments ought to be laid to rest.

And as long as robust, best-of-breed assessments are used that offer item banking and other technological solutions to item disclosure, end-users of unsupervised assessments can be assured of the integrity of their testing process.

The power of online assessments

If online, unsupervised assessments are reasonably safe from cheating and equivalent to their supervised counterparts, what benefits do they hold for IO practitioners?

As already mentioned, unsupervised assessments allow recruiters and talent professionals greater access to talent pools. Employers need not limit their search for talent to only those areas where testing venues are available.

Instead, the entire internet-enabled world is a potential talent pool.

Other key benefits of online, unsupervised assessments are:

  • Remote assessment capacity. More now than ever before, employers need selection methods that are accurate and scientifically defensible but won’t require face-to-face contact. Given the requirements placed on all of us to contain the spread of COVID-19, this capacity is a tremendous advantage for any business concerned about the continuity of its selection and development processes.
  • Cost benefits. Employers need not spend additional budget on hosting candidates, catering, maintaining testing infrastructure, or hiring invigilators. These savings are, of course, also passed on to clients of outsourced services that rely solely on online, unsupervised assessments.
  • Candidate experience. Study groups commissioned by SIOP have found that candidates report positive experiences with online, unsupervised assessments (Walker & Moretti, 2018). This stands to reason: such assessments allow candidates to choose the time and place of their assessment, with no need for the often-inconvenient and inefficient time away from home or office that supervised assessments require. Unsupervised assessments also eliminate the risk of candidates vying for the same position inadvertently bumping into each other at a test venue, along with similar reputational risks arising from error-prone supervision logistics.
  • Efficiency. Supervising assessments tends to be inefficient on several levels. For one, supervision requires personnel who could be deployed productively elsewhere. And the time candidates waste traveling to and from assessment venues is largely avoided with online, unsupervised testing.
  • Turnaround time and time-to-hire. While both supervised and unsupervised online assessments allow for immediate capture and scoring of assessment results, unsupervised assessments have an additional benefit: Because assessment links can instantly be sent to candidates as soon as the client initiates a project, the turnaround time for reporting is decreased. This in turn helps to reduce the time taken to hire talent, which is an increasingly important metric used to evaluate recruitment functions throughout organizations. Supervised assessments, being dependent on candidate (and invigilator) availability, offer no such benefits.

Final Thoughts

In this article, we examined the advantages that online, unsupervised assessments hold for IOPs given the realities of social distancing and talent selection best-practices in general.

It seems clear from the available research that concerns about cheating and equivalence in unsupervised assessments are largely unfounded. While such concerns may have been relevant in the early days of online testing, modern technology and the changing landscape of assessment end-users are bound to normalize this modality completely.

And once the advantages of unsupervised assessments are compared to possible disadvantages, a clear picture emerges: Organizations and IOPs can benefit immensely from adopting online, unsupervised assessments as their go-to solution for talent selection and development assessments.

Here at TTS, we have been pioneers of online, unsupervised assessments. Our confidence in this solution is borne out by robust, replicated scientific data on the validity of unsupervised assessments. Our client organizations have also benefited from this approach, both in terms of cost-saving and efficiency of delivery.

If you are interested in enhancing your assessment processes by moving away from inefficient supervised approaches without sacrificing accuracy, why not contact us at


References

Joubert, T., & Kriek, H. J. (2009). Psychometric comparison of paper-and-pencil and online personality assessments in a selection setting. SA Journal of Industrial Psychology, 35, 1–11.

Tippins, N. T. (2009). Where is the unproctored internet testing train headed now? Industrial and Organizational Psychology, 2, 69–76.

Walker, J. M., & Moretti, D. (2018). Recent trends in preemployment assessment. Visibility Committee of the Society for Industrial and Organizational Psychology, OH.