Three key principles of assessment best practice

At TTS, we see our central role as helping our clients make better talent decisions. To do so, we have to stay on top of the latest developments and science in the IO profession. Here are three crucial principles we hold dear in delivering the very best advice to our clients:

Principle #1: Algorithms beat subjective judgements

For years it was the responsibility of IO Psychologists to use their subjective insights to write integrated reports based on professional judgement. At first glance, this seems reasonable: surely IO Professionals, who have studied for many years, should bring their expertise to bear on judging candidates’ fit to a role? For much of the early history of IO Psychology, this was indeed the consensus view: that only professionals should (and must) make the final judgements on assessment results.

However, numerous studies have since shown a very different reality. As it turns out, when professional judgement (also called “clinical judgement”) is compared to more mechanical, algorithmic decision-making (i.e. applying a stable, unchanging selection rule based on set calculations), algorithmic judgements tend to be far more accurate.

This finding becomes even clearer when judgements are compared over time and across large data sets. The upshot: if IO professionals adopt more algorithmic approaches to judging assessment results, they will not only become more accurate over time, but will also outperform more subjective, clinical decision-making practices.

More recent meta-analytic studies (e.g. Kuncel et al., 2013) underline the superiority of algorithmic judgements of data. Subjective judgements fail because they are inconsistent, rely on factors other than key data points (e.g. favourite competencies, clinical interpretation, the mood of the interpreter), and cannot reliably replicate judgements across time.

At TTS, we incorporate our expertise in understanding competencies, the predictive power of assessments, and the latest research in IO Psychology into decision algorithms that can predict a candidate’s match to a specific role. In doing so, we avoid the dangers and pitfalls of clinical judgements, and consequently help our clients make better quality decisions in the long run.
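To make the idea concrete, here is a minimal sketch of what a mechanical judgement can look like. The competency names, weights, and scores below are purely illustrative, not TTS’s actual model: the point is simply that a fixed, unchanging rule judges every candidate by exactly the same calculation.

```python
# Hypothetical example of a mechanical (algorithmic) judgement.
# Competency names and weights are illustrative only.
WEIGHTS = {
    "cognitive_ability": 0.40,
    "conscientiousness": 0.30,
    "structured_interview": 0.30,
}

def mechanical_score(candidate_scores: dict) -> float:
    """Combine standardised (0-100) assessment scores with fixed weights.

    Because the rule never changes, every candidate is judged by the
    same calculation -- the consistency that clinical judgement lacks.
    """
    return sum(WEIGHTS[k] * candidate_scores[k] for k in WEIGHTS)

candidates = {
    "A": {"cognitive_ability": 82, "conscientiousness": 64, "structured_interview": 70},
    "B": {"cognitive_ability": 75, "conscientiousness": 90, "structured_interview": 60},
}
# Rank candidates by the fixed rule, best first.
ranked = sorted(candidates, key=lambda c: mechanical_score(candidates[c]), reverse=True)
```

Real decision algorithms are of course richer than a weighted sum, but even this simple rule removes mood, favourite competencies, and other inconsistencies from the comparison.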

Principle #2: Start with the correct filtering methods

When looking for the best candidates in a recruitment pool, an important puzzle to solve is how to pare down a large group of applicants to a more manageable number. While some recruiters use CVs or references to reduce their initial talent pool, the science of selection argues against such approaches.

Why? Studies show time and again that CVs and reference checks are remarkably poor predictors of success in any particular job. They often either under- or over-sell a candidate’s fit to a role, and are replete with subjective traps that can sway the decision-maker, such as overly polished (or poor) design, irrelevant information, and formatting choices.

Understanding the risk such methods hold, some companies have opted for interviews as an initial screening method. There are two problematic assumptions built into this approach, however:

  • The assumption that interviews are good predictors of job fit
  • The assumption that interviews are relatively cheap methods of selection and thus well suited to initial screening of applicants

Research again paints a very different picture. When IO Psychologists investigate the accuracy of interviewing in predicting job performance, results are often disappointing.

Traditional manager or HR-led interviews can rarely distinguish between potentially successful and unsuccessful candidates. The primary reason: interviews suffer from all the many interpersonal biases present in any social encounter. Interviews also rely on subjective rather than algorithmic judgements, with predictable results (see Principle #1).

Moreover, the assumption that interviews are cheap or affordable remains largely unexamined in most organisations. Few recruiters actually calculate the true time and opportunity costs of a panel interview, thus perpetuating the myth that interviews have no (or relatively little) monetary cost.

The outcome of using the wrong screening tools early in a selection process is simply this: good candidates may be screened out and lost to competitors. Conversely, poor-fit candidates may well be selected, risking bad hiring decisions or wasting money on assessing applicants who were never going to be fit for purpose.

So, if CV checks, references, and interviews are poor initial filters in a talent selection process, what is a better alternative?

IO Psychologists have long known that ability tests are cost-effective and highly predictive alternatives to these non-ideal methods. For instance, Situational Judgement Tests, work samples, cognitive measures, and circumscribed behavioural measures such as the Saville Work Strengths are all low-cost, high-prediction alternatives to interviews.

It is important to keep in mind that the adverse impact of such measures needs to be managed, especially if alternative criteria such as equity are also important. Well-established strategies such as parallel lists, banding, and others can be used by IO Practitioners to ensure not only that equity targets are achieved, but that the selection process retains scientific validity and rigour.
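As one illustration of these strategies, score banding treats candidates whose scores fall within a defined band as equivalent, so that secondary criteria (such as equity goals) can break ties without abandoning the ranking altogether. The sketch below is a simplified, hypothetical version: the band width and scores are illustrative only (in practice the width is often derived from the test’s standard error of measurement).

```python
# Illustrative sketch of score banding: candidates whose scores fall
# within one band of the band's top score are treated as equivalent.
BAND_WIDTH = 5.0  # hypothetical width, e.g. derived from the SEM

def band_scores(scores: dict, width: float = BAND_WIDTH) -> list:
    """Group candidates into bands, working down from the highest score."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    bands, current, top = [], [], None
    for name, score in ordered:
        if top is None:
            current, top = [name], score
        elif top - score <= width:
            current.append(name)  # within the current band: equivalent
        else:
            bands.append(current)  # close the band, start a new one
            current, top = [name], score
    if current:
        bands.append(current)
    return bands

scores = {"A": 88, "B": 85, "C": 79, "D": 78}
bands = band_scores(scores)
# A and B fall within 5 points of 88, so they form one band; C and D the next.
```

Within each band, a practitioner is then free to select on equity or other criteria, knowing that the candidates are statistically indistinguishable on the measure itself.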

So, using these affordable, quick, and efficient assessment procedures as mass screening devices will yield a far more reliable and higher-potential pool of candidates than interviews ever can.

When robust psychometric assessments are used earlier in the screening process, and interviews are kept for the final stages, organisations have the privilege of making decisions about the best possible candidates, rather than the least-worst.

Principle #3: Getting close to the job

Less sophisticated assessment practices often measure multiple generic, non-job-specific capacities or competencies in an attempt to get an “overall” or “summative” view of a candidate. The reasoning, perhaps, is that if we measure as many competencies as possible, we are unlikely to make bad talent decisions.

But again, our experience shows otherwise. In the modern world of work, where jobs are extremely complex (not to mention created on an almost monthly basis as new technologies and trends enter the workplace), it is more important than ever to understand what good looks like in any given role.

Without a thorough, well-thought-out profiling process, in which all the elements (i.e. experience, potential, capacity, foundational psychological dynamics) are well described and matched to a particular job, assessment becomes a shot in the dark.

If we truly understand the full width, breadth, and depth of a job, we can accurately match critical competencies with scientifically credible methods of measuring a person’s likely capacity and potential in them. It is at this juncture that we add the most value to our clients, and where making better talent decisions becomes not only possible but highly likely.
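One simple way to picture this matching step: a role profile specifies a target level for each critical competency, and a candidate’s fit is how closely their measured levels track those targets. The competencies, levels, and scoring rule below are hypothetical, a sketch of the idea rather than an actual profiling method:

```python
# Hypothetical profile-matching sketch: the role profile sets a target
# level (1-10) per critical competency; fit is the negative mean
# absolute gap from those targets, so 0 would be a perfect match.
role_profile = {"analysis": 8, "influence": 6, "resilience": 7}

def profile_fit(candidate: dict, role: dict) -> float:
    """Return a fit score; values closer to 0 indicate a closer match."""
    gaps = [abs(role[c] - candidate.get(c, 0)) for c in role]
    return -sum(gaps) / len(gaps)

candidate = {"analysis": 7, "influence": 6, "resilience": 9}
fit = profile_fit(candidate, role_profile)  # gaps of 1, 0 and 2 average to 1
```

The design point is that the job profile, not a generic competency checklist, drives what gets measured and how candidates are compared.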

Final thoughts

To summarise: IO Practitioners who want to help their business leaders make more effective, predictive, and credible decisions about talent would do well to heed three key principles of best practice:

  1. Use mechanical judgements, incorporating scientifically defensible algorithms, to decide among candidates. Avoid the temptation of more subjective, clinical forms of decision-making.
  2. Make sure that your early talent filters identify the very best candidates to work with. Avoid methods with low predictive power, such as CV screening and interviews, which are likely to erroneously screen out good talent or select poor fits.
  3. Get close to the job. Avoid vague or non-existent job descriptions. Measure the essential competencies that predict success in a role for maximum predictive power.

If you’d like to find out more about assessment best practices, or how TTS can help your organisation make better talent decisions, why not drop us a line at: