When applying Artificial Intelligence (AI) to assessments and talent challenges, IO Professionals need to take several important considerations into account.
Last year, our IO trends survey showed that Artificial Intelligence (AI) was seen as the most important emerging trend among our respondents. IO Professionals have identified multiple possible applications of AI to the discipline, ranging from helping to score interviews to mining large data sets for new talent insights.
Despite these potential benefits, applying AI to talent challenges is not without risk. In today’s article we discuss 4 important caveats when using AI in talent management.
1. Always consider the science
Using machine learning, data scientists are able to uncover surprising and often tantalising insights and connections within large data sets.
The ability of AI to process massive amounts of data in parallel is a powerful tool for IO Professionals who want to enhance their capacity to find value in complex data.
However, we must never lose sight of the fact that IOPs are the final arbiters of psychological best practices in the talent value chain. In other words, data insights and the application of AI to talent decision-making should serve psychological science, not the other way around.
When these considerations are overlooked, organisations and hiring managers run the risk of mistaking correlation for causation, trusting automated judgements that are not aligned with scientific best practice, and even reinforcing existing biases.
2. Prioritise transparent use of AI
When applying AI to talent decision-making, there are potentially thousands of data points that could be considered in reaching a conclusion. Among the important considerations when using AI is a clear awareness of the analysis methods an AI solution employs.
Some proponents of AI promote the use of so-called “black box” methodologies, where the decision of which data points to process and how such processing ought to proceed is left outside of human review. However, when the outcome of such judgements can influence whether someone gains employment, a black box approach to AI can be deeply problematic.
Not only does this approach invite claims of unfair treatment, but it also does not allow IO Practitioners to verify whether IO Psychology best practices have been applied in arriving at conclusions (e.g. accurate job profiling; considering the inherent requirements of the job; compensatory behaviours and abilities, etc.).
Instead of a black box approach to AI, a more transparent, “glass box” approach may be more appropriate for use in talent decision-making. TTS’s assessment partner, cut-e (an Aon company), has long argued in favour of more transparency in using AI in assessments.
A glass box approach allows greater oversight of how the AI reaches its conclusions and of the data that was considered (and ignored), while keeping the IO Professional in the position of final decision-maker and reviewer of all AI-mediated decisions.
Indeed, as Aon’s Evan Theys argues:
Simply being able to use an automated decision process doesn’t remove the company’s responsibility for ensuring that it fairly assesses job-relevant skills. The complex algorithms used in AI can make selection decisions difficult to justify when their reasoning isn’t explained. And if your selection decisions can’t be easily explained, they are most easily challenged by applicants in court.
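One way to picture the difference between a black box and a glass box is a scoring model whose weights can be inspected directly. The sketch below is purely illustrative and assumes a simple weighted-sum candidate score; the predictor names and weights are hypothetical, not drawn from any real assessment product:

```python
# Illustrative glass-box sketch: a candidate score built as a transparent
# weighted sum, so every contribution can be reviewed by an IO Professional.
# Predictor names and weights are hypothetical, not from any real model.

WEIGHTS = {
    "structured_interview": 0.50,  # job-relevant, defensible predictors
    "cognitive_ability": 0.35,
    "work_sample": 0.15,
}

def score_candidate(predictors: dict) -> tuple:
    """Return the overall score plus each predictor's contribution,
    making the reasoning behind the decision fully inspectable."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in predictors.items()
    }
    return sum(contributions.values()), contributions

total, breakdown = score_candidate(
    {"structured_interview": 0.8, "cognitive_ability": 0.7, "work_sample": 0.9}
)
print(f"Overall score: {total:.3f}")
for name, contribution in breakdown.items():
    print(f"  {name}: {contribution:.3f}")
```

Because every weight and contribution is visible, the IO Professional can verify that each predictor reflects an inherent requirement of the job before any decision is actioned, which is exactly the oversight a black box denies.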
3. Control for unreliable methods and techniques
Just because AI can be a useful tool in enhancing talent decision-making does not mean that it is necessarily perfect or without bias.
In fact, in early applications of AI to facial recognition, it was often found that AI mimicked its human creators in their biases and prejudices. The question of how to use AI to reduce bias is an active topic of research for many scientists in the AI and machine learning field, but it is far from resolved. In the field of facial recognition and analysis especially, the jury is still out on whether AI is an appropriate tool to use.
Therefore, when AI is used to augment talent decision-making, the IO Professional must play an active role in understanding how decisions have been arrived at.
Important questions to ask include:
- Are our AI models including variables and relationships that could create bias in the hiring process against a certain group?
- Are we using AI processes that are well-established (e.g. content analysis of natural language) or are we relying on less reliable (and potentially harmful) techniques (e.g. facial recognition and analysis)?
- Was the data obtained and used in a way that is fully understood (and agreed-on) by the candidate and hiring managers?
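The first question above can be made concrete with the “four-fifths rule”, a widely used heuristic in selection fairness auditing: if one group’s selection rate falls below 80% of the highest-selected group’s rate, the process warrants closer review. The sketch below is a minimal illustration with made-up group names and counts, not real hiring data:

```python
# Hypothetical sketch: auditing an AI-assisted selection process with the
# four-fifths rule. The applicant and selection counts are illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(rate_focal: float, rate_reference: float) -> float:
    """Ratio of the focal group's selection rate to the reference group's."""
    return rate_focal / rate_reference

# Illustrative numbers: 30 of 100 applicants selected in group A,
# 12 of 80 selected in group B.
rate_a = selection_rate(30, 100)  # 0.30
rate_b = selection_rate(12, 80)   # 0.15

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"Adverse impact ratio: {ratio:.2f}")

# A ratio below 0.80 is commonly treated as evidence of potential adverse
# impact and should prompt the IO Professional to review the model's inputs.
if ratio < 0.80:
    print("Potential adverse impact: review the model's variables.")
```

Running this kind of check routinely, rather than only when a complaint arises, keeps the IO Professional in the active oversight role the section describes.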
4. Educate AI end-users
Perhaps the most important consideration for IO Professionals using AI is overcoming resistance. Reluctance to leverage the benefits of AI inside organisations often results from a lack of knowledge about how AI works.
A necessary first step before implementing any AI talent solution is therefore to ensure that your end-users (managers and candidates alike) are well informed.
For instance, a recent study conducted by Aon showed that candidates who were well informed about how AI works were as likely to trust AI-mediated assessment processes as they were to trust those solely controlled by humans.
As organisations generate ever more data and become ever more complex in structure and function, the need to incorporate AI into business processes will rise.
For talent management professionals and IO Psychologists alike, the potential benefits of using AI to augment current practices are very attractive.
We believe that using AI in responsible ways, while always respecting the central role that IOPs must play in talent decision-making, is the way forward.
If you’re interested in more of TTS’s thoughts on AI technologies and the future of our profession, why not reach out to us at: firstname.lastname@example.org?
Source: Theys, E. (2019). How to Avoid the Pitfalls of AI. Talent Acquisition Excellence, July 2019.