Ethical perspectives on AI and talent assessments: What are the hazards?

In the world of talent assessments and management, we are still exploring the full implications of Artificial Intelligence (AI) and large language models like ChatGPT for our discipline.

These implications are both practical (i.e. how can AI help or hinder our work?) and ethical (i.e. how can we use AI more responsibly?) in nature.

In previous articles, we have addressed both these aspects, for instance, how IO Psychologists can use technologies like ChatGPT in more ethical, responsible ways.

Recently, Dan Hunt, of TTS’s best-of-breed product partner, Saville Assessments, examined similar questions through the lens of recent developments in large language models and machine learning.

In this article, we summarize and expand on those thoughts.

AI and ChatGPT will influence how we work

It may seem too obvious to mention, but groundbreaking technologies such as AI and large language models like ChatGPT are here to stay, and they are likely to find ever more avenues of application as they are refined and improved.

It is worth noting, however, that such improvements are happening at a vastly more rapid pace than many of us are used to. Gone are the days when we had to wait years for new software versions to roll out. Now, new AI models and large language models like ChatGPT can be released within months, partly because most internet users (around 5 billion, by conservative estimates) can potentially serve as beta testers.

This characteristic of rapid deployment and improvement means that if AI does not yet influence the particular work you do, it soon will.

What about potential AI hazards?

As Hunt points out, the introduction of any new technology will inevitably encourage both negative, pessimistic views of its consequences (e.g. AI will make us all redundant) and more optimistic ones (e.g. AI will remove boring work from our diaries).

Irrespective of one’s intuitive position on this spectrum, it is important to recognize that, like all new technologies, AI is likely to bring pitfalls and hazards along the way. Some of these will cause the decline of old industries and practices (much as digital industries supplanted analog ones in the 70s and 80s) while opening up new opportunities.

For instance, according to Hunt, British Telecom is planning to reduce its workforce by as much as 55,000 by 2030, including 10,000 jobs directly affected by AI. PwC has also suggested that while AI could boost the world’s economy by as much as $15 trillion by 2030, 30% of jobs could be at risk of automation by the mid-2030s, with 44% of workers with low education at risk in the same timeframe.

But once we all recover from an initial period of adjustment and disruption, how will AI likely shape our workplaces and work?

An encouraging development within the world of AI and large language models is a strong focus on the ethics of using AI to enhance and sometimes replace work. Ethical concerns about the growth of AI are far more central to many conversations among industry leaders than ever before.

Most, if not all, of the product developers, such as OpenAI, are actively working with governments and industry to ensure more responsible use of their technologies (e.g. recent initiatives to exclude artists’ work from the training data of large language models and AI image generators).

AI, ChatGPT and talent assessments

The fact that many AI companies are asking governments to introduce regulations and controls suggests that this technology is different from many that came before it.

Hunt argues that it is nonsensical (and futile) to consider just one argument about the utility and desirability of AI in the workplace. Instead, it is more useful to consider the multitude of potential effects that AI will have on human life and, more specifically, on work.

In the sphere of talent assessments, it may be productive to examine questions such as whether AI should be used to screen applications and reject candidates, and how we can ensure that such tools do not inherit our biases and are not used without appropriate human supervision and recourse.

More complex ethical dilemmas arise when AI (and human) evaluators scrutinize information outside of the application itself, such as candidates’ social media profiles. It is worth noting, though, that such dilemmas are not inherent to AI; they concern the basic ethicality of such practices in the first place.

This final point is an important one. As Hunt also argues, our guidelines and debates around the use of AI and ChatGPT in the workplace will inevitably follow paths similar to our debates about how people should behave at work.

At its core, therefore, the debate on how AI will and should shape our work is a debate about how we, as the inventors of this technology, believe it ought to be used.

It is therefore up to us to ensure that AI and similar technologies are steered toward avoiding harm rather than creating new problems for future generations to solve.

At TTS, we look forward to partnering with our clients and product providers in the exciting and sometimes complex world of AI and talent assessments.

A key insight, one which Hunt shares, is that we can only navigate such complexities by being active partners in the conversations and debates about AI in the workplace.

If you are interested in the implications of AI and how these might shape your use of assessments, why not speak to us at info@tts-talent.com?

Source: Hunt, D. (2023). It’s still one day. Saville Assessments research report.