Embracing generative AI in people assessments: Reflections from the 2025 TTS Client Conference

Every profession encounters periods of acceleration where cherished assumptions shift, new technologies emerge, and industry leaders must rethink the way things are done.

For IO Psychologists, HR decision-makers, and talent professionals, generative AI has created such a moment. At this year’s TTS Client Conference, the conversations, debates and demonstrations showed that AI is undeniably reshaping our field, but its impact is only as meaningful as the human judgement that guides it.

In this article we provide an overview of the most important themes and insights to emerge from the conference.

AI as a predictive system

The conference opened with a keynote from Prof Richard Landers, the John P. Campbell Distinguished Professor of Industrial-Organizational Psychology at the University of Minnesota.

Prof Landers addressed one of the most pervasive misconceptions surrounding AI: that such technologies are a form of cognition. It is therefore incorrect to view AI and large language models like ChatGPT as capable of actual thinking. Instead, they are best described as prediction engines trained on vast amounts of human data.

Once this key distinction between AI and human cognition is recognized, it becomes important for IO Practitioners to understand what AI’s predictive capacity is best suited for, and where it is most likely to fail.

Given that generative AI is fundamentally probabilistic, not intentional, it stands to reason that end-users of such technologies should identify the conditions under which its predictions will enhance talent decision-making. Conversely, it is important to be aware of the conditions under which using AI in this way may distort or even compromise it.

This distinction was a primary theme throughout the conference, and informed the subsequent sessions.

Cheating, integrity and AI misuse

To the point above, few topics in modern assessment have generated as much debate as the potential impact of AI on assessment integrity, especially as it pertains to test-taker cheating.

A symposium on safeguarding assessments presented findings that contributed to a clearer understanding of this debate.

The substantial data reported showed no systemic inflation of scores across assessments, and nothing near the collapse of validity that some alarmist commentators have predicted. Instead, the emerging reality of how AI may be affecting test integrity is more nuanced.

Cheating at scale requires four conditions to align:

  1. A vulnerable assessment format
  2. Access to AI at the point of testing
  3. Motivation to cheat
  4. Sufficient technical skill to use AI effectively

This constellation of conditions is far less common than the public discourse suggests. More importantly, small behavioural interventions, such as honesty contracts, can significantly reduce dishonesty. This is no doubt due, in part, to the persistent influence of human factors such as positive peer pressure and social expectation, which shape how people respond to the temptation of using AI to cheat.

Put differently, AI changes the tools available to those who want to behave dishonestly, but it does not erase human judgement, ethics or behaviour.

The video interview paradox: Mitigation versus fairness

Research on retake limits in asynchronous video interviews was also presented at the conference. A common concern is that generative AI can be used to create “model” answers to asynchronous video interview questions, which candidates then parrot in their responses. A frequently used mitigation strategy is to limit the number of opportunities candidates have to re-record their answers.

When candidates were given fewer opportunities to re-record their responses, their interview scores dropped sharply. At first glance, this looked like evidence that the mitigation strategy was working: fewer retakes led to less AI-enabled cheating.

But a deeper qualitative review of candidate experience revealed a more complex reality.

With fewer retakes, candidates were more likely to become:

  • anxious
  • less structured
  • reactive
  • more focused on impression management than authentic communication

Given this additional data, it seems clear that IO Practitioners need to be cautious when designing mitigation strategies for potential cheating behaviours.

Interventions designed to prevent misuse can unintentionally compromise fairness and candidate experience. This insight pushes the debate beyond compliance toward a more nuanced reading of the problem. Safeguards matter, but not at the expense of human dignity and equitable assessment conditions.

AI-enabled assessment centres

While many sessions highlighted conditions where AI may be either misapplied or misunderstood, there was also a powerful movement towards demonstrating what becomes possible when AI is used purposefully.

At the conference, TTS showcased a new prototype AI-supported assessment centre. In this assessment, candidates interact directly with a secure ChatGPT-based assistant as part of the exercise itself.

Rather than restricting AI, the design incorporates and encourages the use of generative AI. In this way, the behaviour of interacting with AI becomes the dependent variable, and reveals important candidate capacities.

In the data reported, candidate behaviour varied widely:

  • Some used AI as a “cognitive partner” and set it tasks such as querying, refining, validating, and iterating.
  • Others relied on AI uncritically and accepted the AI’s first answer without critical evaluation, thus outsourcing their own judgement entirely to the AI assistant.

A conclusion one can draw from the above is that how someone uses AI is itself a competency. This emerging capability is not merely digital literacy. Instead, it reflects judgement, curiosity, critical thinking, learning agility and metacognitive awareness: capabilities that have long been central to talent assessment, but historically difficult to observe directly.

In this way, AI can enhance the ecology of assessment, providing behavioural signals that traditional methods find hard to access.
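To illustrate how interaction with AI can itself become a measurable behaviour, consider the minimal Python sketch below. It is not TTS’s implementation: the transcript schema, the turn labels and the derived signals are all hypothetical, and a real assessment centre would validate any such signals against assessor ratings before using them.

```python
from dataclasses import dataclass


@dataclass
class Turn:
    """One candidate message in the exercise transcript (hypothetical schema)."""
    text: str
    kind: str  # "query", "refine", "validate", or "accept" -- illustrative labels


def interaction_signals(transcript: list[Turn]) -> dict[str, float]:
    """Derive simple behavioural signals from a candidate-AI transcript.

    The signal names are invented for illustration only.
    """
    n = len(transcript)
    refinements = sum(t.kind == "refine" for t in transcript)
    validations = sum(t.kind == "validate" for t in transcript)
    # A candidate who takes the assistant's first answer and stops produces
    # a very short transcript with no refinement or validation turns.
    uncritical = n <= 2 and refinements == 0 and validations == 0
    return {
        "turns": float(n),
        "refinement_rate": refinements / n if n else 0.0,
        "validation_rate": validations / n if n else 0.0,
        "uncritical_acceptance": float(uncritical),
    }


# Example: a candidate who queries, refines, and then validates the output,
# i.e. uses the assistant as a "cognitive partner".
transcript = [
    Turn("Summarise the market data for me", "query"),
    Turn("Now break the summary down by region", "refine"),
    Turn("Where does the Q3 figure come from? Check it.", "validate"),
]
print(interaction_signals(transcript))
```

Even this toy example makes the design choice clear: the object of measurement is not the AI’s output but the candidate’s pattern of engagement with it.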

AI-enabled success profiling

Continuing the theme of AI-enablement, research comparing AI-generated success profiles with those developed by subject-matter experts was also presented.

The results were surprising: correlations were far higher than anticipated. When generative AI and human raters start with the same input data for success profiling, their end-products converge.

The key differences, however, lie not in profile accuracy but rather in:

  • contextual judgement
  • organizational ownership
  • stakeholder alignment
  • ethical interpretation

In other words, AI can accelerate profile development, but it cannot replace the socialization and decision-making processes required to embed those profiles in real organizations.

AI is therefore an efficiency tool for the experts who construct success profiles, but IO Psychologists, HR Practitioners, and line managers ought to remain the primary interpreters of such profiles, providing context and ensuring applicability.
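As a concrete illustration of what such convergence can look like in practice, the short sketch below correlates two sets of competency-importance ratings. Everything in it is hypothetical: the competency names and ratings are invented, and this is not the methodology of the study reported at the conference.

```python
# Illustrative only: invented competencies and ratings, not conference data.
from statistics import correlation  # Pearson's r; requires Python 3.10+

competencies = ["analysis", "collaboration", "resilience", "commercial acumen"]
sme_ratings = [4.5, 3.8, 3.2, 4.1]  # hypothetical SME importance ratings (1-5)
ai_ratings = [4.3, 4.0, 3.0, 4.4]   # hypothetical AI-generated ratings (1-5)

r = correlation(sme_ratings, ai_ratings)
print(f"Convergence across {len(competencies)} competencies: r = {r:.2f}")
```

In this toy example the two profiles agree closely (r ≈ 0.91), which is the kind of statistical convergence the research described; what the number cannot capture is the contextual judgement and stakeholder alignment listed above.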

AI-enabled employee experience measurement

Another way in which AI has been embraced within the talent sphere was illustrated by a case study in which a large telecommunications client used the AI-enhanced technologies of Welliba, TTS’s employee experience product partner.

By analysing publicly available, unfiltered digital records, the client accessed candid insights into employee sentiment that traditional surveys often obscured. This method revealed critical issues, including:

  • Frustration points
  • Cultural inconsistencies
  • Communication gaps
  • Emerging risks

Such insights demonstrated how AI can go beyond traditional measures of employee engagement and experience. In addition, the data gathered in this way proved valuable precisely because HR end-users could apply their unique contextual understanding to it.

Technology therefore broadened the flow of information while people provided the meaning.

Redefining competence in an AI-augmented workplace

An interesting and crucial consequence of increasingly AI-enabled processes and tools is that traditional indicators of proficiency may need to be redefined.

For instance, as more work tasks become AI-assisted, such as drafting communications, structuring presentations, or synthesising information, lower-skilled performers may see large gains from AI support, while high performers will see smaller increments.

This has two implications:

  1. Execution itself becomes a weaker differentiator of true proficiency
  2. Judgement becomes a stronger differentiator of competence

Competence therefore increasingly intermingles with the ability to evaluate AI outputs critically and to integrate such information responsibly. In other words, knowing when not to use AI, and when to balance speed of delivery (prioritizing AI) with rigour (prioritizing AI plus human judgement), becomes critical.

As one conference delegate aptly summarized:

“AI makes it easier to look smart, but harder to actually be smart.”

A call to action: Who shapes the AI-enabled future?

In summarizing the above discussions, presentations and debates, Prof Landers captured an overarching theme:

“If IO Psychologists and HR Leaders don’t guide how AI is adopted, others will. And they may not share our values.”

Given that technologies such as generative AI inherit human biases, they are seldom truly neutral. Moreover, the use of AI inside organizations will reflect the values embedded in its design, deployment and governance.

For IO Psychology, a discipline that has spent decades building scientific foundations for fairness, validity, bias mitigation and ethical practice, AI-enabled talent processes and assessments present both promise and risk. The role of the IO Psychologist therefore needs to expand to acknowledge and embrace this new reality.

Final thoughts

Across the conference sessions, one overarching message crystallized:

The future of talent assessment and decision-making will not be defined by AI alone, but by how thoughtfully practitioners choose to use it.

IO Psychologists and HR Practitioners will shape this future through:

  • Evidence-based design
  • Ethical governance
  • Culturally informed adoption
  • Continuous validation
  • Courageous leadership

Generative AI is powerful, but it is also context-dependent and entirely reliant on the values of those who deploy it.

The future of talent decision-making and measurement will therefore not emerge from either AI or human experts alone, but from the partnership between the two, each amplifying the strengths of the other.

If you are interested in how TTS is leading the way in AI-enabled talent processes, or would like us to help your organization keep pace with such developments, contact us at info@tts-talent.com.