The ethical and practical implications of generative AI in assessment centres

In a thought-provoking session at the ACSG 2025 conference, TTS’s Dr. Danie Oosthuizen tackled the rapidly emerging topic of generative AI in assessment centres, exploring not only its practical applications but also the ethical dilemmas it poses. This article summarises the key points.

Promise and potential: AI in talent assessment

AI applications, particularly Large Language Models (LLMs), have transformative potential in assessment processes. From real-time feedback and scenario generation to automated scoring and simulation design, AI offers a range of possible uses:

  • Structured scenario creation aligned to specific competencies
  • Reduced cognitive load for assessors via automation of routine tasks
  • Dynamic assessments that adjust in real time based on candidate input
  • Greater interactivity and realism, enhancing predictive validity

AI is not merely a tool but an emergent competency: talent professionals must develop AI literacy to wield these tools ethically and effectively.

Managing cognitive load with AI

Drawing from Cognitive Load Theory (CLT), AI can support assessment projects by:

  • Reducing extraneous load (e.g. by helping with repetitive tasks and manual scoring)
  • Enhancing germane load (e.g. focusing attention on key decision-making)
  • Managing intrinsic load (e.g. simplifying assessment structure without diluting complexity)

These insights position AI as a cognitive partner in the assessment process, freeing up mental bandwidth for both candidates and assessors.

Ethical frameworks in using AI in assessments

The core of the session examined the use of AI through the lens of two prominent ethical theories:

  • Deontological Ethics: Emphasizes duty, fairness, and universal principles. In the context of AI, reducing candidates to algorithmic profiles violates the imperative to treat individuals as ends in themselves.
  • Utilitarianism: Weighs morality based on outcomes and collective benefit. Here, AI use is justified if it enhances overall system efficiency, but it raises red flags when it produces candidate stress or systemic bias.

Emerging questions

To ground the theory in lived practice, the session included a series of expert views from different perspectives:

  • Industry leaders: Are beginning to frame generative AI usage itself as a workplace skill, but seek clearer boundaries on its ethical use.
  • Assessment designers: Face new challenges in ensuring simulation exercises remain valid in the age of AI-augmented responses.
  • Ethics advocates: Call for collective, proactive guidance to manage rapid advancements before misuse of AI technologies erodes trust.
  • AI Technologists: Acknowledge that AI can help individuals perform better but warn this may blur the line between competence and the appearance of competence.