Enhancing IO Psychology practice: TTS at the 2025 SIOPSA Conference (Part 1)

At the recently held annual conference of the Society for Industrial and Organizational Psychology of South Africa (SIOPSA), TTS again demonstrated its thought leadership through diverse presentations spanning assessment constructs, the ethical use of LLMs and AI, assessment utility, and the IO Psychology internship landscape.

Each session offered conference delegates an opportunity to engage with cutting-edge applications of IO Psychology in practice, all grounded in robust science.

In this review article, we provide an overview of the TTS-led presentations at the conference.

1. Integrating personality and ability: Advancing talent measurement for the future world of work

Our TTS research team, Dr. Sebastian Clifton and Dr. Angela Marsburg, presented a compelling session on the integration of personality and cognitive ability in talent measurement. While these two domains have often been treated separately, contemporary research suggests that their combination offers a far richer and more predictive understanding of workplace performance.

Background and rationale

The presenters began by revisiting the well-established contributions of each domain.

Cognitive ability remains one of the strongest predictors of job performance, learning potential, and readiness for complex decision-making.

Personality, on the other hand, helps anticipate work style, motivation, and interpersonal approach, offering critical insights into values alignment, team dynamics, and derailers.

Historically, however, these predictors have been siloed, a separation increasingly at odds with current evidence.

From fragmentation to constellations

The TTS team highlighted landmark findings from Stanek & Ones’ (2023) large-scale meta-analysis, which synthesised data from over a million individuals across 50 countries, including South Africa.

Their Cybernetic Trait Complexes Theory (CTCT) suggests that personality and intelligence form constellations, integrated patterns that help individuals maintain stability while also enabling adaptability and change.
Examples include how high neuroticism can impair reasoning and memory or how conscientiousness supports knowledge acquisition.

Such constellations illuminate why treating personality and ability in isolation risks missing the full scope of human functioning.

Applications in practice

The session moved from theory to practice, showcasing current TTS assessment approaches that put constellations such as those proposed by Stanek and Ones into practice within the local employee assessment space.

Good examples from TTS’s practice include constructs such as Digital Readiness and Growth Potential, which are measured by combining personality and cognitive results to inform decision-makers about complex, business-relevant capabilities.

As an example, Digital Readiness is a construct that strongly predicts success in digitally evolving workplaces. It blends personality and cognitive ability to measure how future-ready an individual is in digitally dynamic contexts. At its core, Digital Readiness is a measure of a candidate’s agility (flexibly adapting to change, combining cognitive working memory with personality-driven adaptability), learnability (a willingness to pursue growth through new learning, underpinned by both curiosity and reasoning ability), and curiosity (a personality-driven openness to novelty and idea formation).

Validation studies showed that individuals scoring high on Digital Readiness were 27% more likely to be rated high in job performance, 34% more likely in self-development, 48% more likely in resourcefulness and creativity, and 53% more likely in implementing continuous improvement.

In another example of combining personality and cognitive ability, TTS’s Talent Match Score explained far more variance in job performance outcomes (R = 0.36) than either domain assessed independently (R = 0.12).
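Because variance explained is the square of the correlation, the gap between R = 0.36 and R = 0.12 is larger than it first appears. A quick sketch of the arithmetic (the correlations are those reported in the session; the comparison logic is ours):

```python
# Variance in an outcome explained by a predictor is the squared
# correlation (R^2). Correlations below are those reported in the session.
r_independent = 0.12  # personality or cognitive ability assessed alone
r_combined = 0.36     # Talent Match Score combining both domains

var_independent = r_independent ** 2  # 0.0144, i.e. ~1.4% of variance
var_combined = r_combined ** 2        # 0.1296, i.e. ~13% of variance

print(f"The combined score explains "
      f"{var_combined / var_independent:.0f}x more variance")  # → 9x
```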

Implications for IO Psychologists

  • Adopt integrative assessment: Combining constructs offers stronger predictive validity and fairer outcomes.
  • Shift from traits to constellations: Create job-based constellations that better reflect real-world contexts (e.g., structured vs. creative environments).
  • Support diversity goals: By balancing cognitive ability tests with personality indicators, practitioners can reduce adverse impact and enhance fairness in selection.

The researchers closed with a clear message: The future of talent measurement lies in moving beyond fragmented constructs toward integrated, constellation-based approaches. For IO Psychologists this means embracing more nuanced, evidence-based frameworks that capture the full range of human functioning within dynamic work environments.

2. The ethical use of Artificial Intelligence and Large Language Models: Implications for the IO Psychologist

In another thought-provoking TTS session, Drs Clifton and Marsburg tackled one of the most pressing issues facing the profession today: the ethical integration of artificial intelligence (AI) and large language models (LLMs) into organisational practice.

With AI advancing at a rapid pace, the presenters highlighted both the opportunities and risks for IO Psychologists, underscoring the profession’s unique role as ethical steward.

Background

AI can be described as systems capable of performing tasks traditionally requiring human intelligence, while generative AI and large language models are known for their ability to create new content and process language at scale. From Alan Turing’s early visions to the rise of modern generative models, the presenters traced a concise history of AI’s evolution, situating current debates in a wider context of worldwide technological shifts.

The bright and dark sides of AI

AI is already being productively applied in areas such as success profiling in assessment centres, where it can help identify competencies and predict potential. However, risks are significant: algorithmic bias, “hallucinations” (fabricated outputs), lack of transparency, and the danger of over-reliance on AI-generated content are all salient “dark sides” to be considered.

Examples include recruitment tools that inadvertently discriminated against women and legal cases where AI-generated outputs contained fabricated case law.

Therefore, AI is a powerful tool, but only when used responsibly and with human oversight.

Research and practical insights

A key highlight came from TTS’s own research, reported in the session, comparing AI-generated success profiles with those crafted by subject matter experts (SMEs).

While AI showed great promise in generating useful success profiles that overlapped with those of human experts, it also often lacked nuance and contextual sensitivity, reinforcing that IO Psychologists remain essential as critical interpreters and validators of such outputs. AI prompting emerged as a skill in its own right, and “prompt engineering” may therefore soon belong on the well-rounded IO Psychologist’s CV.

Legislation, policy, and the South African context

The session also examined the emerging patchwork of global AI regulations. While Europe and parts of the Global North have taken strong steps toward formal frameworks, South Africa is still in its early stages. The National AI Policy Framework (2024) was cited as a foundational move, though gaps remain in enforcement, sector-specific standards, and collective governance mechanisms. This leaves IO Psychologists to navigate a challenging regulatory environment with limited formal guidance, making professional standards and ethical codes ever more vital.

Ethical considerations for IO Psychologists

The presenters offered a structured set of ethical guidelines particularly relevant to using AI and LLMs in professional practice:

  • Bias and fairness: AI must not reinforce systemic discrimination; fairness audits are essential.
  • Validity and reliability: Without evidence of validity, AI may simply automate noise.
  • Data privacy: Compliance with POPIA, GDPR, and other laws is non-negotiable.
  • Transparency: Organizations must disclose when AI is being used and how it shapes decisions.
  • Human oversight: Final decisions must rest with people, not algorithms.
  • Explainability: If candidates or stakeholders cannot understand how a decision was made, trust and compliance are undermined.

Implications for practice

The overarching message was that IO psychologists are not passive consumers of AI technologies, but active custodians responsible for their ethical application. This includes:

  • Establishing internal governance structures for ongoing ethical review.
  • Embedding human-in-the-loop systems that balance automation with professional judgment.
  • Upskilling in AI literacy, algorithm auditing, and bias detection.
  • Advocating for fairness and validity in national policy and organisational contexts.

3. Unlocking high performance in a global fintech contact centre: A data-driven approach to selection strategies

In one of the most practically focused sessions of the conference, Dr. Angela Marsburg, Jeshika Bassett, JP van Zittert, Dr. Sebastian Clifton, and Wesley Gallant shared their case study of how data-driven selection strategies transformed performance and retention at a major global fintech’s multilingual contact centre.

Context and challenges

The contact centre faced a set of critical challenges: a staff attrition rate of 31%, performance ratings averaging 2.86 out of 5, and 12% of staff placed on performance improvement plans.

Compounding these issues were high time-to-hire rates (averaging 88 days), structural misalignment that drove excess hiring, and underutilisation of employee well-being resources.

To sustain performance and employee health, the organisation needed TTS’s help to create a more strategic, evidence-based approach to talent acquisition and management.

Method and research outcomes

The project began with the aim of defining what good looks like for key customer service roles.

By analysing employee assessment results from Aon’s ADEPT-15, TTS’s Customer Centrism SJT, and cognitive ability assessments, the team uncovered personality, situational judgement, and ability factors associated with stronger performance.

Employees who were hard-working, reliable, task-focused, organised, adaptable, and open to challenges consistently achieved higher manager-led performance ratings.

In addition, “directive” traits (leadership-focused, authoritative, and controlling) were also positively associated with performance in some contexts, challenging prevailing assumptions about the types of traits best suited to contact centre work.

Incorporating the SJT results into the existing Talent Match score improved predictive power for performance (from an R-squared of 0.02 to 0.13).
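One common way to test this kind of incremental validity is a hierarchical regression: fit performance on the existing score, then on the score plus the SJT, and compare R-squared. A minimal sketch with synthetic data (all numbers below are illustrative, not the study’s):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 500
talent_match = rng.normal(size=n)  # hypothetical existing score
sjt = rng.normal(size=n)           # hypothetical SJT score

# Synthetic performance, mostly driven by the SJT to echo the reported pattern
performance = 0.15 * talent_match + 0.40 * sjt + rng.normal(size=n)

r2_base = r_squared(talent_match.reshape(-1, 1), performance)
r2_full = r_squared(np.column_stack([talent_match, sjt]), performance)
print(f"baseline R^2 = {r2_base:.2f}, with SJT added R^2 = {r2_full:.2f}")
```

The gain from `r2_base` to `r2_full` is the SJT’s incremental contribution, mirroring the 0.02 to 0.13 improvement reported in the session.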

Assessment strategy and recommendations

The researchers translated these findings into three clear recommendations:

  1. Integrate a Situational Judgement Test (SJT) into the Talent Match assessment statistic to measure applied judgment in role-relevant contexts.
  2. Include “directive” traits where appropriate, recognising their value in driving performance.
  3. Raise the Talent Match cut-score to 5 or 5.5, as candidates meeting this threshold demonstrated significantly higher performance outcomes.

These refinements, grounded in data, provided a more nuanced and accurate profile of what predicts success in a demanding customer service environment.

Implementation and impact

The results were striking. Within the first year of implementing this new assessment strategy, the organization experienced improvements such as:

  • 100% reduction in first-year exits (no early leavers),
  • 48% reduction in overall turnover (down to 16%),
  • 83% reduction in employees on performance improvement plans (from 12% to 2%),
  • A measurable increase in performance ratings (from 2.86 to 3.11).

Beyond performance, there were notable gains in well-being and efficiency: industrial relations cases stabilised, Employee Assistance Programme (EAP) usage rose by 33%, and time-to-hire dropped by 65%, cutting the process to 30–45 days (versus the previous average of 88 days).

Employee and leader feedback, collected through surveys and Net Promoter Score data, confirmed that these changes were positively experienced across the workforce.

Conclusion and broader implications

The presenters concluded with two key insights for IO practitioners: first, that integrating behavioural data with cognitive and personality assessments provides sharper talent differentiation, and second, that buy-in and transparency are essential. Employees are more engaged when they understand the “why” behind assessments.

The case also highlighted practical challenges, such as navigating data protection requirements across countries and adapting methods in response to shifting operational systems.

For IO psychologists, the session underscored the profession’s ability to not only diagnose organizational challenges but to design innovative, evidence-based interventions that deliver measurable impact on business outcomes.

4. Understanding and using test utility analysis: Optimising the business impact of psychometric assessment

Fred Guest, Managing Director of TTS, unpacked the concept of test utility analysis and how it enables IO Psychologists to demonstrate the tangible business value of psychometric assessments.

With nearly three decades of global experience in talent assessment, the presentation combined rigorous science with practical examples, equipping practitioners with tools to translate assessment outcomes into the language of business impact.

Test Utility, ROI, and Value-Add: What’s the difference?

These three concepts are related but quite distinct:

  • Test Utility: A statistical measure of how much an assessment improves prediction accuracy in selection or development. It focuses on predictive validity and its direct impact on outcomes such as productivity.
  • Return on Investment (ROI): A financial calculation that weighs assessment costs against monetary benefits such as reduced turnover, improved performance, or reduced external recruitment.
  • Value-Add: Broader outcomes beyond hard numbers—such as fairness, diversity, candidate experience, and employer brand alignment.

These distinctions provided a framework for understanding how assessments contribute not only to prediction but also to organisational strategy and culture. Examples of each include:

  • Test Utility. In a bank, cognitive ability testing for entry-level analysts improves average performance ratings from 3.2 to 3.8 within a year: an estimated R1.5 million annual productivity gain purely from improved prediction accuracy.
  • ROI. At a tech firm, investing R500,000 per year in personality assessments for leadership development reduces external hiring by 15%, generating R2 million in annual savings, representing a 300% return on investment.
  • Value-Add. In a multinational bank, situational judgement tests (SJTs) in graduate recruitment improve fairness and candidate experience, increasing diversity hires by 20% and enhancing the organisation’s values-driven reputation.

Taken together, these examples highlighted how utility, ROI, and value-add perspectives complement one another in making the business case for assessment.
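The ROI figure in the tech-firm example follows the standard (benefit − cost) / cost calculation, which can be checked in a couple of lines (figures taken from the example above):

```python
# Standard ROI: (benefit - cost) / cost, using the tech-firm example's figures.
cost = 500_000       # annual spend on personality assessments
benefit = 2_000_000  # annual savings from reduced external hiring
roi = (benefit - cost) / cost
print(f"ROI = {roi:.0%}")  # → ROI = 300%
```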

The power of predictive validity

Central to the presentation was the reminder that assessments outperform human judgment.

Classic and contemporary research (e.g., Schmidt & Hunter; Kuncel et al.) shows that even “mindless consistency” (mechanically applying fixed, even random, predictor weights) outperforms expert judgment over time when predicting future job performance.

Importantly, mechanical or actuarial decision-making consistently outperforms expert judgment, improving hiring accuracy by over 50% and reducing hiring errors by 25%.

This message reinforces the scientific backbone of assessment utility: when validity is high and selection ratios are carefully managed, IO Psychologists can drastically increase the proportion of strong performers ultimately hired.

Practical tools

Practical methods for translating psychometric validity into financial terms, such as the Taylor-Russell tables and the Brogden-Cronbach-Gleser (BCG) formula, help IO practitioners estimate how many more successful employees an assessment yields and what that translates into in monetary terms.

For instance, implementing a cognitive ability test for financial analysts was shown to unlock R8.8 million in additional value over three years, against a modest investment of R500,000 annually. In leadership hiring, similar calculations demonstrated ROIs exceeding 4,000%.
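For readers who want the mechanics behind such figures, the BCG utility formula (ΔU = N × T × r × SDy × Zx − C) can be sketched as below. All input values are illustrative assumptions, not figures from the presentation:

```python
# A sketch of the Brogden-Cronbach-Gleser utility formula:
#   delta_U = N * T * r * SDy * Zx - C
# All input figures below are hypothetical, for illustration only.
def bcg_utility(n_hired, tenure_years, validity, sd_y, mean_z, total_cost):
    """Estimated monetary gain from using a selection assessment."""
    return n_hired * tenure_years * validity * sd_y * mean_z - total_cost

gain = bcg_utility(
    n_hired=20,         # candidates selected per year (assumed)
    tenure_years=3,     # expected tenure of those hired (assumed)
    validity=0.5,       # predictive validity of the assessment (assumed)
    sd_y=200_000,       # SD of job performance in rand terms (assumed)
    mean_z=0.8,         # average standardised score of those hired (assumed)
    total_cost=500_000, # total cost of the assessment programme (assumed)
)
print(f"Estimated utility gain: R{gain:,.0f}")  # → Estimated utility gain: R4,300,000
```

Varying the validity or the selection ratio in such a model makes the business stakes of each assessment decision concrete.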

Such calculations not only prove the value of assessments but also help talent professionals and IO Psychologists to communicate effectively with financial and operational stakeholders.

As noted in the presentation, when IO Psychologists can show how an assessment will return eight times its cost in improved performance, their function can stop being seen as a cost centre and start being seen as a strategic partner.

Towards utility-optimized hiring processes

The presentation concluded with a practical look at the hiring funnel.

Unfortunately, typical hiring funnels are frontloaded with low-validity methods such as CV screening, or place high-cost methods like interviewing too early in the process.

With a more utility-optimised funnel, high-validity, cost-effective methods like realistic job previews, ability tests, SJTs, and structured video interviews are prioritised earlier, with concomitant practical and business advantages.

The principle to follow is to use the most cost-effective, predictive tools first. In this way, the risks of making bad hiring decisions are greatly diminished.

Conclusion

This session was a powerful reminder that psychometric assessments are not just scientifically defensible but are economically indispensable.

By applying utility analysis, ROI calculations, and value-add arguments, IO Psychologists can make a compelling case for the strategic role of assessments.

In doing so, they not only improve hiring outcomes but also strengthen their position as business-critical partners in organizational talent decision-making.

Final thoughts

These sessions reflected TTS’s ongoing commitment to turning robust science into practical solutions. By integrating talent constructs, navigating ethical AI, partnering with clients for measurable outcomes, and quantifying assessment value, TTS continues to shape the field of IO Psychology, locally and globally.