Using ChatGPT responsibly: Key principles and best practices for IO Psychologists

Artificial Intelligence (AI), with its machine learning capabilities, has become a vital tool across many different professional sectors, not least of which is IO Psychology and talent management.

AI-driven language models like OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer) promise, as we’ve mentioned in previous articles, to be powerful assets for professionals in their work.

For instance, ChatGPT could be used to enhance knowledge management by providing summaries of a variety of sources and articles for easy consumption or improve communication by upgrading a team’s writing prowess. In addition, language models like ChatGPT are especially adept at helping psychometric specialists with report writing and interpretive work.

And as the uses of technologies like ChatGPT increase, professionals have also begun to ask important questions about their responsible use. In this article, we cover key principles that can guide talent professionals in the responsible use of ChatGPT, using its report-writing capabilities as a practical example of how this may be achieved.

Principle #1: Acknowledge limitations 

In report writing, for instance, ChatGPT can be tasked with producing draft reports based on provided data and guidelines. In assessment-based development reports, for example, users can feed data points and specific observations into the model and ask it to create a write-up. This can include sections like an executive summary, an analysis of results, and recommended development interventions.
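
As a concrete illustration, the prompt-assembly step described above can be sketched in Python. The field names, guideline text, and section headings below are purely hypothetical examples, not part of any actual assessment product:

```python
def build_report_prompt(candidate: dict, guidelines: str) -> str:
    """Assemble a structured report-writing prompt from assessment data.

    `candidate` holds hypothetical assessment fields; the sections listed
    mirror those described above (executive summary, analysis of results,
    recommended development interventions).
    """
    data_lines = "\n".join(f"- {k}: {v}" for k, v in candidate.items())
    return (
        "You are drafting a development report based on assessment data.\n"
        f"Follow these guidelines: {guidelines}\n"
        "Assessment data:\n"
        f"{data_lines}\n"
        "Produce three sections: Executive Summary, Analysis of Results, "
        "Recommended Development Interventions.\n"
        "Flag any inference that goes beyond the data provided."
    )

# Example usage with made-up scores
prompt = build_report_prompt(
    {"Conscientiousness": "72nd percentile", "Cognitive ability": "stanine 6"},
    "Use neutral, development-focused language.",
)
```

Keeping the data, guidelines, and required sections explicit in the prompt makes the later human review step easier, since the reviewer can see exactly what the model was given.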

But while ChatGPT is a powerful tool, it is not infallible and does not possess the nuanced understanding and professional expertise of a trained I/O psychologist. Its response to tasks like report writing is an amalgam of different sources, both professional and lay, and it does not necessarily distinguish between these.

As a result, outputs should always be reviewed and refined by a human professional, who must take ultimate responsibility for the final output. Acknowledging the inherent limitations of ChatGPT means treating it as a supportive tool, not a standalone solution or a replacement for professional expertise.

Returning to our report writing example, when ChatGPT is used to generate summaries and recommendations based on assessment results, the generated content must be viewed as a preliminary write-up only. Any inferences drawn from the data (e.g. development recommendations) must be checked thoroughly by a professional and evaluated against IO Psychology best practice and scientifically rigorous standards.

In addition, ChatGPT has its own inherent tone and stylistic ways of producing written content. Psychologists using it to produce initial write-ups for assessment reports should carefully review, contextualize, and polish the generated text to ensure that it accurately represents their own or their organization’s nuances and professional style.

Principle #2: Vigilance against bias

Because AI models like ChatGPT are trained on public data and a variety of sources, they can unintentionally replicate societal biases that were present in their training data.

Therefore, users must be vigilant for potential bias in ChatGPT’s outputs, particularly when using it for tasks such as producing content for reports or assessment interpretation. The text should be critically evaluated for fairness and neutrality, and actions should be taken to mitigate any biases identified.

In our report writing example, bias may be introduced by a variety of factors, such as the candidate’s gender, nationality, or job role. Although models like ChatGPT are continually improving at reducing bias, users will still need to review proposed content carefully for bias.
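
One practical way to probe for this kind of bias is a counterfactual check: generate two drafts from inputs that differ only in a demographic detail, then compare them. The helper below is an illustrative sketch of preparing such paired inputs (the function name and token mapping are our own, not part of any tool):

```python
import re

def counterfactual_pair(prompt: str, swaps: dict) -> str:
    """Return a copy of `prompt` with demographic tokens swapped.

    Sending both versions to the model and comparing the resulting drafts
    can surface differences in tone or recommendations that signal
    potential bias. `swaps` is an illustrative mapping of whole words.
    """
    pattern = re.compile(r"\b(?:" + "|".join(re.escape(k) for k in swaps) + r")\b")
    return pattern.sub(lambda m: swaps[m.group(0)], prompt)

# Swap gendered pronouns to create the counterfactual version
variant = counterfactual_pair(
    "She is decisive. Her scores suggest leadership potential.",
    {"She": "He", "Her": "His"},
)
```

Any systematic difference between the two resulting drafts (e.g. softer language or different development recommendations for one version) is a cue for the reviewing professional to intervene.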

This risk re-emphasizes principle #1 above: limitations are better acknowledged than ignored.

Principle #3: Respect for intellectual property

Respect for intellectual property is a cornerstone of ethical practice, but the use of language models like ChatGPT complicates this substantially. For one, because the training sources used in the original setup of AI language models are so diverse and extensive, it is not obvious whether source material is reproduced directly by ChatGPT or processed in a more original or creative fashion.

To counteract this risk, users may check content produced by ChatGPT against well-known plagiarism detectors found online, or, if these are unavailable, consult colleagues to cross-check for plagiarism.
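
As a rough first pass before turning to a dedicated detector, overlap between generated text and a known source can be estimated locally. The sketch below is our own illustrative helper, not a substitute for a proper plagiarism check; it simply counts shared word n-grams:

```python
def ngram_overlap(generated: str, source: str, n: int = 5) -> float:
    """Fraction of word n-grams in `generated` that also occur in `source`.

    High overlap suggests the passage should be referenced or rewritten;
    low overlap does NOT prove originality.
    """
    def grams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    gen = grams(generated)
    return len(gen & grams(source)) / len(gen) if gen else 0.0
```

Any threshold for flagging text (say, more than a fifth of 5-grams shared) would be a policy choice for the team rather than an established standard.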

Perhaps a more problematic issue with regard to intellectual property rights is the input of proprietary information into ChatGPT. For instance, if data from personality measures are input, copyright infringement may inadvertently result if the language model alters or re-uses this information in other contexts.

Professionals can mitigate this risk by carefully selecting the data that are input into the language model and by avoiding the use of “core” proprietary information when partnering with ChatGPT.

Finally, when outputs do resemble actual sources or the proprietary work of others, they should be referenced, and due credit given.

Principle #4: Transparency in presentation

Perhaps the most important principle of responsible ChatGPT use is to maintain transparency throughout. In general, using ChatGPT should be treated the same as consulting an expert colleague or academic source: just as in those cases, users ought to acknowledge the role of ChatGPT in their reports or assessments.

By clearly communicating to clients and other stakeholders when AI tools like ChatGPT are used to generate reports or assessments, talent professionals can avoid risks related to a lack of transparency.

To acknowledge the use of ChatGPT, users may include a citation such as:

OpenAI. (2023). ChatGPT-4 [Software]. Available at

Alternatively, they may choose to add a statement in their work like “Portions of this work were generated using ChatGPT-4, an AI language model developed by OpenAI.”

Returning to our reporting example, it may not be necessary to cite ChatGPT directly in the report itself, but it may be advisable to inform consumers of assessment reports that ChatGPT or other AI language models were employed in preparing elements or sections of the final product.

Final thoughts

The use of ChatGPT can substantially reduce the time taken to compile initial drafts of reports, scour through sources for insights, and provide possible interpretations of assessment results. In turn, it can free up professionals to engage in more complex and potentially rewarding tasks.

However, it’s crucial to note that these outputs should be carefully reviewed, re-contextualized, and correctly acknowledged by the professional user to avoid risks associated with inappropriate use.

AI language models like ChatGPT can be a tremendous resource for IO Psychologists and talent management professionals. They can assist in enhancing professional efficiency and productivity and may, in time, even serve as our virtual proxies in a host of professional tasks.

However, it is crucial to use these capabilities responsibly and always be alert to emerging risks that may be relevant to professional practice.

In addition, it is important for IO Psychologists and assessment professionals who use ChatGPT to always comply with established professional guidelines and codes of conduct. This includes respect for the dignity and rights of all individuals, the pursuit of fairness and justice, and the commitment to professional competence.

As with all tools and techniques used in the profession, the use of ChatGPT should be guided by the overarching aim of promoting the well-being and performance of individuals and organizations.

For more on IO Psychology best practices be sure to read our other news articles or contact us at