In a recent article, we posed the question of what assessment reporting would look like if we applied best practice data visualisation principles. At TTS, we’re investing in taking assessment reporting into the future by providing online, interactive reporting of assessment results.
In today’s article, we take a closer look at four specific design considerations that assessment providers ought to take seriously when designing better assessment reporting:
1. Provide at-a-glance meaning
The best kind of data displays are concise and simple. They provide users with immediate answers to the most important questions they would want to ask of the dataset. When reporting assessment results, for instance, it is important to consider what end-users would want to see at first glance.
First-glance information consists of the key insights you want users to walk away with and, as the name suggests, reporting design should make that information easy to absorb at a single glance.
2. Avoid pie charts
With very few exceptions, pie charts are the least meaningful way of representing results.
Especially when representing more than two categories of data, pie charts tend to obscure, rather than reveal, information. Often the slices are too small, and the colours too numerous, for an average user to make productive sense of the data.
Bar and line graphs are far better alternatives and are easier to make sense of, even with relatively complex data sets.
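As a simple illustration of the point, here is a sketch in Python's matplotlib of a horizontal bar chart of the kind described above. The competency names and scores are invented for the example:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for generating report images
import matplotlib.pyplot as plt

# Hypothetical competency scores for a single candidate (percentiles)
competencies = ["Reasoning", "Numeracy", "Verbal", "Attention"]
scores = [72, 58, 85, 64]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(competencies, scores, color="steelblue")
ax.set_xlim(0, 100)
ax.set_xlabel("Percentile score")
ax.set_title("Candidate competency profile")
fig.tight_layout()
fig.savefig("profile.png")
```

Unlike pie slices, bar lengths share a common baseline, so even small differences between categories remain easy to compare at a glance.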
3. Allow for data filtering
Data reporting that overwhelms users with stacks of raw data seldom accomplishes the goal of elegant, concise reporting. In assessments, we often deal with varied data drawn from many different sources.
Consequently, the temptation may be to provide end-users of assessment reporting with as much data as possible, as soon as possible. But that would be a mistake. Data filtering is a better option. A distinct advantage of online data reporting is that end-users can interact with data rather than simply being passive consumers of results. Allowing end-users to filter data (and applying sensible default filters up front to simplify the initial view) can overcome the problem of overwhelming them with too much data.
An added benefit is that users become more actively involved in understanding the data presented, by filtering results to answer the specific questions they may have.
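The filtering idea can be sketched in a few lines of pandas. The dataset, column names, and the default filter are all invented for illustration; in an interactive report, the filter condition would come from a user control rather than being hard-coded:

```python
import pandas as pd

# Hypothetical assessment results for illustration
results = pd.DataFrame({
    "candidate": ["A", "B", "C", "D"],
    "department": ["Sales", "Sales", "Finance", "Finance"],
    "score": [71, 64, 88, 59],
})

# A sensible default filter applied up front keeps the first view simple
default_view = results[results["department"] == "Sales"]

# End-users can then adjust the criteria interactively, e.g. a minimum score
high_scorers = results[results["score"] >= 70]
```

The point is that each user sees a small, relevant slice of the data first, and drills into the rest only when they choose to.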
4. Use colour sparingly
Although the use of colour can aid end-users' understanding of assessment results, a careful balance is important. Overuse of colour can confuse and misrepresent data.
For instance, traffic-light colour schemes (red, amber, green) may oversell the danger (or positive nature) of particular results. Presenting text on colourful backgrounds can make information hard to read, and thus easy to ignore.
Applying the right amount of colour to reporting is an art, but the general principle is that less is more. Avoid traffic-light colours unless the results are unambiguously good or bad, and remember to check the readability of all numbers and text in the report.
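One way to apply the less-is-more principle in practice is to keep everything neutral and reserve colour for the single result that genuinely warrants attention. The sketch below (again in matplotlib, with invented scores and an assumed threshold of 90 for an "unambiguously good" result) shows the idea:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

competencies = ["Reasoning", "Numeracy", "Verbal", "Attention"]
scores = [72, 58, 91, 64]

# Neutral grey throughout; colour reserved for the one unambiguous standout
colours = ["tab:green" if s >= 90 else "lightgrey" for s in scores]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(competencies, scores, color=colours)
ax.set_ylim(0, 100)
ax.set_ylabel("Percentile score")
fig.tight_layout()
fig.savefig("highlight.png")
```

Because only one bar is coloured, the reader's eye goes straight to it; if every bar were a different colour, nothing would stand out at all.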
At TTS we’ve worked with our clients in understanding their assessment needs as well as the best possible ways of representing assessment data. Over the years, we’ve been at the forefront of customised, contextually-relevant assessment reporting and recently we’ve embarked on a journey to take assessment reporting to the next level. By applying the principles mentioned above, we are enhancing the way assessment results are reported and consumed by end-users.
Our interactive talent match and merit list reporting is undergoing final user testing and we’re always happy to hear from clients and organisations who would be interested in being part of this initial pool of users. If you’re interested in joining us on this exciting path, why not drop us a line at firstname.lastname@example.org?