Training Tips for Ships #15: Using Student Exam Results to Measure OUR Performance
Exams are a staple of training. We know what they are, we know what they are for, and we know how to write and deliver them. And, of course, we all use them to test trainee knowledge. But there is another benefit of exams that the vast majority of maritime trainers are overlooking. And since we give and grade exams all the time, this benefit is already there for the taking!
The benefit I am referring to is outstanding, actionable feedback on how effective our training is, organization-wide. Exams are normally used to measure an individual trainee’s knowledge. But the same data we generate for that purpose can serve as a powerful indicator of overall training program success. It is also a leading indicator of organizational performance and safety - if we just examine the data a little differently. So - how do we do this?
There are many ways that our exam data can provide valuable insights. In general, all of these analyses are greatly facilitated by having learners take their exams in an LMS, as some LMSs produce the insights automatically for you. If your learners are doing their exams on paper, the same analyses can still be done, but only as a largely manual task. Let’s look at what is possible with the help of technology.
One of the most useful analyses we can run on exam results is to group all of the questions by the competency they cover, and then look at average performance for each group. As it stands now, most organizations only monitor the average performance on an exam as a whole. Thus, if (for example) the new deckhand exam is being passed with an average score of 80%, this might give us confidence that the concepts are well learned. This is false security. Within that exam there may be a set of questions, covering a particular competency, that is routinely failed.
As an example, we could identify all of the firefighting-related questions in the exam (or across all exams) and look at how well they are answered as a group. If we discover that these questions are routinely answered incorrectly, even within an exam that is well performed overall, we have identified a serious risk that we can now work to eliminate. This risk would be completely hidden in overall exam averages, but becomes immediately apparent once we analyze question performance by competency, as the sketch below illustrates.
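For those with access to raw exam data outside an LMS, here is a minimal sketch of the by-competency analysis in Python with pandas. The column names and the data are hypothetical, invented purely for illustration - adapt them to whatever export your LMS or exam tool actually produces.

```python
import pandas as pd

# Hypothetical per-question results export: one row per student per question.
# Column names are assumptions for illustration, not a specific LMS format.
results = pd.DataFrame({
    "student_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "question_id": ["Q1", "Q2", "Q3"] * 3,
    "competency":  ["navigation", "firefighting", "navigation"] * 3,
    "correct":     [1, 0, 1, 1, 0, 1, 1, 1, 1],
})

# The overall average looks reassuring on its own.
print(f"Overall score: {results['correct'].mean():.0%}")   # 78%

# Grouping by competency exposes the weak area hiding inside that average.
by_competency = results.groupby("competency")["correct"].mean().sort_values()
print(by_competency)   # firefighting ~33%, navigation 100%
```

In this toy data set the overall score looks healthy at 78%, yet the firefighting questions are answered correctly only a third of the time - exactly the kind of hidden risk the overall average conceals.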
A second useful analysis we can perform on multiple-choice questions is to generate statistics which indicate, for each question, how often each of the potential answer selections is chosen. For example, on a particular question this might tell us that the correct answer is chosen 60% of the time, that one incorrect answer is chosen 35% of the time, and that the remaining two incorrect answers are rarely chosen. This is useful for many reasons, but the most critical use of this data is to identify common misconceptions. In the example above, one of the incorrect answers is mistakenly believed to be correct by roughly one-third of the students who have answered that question. If that common misconception creates a potential safety issue, having access to per-question statistics of this sort reveals the safety risk and allows the organization to quickly correct the misconception before the risk manifests itself as an accident.
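Again as a rough sketch only, this per-question answer-frequency report is straightforward to produce once responses exist as data. The response log and answer key below are invented for illustration and mirror the 60%/35% example above.

```python
import pandas as pd

# Hypothetical response log: one row per student answer to question "Q7".
# Choice labels and the answer key are assumptions for illustration.
responses = pd.DataFrame({
    "question_id": ["Q7"] * 20,
    "choice": ["A"] * 12 + ["B"] * 7 + ["C"],
})
answer_key = {"Q7": "A"}

# Share of responses going to each choice, per question.
freq = responses.groupby("question_id")["choice"].value_counts(normalize=True)
print(freq)
# Q7  A    0.60   <- the correct answer
#     B    0.35   <- a distractor chosen far too often: a likely misconception
#     C    0.05
```

A distractor attracting a large share of responses, as "B" does here, is the red flag: it points to a specific misconception that training can then target directly.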
So - what risks are hiding in the data we already have? As indicated, if we deliver exams within an LMS, some of this information may be readily available. If not, one might consider some form of online exam delivery to begin collecting and utilizing this important data.
These two examples only scratch the surface of the insights that can be derived from the exam data we have. We will continue with other useful exam-derived insights in the next Training Tips for Ships. Until then, stay healthy and sail safe!