Measuring Threat Assessment Skills – Evaluating Knowledge Gain

Training has high opportunity costs. Not only does training cost time and money, but it takes away from actually doing the job. In the case of law enforcement and security organizations, it means time away from protecting property and saving lives. Therefore, it is essential that training adds value. If not, what is the point?

This is why we evaluate: to see if training matters. You might recall from our previous blog posts that the first level of training evaluation is reaction – what trainees thought of their training.

But reaction is subjective; it is based on the feelings or opinions of the trainee. A trainee might have thought their training was awesome – but didn’t learn anything. A trainee who disliked a course may have learned a lot.

The second level of training evaluation is learning – assessing measurable skills that a trainee takes away from a course.

We all know that some trainings are required and “butts in seats” may be the only thing that matters. In other cases, you may want to make sure your officers actually learned something.

About Evaluation and Learning

According to Kirkpatrick (1998), there are four levels of evaluation. Level 1 is “Reaction,” or what trainees say about their training experience. Level 2 is “Learning,” which involves measuring whether there was an improvement in knowledge, skill, or performance due to training. An evaluation of learning often involves a pre- and post-test, in which trainees’ knowledge is measured before and after training. Learning can be assessed during, immediately after, or long after training is complete.
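To make the arithmetic concrete, here is a minimal sketch in Python of a pre/post comparison. The scores are invented for illustration, not data from any real course: each trainee’s knowledge gain is simply their post-test score minus their pre-test score.

```python
# Hypothetical pre- and post-test scores (percent correct) for five trainees.
pre_scores = [55, 60, 48, 70, 62]
post_scores = [78, 82, 75, 85, 80]

# Knowledge gain per trainee: post-test score minus pre-test score.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

# The cohort's average gain is a simple Level 2 (Learning) indicator.
avg_gain = sum(gains) / len(gains)
print(f"Per-trainee gains: {gains}")
print(f"Average knowledge gain: {avg_gain:.1f} points")
```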

Learning objectives should drive what knowledge is measured; these objectives should have been identified as part of your instructional design process. If the training objectives cannot easily be articulated, figure that out first.

You can measure knowledge gain in several ways. The most common is a test, which can come in the form of multiple-choice questions or a task the trainee must complete. A test can assess knowledge gain (the quantity of information learned), retention (what the trainee remembers over time), and skill demonstration (successfully performing the trained-upon task) (Alliger et al., 2008).
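For instance, scoring a multiple-choice test reduces to comparing each trainee’s responses against an answer key. The following sketch uses an invented four-question quiz purely for illustration:

```python
# Hypothetical answer key and one trainee's responses for a four-question quiz.
answer_key = {"q1": "b", "q2": "d", "q3": "a", "q4": "c"}
responses = {"q1": "b", "q2": "a", "q3": "a", "q4": "c"}

# Score the test: the fraction of questions answered correctly.
correct = sum(1 for q, ans in answer_key.items() if responses.get(q) == ans)
print(f"Score: {correct}/{len(answer_key)} ({correct / len(answer_key):.0%})")
```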

Self-Reported Learning and Threat Assessment Training

There are subjective and objective measures of learning. In this post, we will focus on a subjective approach and present an objective measure in a future post.

Let’s take a moment and look at Figure 1, which includes the self-reported questions from the end-of-program assessment of the Identifying Threats training program.

Figure 1: Knowledge Gain in Threat Assessment Skills

In this assessment, we asked our peace operations trainees to self-report their level of knowledge in certain skill areas before and after the training – in this case, we asked them about:

  • Analyzing an environment;

  • Identifying behaviors associated with carrying contraband/weapons; and

  • Identifying a threat in their presence.

These questions are on a 1-5 scale (1 being the lowest and 5 being the highest). Our goal was to capture how much they thought they knew BEFORE the training and how much they thought they knew AFTER the training.
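As a sketch of how such responses might be summarized (the ratings below are invented for illustration, not our actual survey data), you can average the 1-5 ratings per skill before and after:

```python
# Invented 1-5 self-ratings for one delivery: (before, after) per trainee.
ratings = {
    "Analyzing an environment": [(2, 4), (1, 4), (3, 5), (2, 4)],
    "Identifying contraband/weapons carrying": [(1, 4), (2, 5), (2, 4), (1, 3)],
    "Identifying a threat in their presence": [(2, 4), (2, 4), (3, 5), (1, 4)],
}

# Average the before and after ratings for each skill and report the gain.
for skill, pairs in ratings.items():
    before = sum(b for b, _ in pairs) / len(pairs)
    after = sum(a for _, a in pairs) / len(pairs)
    print(f"{skill}: before {before:.1f}, after {after:.1f}, gain {after - before:+.1f}")
```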

In this course, we initially had a pre-training assessment, but because we were teaching new concepts, the data we collected wasn’t informative. We did, however, collect a lot of other data. If you would like to learn more about the Identifying Threat Program, check out our article in the Small Wars Journal.

Figure 2 shows self-reported knowledge gain from our peace operations trainees across five deliveries. They report substantial increases in these three skills – great stuff, right?

Yes and no. It is excellent that trainees report they learned what we were hoping they would. It is a positive indicator, but not a great one, because it relies on the student to self-report AFTER the training how much they thought they knew BEFORE the training.

It is not a measure of actual knowledge gain. Still, self-reported knowledge gain can tell you if something is wrong. Training should have objectives, and if, on average, trainees self-report that they didn’t learn what they should have learned, that is a problem that needs to be solved.

Validity can be improved by asking these questions before AND after the training, but it is still not the best measure. It would be even better to actually measure what they learned. This is possible to do, but it is a bit more complicated, and you can learn more about it in our blog post about assessing active threat assessment skills.
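When you do have true before-and-after scores, a simple statistical check can indicate whether the improvement is likely real rather than noise. Here is a minimal sketch assuming the scipy library is available; the scores are invented for illustration:

```python
from scipy import stats

# Invented objective test scores from the same trainees before and after training.
pre = [52, 61, 48, 70, 65, 58, 55]
post = [74, 80, 71, 86, 79, 75, 72]

# A paired t-test asks whether the within-trainee improvement is
# larger than chance variation alone would explain.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value suggests the measured gain is unlikely to be due to chance, though it says nothing about whether the gain is practically meaningful for the job.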

Putting it All Together

Your training dollars and the time and resources of your training staff are precious. When you buy training, you should not hesitate to ask for a report on how your officers perceived their training and what they learned. For in-house training, your training staff should be able to answer similar questions – but they might need help to develop the tests, analyze what is collected, and revise courses based on these data.

Send us a message if you would like to discuss evaluating learning in your training programs.

References

  • Alliger, G. M., & Janak, E. A. (1989). Kirkpatrick’s levels of training criteria: Thirty years later. Personnel Psychology, 42, 331–342.

  • Kirkpatrick, D. (1998). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler Publishers.
