Evaluating training - Bryan Hopkins

Evaluating training

Evaluating training is often seen as a problematic activity. How do you link what happens in a training event with changes in the workplace and effects on overall organisational performance?

One of the problems lies with the inadequacy of existing tools. The industry standard for training evaluation, the Kirkpatrick four-level framework, is really just that: a framework, and it does not provide an easy-to-use set of tools. There are also some practical issues with the assumptions underlying the framework: does enjoying training really lead to learning, and does learning lead to a change in performance? A major problem with using the Kirkpatrick framework for evaluation is that it asks you to work backwards: we look at a change in performance and then try to see whether training caused it. In reality this is impossible, as we can never separate out all of the other factors which may have contributed to the change.

A systems thinking approach to training evaluation provides a completely different way to evaluate training.
We start by examining (and establishing, if necessary) the theory of change underlying the delivery of the training. Using an adapted form of Soft Systems Methodology, we query this theory of change and compare it with reality. By asking questions about each of the lettered stages in the theory of change, and about the connections between them (does A lead to B, and so on?), we can draw conclusions about the contribution that training makes to changes in behaviour and organisational impact. By doing this we can draw out the various practical issues which can affect the effectiveness of the training, and also identify other factors which may be contributing to changes in performance.

From this we can then make some observations as to what contribution training may be making to a change in performance and impact on the organisation, as well as being able to identify factors which are inhibiting this.

The big advantage of this approach is that we are always working forwards in an informed way, rather than looking backwards and having to make guesses about causality. We can then make observations about the following questions:
  • Is what the training covers actually being learnt?
  • Is the content of the training the right content?
  • Is the overall concept behind the training the right concept?
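The link-by-link questioning described above (does A lead to B, and so on?) can be sketched in code. This is purely an illustration, not part of the methodology itself: the stage names, evidence scores and threshold below are all invented for the example.

```python
# Illustrative sketch only: a theory of change as a chain of assumed
# links, each carrying a (hypothetical) evidence score, so we can work
# forwards and flag the connections with weak supporting evidence.
from dataclasses import dataclass

@dataclass
class Link:
    source: str       # stage the link starts from, e.g. "A: training delivered"
    target: str       # stage it is assumed to lead to
    evidence: float   # 0.0 (no support) .. 1.0 (strong support)

def weak_links(chain, threshold=0.5):
    """Return the assumed connections with little supporting evidence."""
    return [link for link in chain if link.evidence < threshold]

# Hypothetical chain A -> B -> C -> D with invented evidence scores
chain = [
    Link("A: training delivered", "B: content learnt", 0.8),
    Link("B: content learnt", "C: behaviour changes", 0.4),
    Link("C: behaviour changes", "D: organisational impact", 0.3),
]

for link in weak_links(chain):
    print(f"Weak evidence that '{link.source}' leads to '{link.target}'")
```

Working forwards through the chain like this makes explicit which connections the evaluation still needs evidence for, rather than assuming the whole chain holds.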

Systemic approaches to training evaluation have considerable advantages over traditional methods. They do not make any assumptions about causality (such as a positive reaction leading to learning, which leads to improved performance, which leads to organisational impact). They recognise that workplace performance is affected by a multitude of factors, and that changes in performance have unintended consequences, which can also be evaluated. These unintended consequences can be much more significant than the changes in knowledge, skills or attitudes which come as a result of the training. Systemic approaches also allow for a more dynamic perspective on change: there will have been a time delay between the delivery of the training and its evaluation, and the training may no longer be relevant to the current operational context.

You may be interested in finding out more about some specific systems thinking tools and how they can be used in training evaluation:
  • Critical Systems Heuristics, to help identify what the scope of the evaluation should be.
  • Soft Systems Methodology, to help us reflect on the differences between what we would like the training to be achieving and what it is actually achieving.

You can find out more about systemic approaches to training evaluation in my book, "Learning and Performance: A Systemic Model for Analysing Needs and Evaluating Training".

Copyright 2015. All rights reserved.