I am pretty sure that almost all of the terms of reference I have seen for training evaluation projects have said that the training needs to be evaluated against the four levels of reaction, learning, behaviour and results; in other words, our old friend Kirkpatrick’s framework. The great man himself passed away several years ago, but I am sure he would have been delighted to have created a structure which, more than 60 years after he developed it for his Ph.D. thesis, remains the standard for training evaluation.
And yet… although it is the standard, there is an ongoing discussion within the training world about how difficult it is to evaluate training and that, in practice, the four-level framework does not really tell us how to evaluate training. Maybe that is asking too much of what is, in essence, a taxonomy, a classification of what we can evaluate. Kirkpatrick’s framework is like a trainspotter’s catalogue of engine numbers: essential if you want to know what you are looking at, but of little value in making sure the trains run on time.
Of course, a lot of the problem is that we treat Kirkpatrick’s framework as if it were a model of how training works rather than just a framework. The implied model is a linear theory of change: a positive reaction leads to learning, learning leads to changed behaviour, and changed behaviour leads to organisational results.
Granted, it does look very plausible: if we enjoy a training experience we will learn, and then apply the new knowledge and skills, our work performance will improve and our organisation will benefit. The problem, and this is borne out by extensive research, is that this is just not necessarily true. We learn from unpleasant experiences as well as pleasant ones, perhaps even more so. Just because we learn does not mean that we apply new knowledge and skills: our workplace environment may stop this from happening. Our performance may improve not because of what we have learnt but because we enjoyed some time away from the routine of everyday working life. And, of course, a myriad of factors other than improved knowledge, skills and attitudes may have improved organisational performance. The model is too simplistic, and ignores external factors which affect learning, application and organisational performance, all of which can be much more significant influences than training. Kirkpatrick’s own writings do acknowledge the existence of these external factors but do not address them in any comprehensive way.
So the training evaluator setting forth to satisfy the terms of reference armed only with Kirkpatrick’s framework is not actually very well equipped. Now, there are many guides to training evaluation available, but few approach training from a systems thinking perspective, even though this is a methodology which I have found in my own professional practice to be extremely powerful. But what do we actually mean by a systems thinking perspective?
We can look at three main principles underpinning a systems thinking approach: questioning boundaries, exploring different perspectives, and investigating relationships within and outside the system of interest. Let us explore each of these in more detail, defining the training intervention as ‘a system to improve organisational performance by increasing levels of knowledge and skill’.
First, we question the boundaries. A boundary can be many things, but in essence is something which includes and excludes. In a training system this covers many things. For example, in our theory of change above we have included the training activity as contributing to individual and organisational performance, but we have excluded consideration of other relevant factors, such as the climate for transferring learning and external factors constraining performance. The boundary can be a decision about who receives training and who does not; who makes the decisions about who is trained and what they learn, and who does not; what is included in the training materials and what is left out. These are all extremely important questions which should have been settled by a training needs analysis, but such analyses are often not conducted in any rigorous or systematic way, which effectively means that the boundaries embodied in the terms of reference can be quite arbitrary.
Secondly, what different perspectives are there about the training? In most cases organisational training is implemented in order to improve a complex situation. For example, improving the level of sales is not just simply a matter of improving sales skills: what can be sold depends on what customers want to buy, what competitors are offering, on local and national economic factors, and so on. Should we consider the perspective of people outside the organisation? Different people within the organisation may have very different ideas about what is ‘the real problem’: is it a lack of sales skills, is it not understanding what competitors are offering, is it not understanding how the marketplace is evolving, and so on. Adopting a systems thinking approach helps the evaluator to question whether all valid perspectives about the situation are being considered.
Thirdly, relationships. These may be relationships between individuals and groups within the organisation or between these individuals and groups and entities outside the organisation, within its environment. The relationship may be between the training and time: does the situation which stimulated the demand for training still exist in the same form, or has it changed to make things better or worse? How has the training changed the relationships between participants and non-participants? Have unexpected relationships developed which could have a positive or negative impact? Finally, does the training ignore relationships? Most work that people do is as part of a team, and yet often training is aimed at individuals, running the risk of ignoring the importance of team interactions.
Of course, an experienced training evaluator may be asking these questions already. But for someone starting out in training evaluation, following a systems thinking approach can provide an extremely valuable structure within which to work. There are a number of systems thinking methodologies which are appropriate to training evaluation, and although it is beyond the scope of this article to look at these in detail, it would be useful to describe some of them briefly so that, if you are interested, you can research further.
Soft Systems Methodology (SSM) is a tool which is very useful for exploring different perspectives about a situation. It makes it possible to consider the implications of different world views about an organisational problem, and what actions need to be taken in order to improve matters.
Critical Systems Heuristics (CSH) is a tool for exploring boundaries, and considering how different power relationships lead to decisions being made about what or who to include or exclude.
The Viable Systems Model (VSM) is a cybernetic approach which looks at the relationships that are needed between different functions within an organisation if things are to work smoothly. By comparing and contrasting an ideal model with what the training is doing, it is possible to identify its strengths and weaknesses.
This article has tried to explain how systems thinking can contribute to improved training evaluation. It is not a replacement for Kirkpatrick’s framework or for the enhancements which have been made over the years, such as Phillips’ Return on Investment or Brinkerhoff’s Success Case Method. What systems thinking approaches can do is provide a structure within which these ideas can be explored, and sense made of what research shows is happening. They also ensure that we see training as just one of the factors influencing what people do and how well they do it, helping us to design training which works with the complex dynamics of any professional arena.
[This post was also published as a LinkedIn article]
For example, Kirkpatrick, D.L. (1994), Evaluating Training Programs: The Four Levels, Berrett-Koehler.