Critical Systems Heuristics (CSH) is a systems thinking tool developed within what is sometimes known as the 'emancipatory' or 'critical' realm of the discipline. As such, it places considerable emphasis on exploring the power relationships which influence the system of interest. CSH as a tool was formalised by the Swiss academic Werner Ulrich, drawing extensively on the work of C. West Churchman. The tool proposes that the behaviour of any system, including a system for training people, is influenced from four perspectives:
- Motivation, who gets what from the system (trainees, organisation, trainer, customers, etc.).
- Control, who controls how the system operates (managers, trainers, trainees, etc.).
- Knowledge, what the system contains and how this is decided (learner or trainer, for example).
- Legitimacy, who is affected by the system but excluded from it.
For each of these perspectives, we need to think about what the perspective means and who is involved, and also what the key issues may be. This 4 x 3 structure is usually represented in a matrix for clarity.
Thirdly, the 12 questions in the matrix are then each asked twice: first, what the answer is, and then what the answer should be. This step is crucial, as it exposes the power dynamics influencing how the training has been designed and delivered.
This probably seems quite a complicated process, but the table opposite shows how the 12 basic questions can be used in a training evaluation.
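The structure of the matrix can be made concrete with a short sketch. The snippet below generates the full set of questions: four perspectives, three question categories per perspective, each cell asked in 'is' and 'ought' mode. The category labels and question stems are illustrative placeholders, not Ulrich's canonical wording.

```python
# Sketch of the CSH question matrix: 4 perspectives x 3 categories
# = 12 boundary questions, each asked in 'is' and 'ought' mode.
# Labels and stems are illustrative, not Ulrich's exact phrasings.

PERSPECTIVES = ["Motivation", "Control", "Knowledge", "Legitimacy"]
CATEGORIES = [
    "who is involved",           # the stakeholder
    "what the perspective means",  # the stakeholder's concern
    "key issue",                 # the critical issue to probe
]
MODES = ["is", "ought to be"]

def boundary_questions():
    """Yield all 24 questions: 12 matrix cells, each in both modes."""
    for perspective in PERSPECTIVES:
        for category in CATEGORIES:
            for mode in MODES:
                yield f"{perspective} / {category}: what {mode} the answer?"

questions = list(boundary_questions())
assert len(questions) == 24  # 4 x 3 x 2
```

Pairing each cell with its 'is' and 'ought' versions makes the crucial third step explicit: any gap between the two answers for a cell is a prompt to examine the power dynamics at work.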
Why is this useful?
Adopting this critical systems thinking approach can help generate a much deeper evaluation of a training programme. The standard, Kirkpatrick-based, approach is defined very much within the boundaries of the organisation (changes in employee behaviour, impact on performance), whereas this critical approach helps us to look more broadly.
Here are some examples.
- With the benefit of hindsight, were the original design objectives for the programme appropriate? (Motivation)
- Has the programme been implemented effectively? (Control)
- Was the knowledge content of the programme appropriate? (Knowledge)
- Does the evaluation gather opinions from people who were affected by the training (customers, for example)? (Legitimacy)
Asking these questions has been very useful in evaluations that I have personally conducted. For example:
- When evaluating a training programme aimed at helping improve the skills of people involved in coordinating voluntary sector organisations, I interviewed people from these organisations (outside the conventional boundary of an evaluation).
- When evaluating a management training programme, I questioned the validity and relevance of the set of management skills incorporated within the programme.