
Dialogue between centre and periphery

Dialogue between a Government Department and Service Delivery

 

We were asked to work with a Department that wanted to support service improvement.  We had identified Whole System Simulation Modelling as a device to support genuine dialogue between people whose attention is principally focused on national policy, local strategy, or service delivery.  These models are hybrids that simulate the flows arising from the interplay between the decision rules of semi-autonomous agents (such as front-line staff and people who use services) and the capacities available in the system.
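
To make the idea concrete, the sketch below is a toy illustration of such a hybrid model.  It is not the model used in this work, and all names, rules and numbers are invented: referrals arrive, a simple front-line triage rule decides who is seen first, and a fixed weekly clinic capacity constrains the resulting flows and waits.

    import random

    # Illustrative sketch only: a toy hybrid simulation in which agents'
    # decision rules (a triage rule) interact with a fixed service capacity.
    # All rules and numbers are invented for illustration.

    WEEKS = 52
    WEEKLY_CAPACITY = 30      # appointments available per week
    ARRIVALS_PER_WEEK = 32    # average referrals arriving per week

    def triage(referral):
        """Front-line decision rule: classify a referral as urgent or routine."""
        return "urgent" if referral["severity"] > 0.7 else "routine"

    def simulate(seed=1):
        random.seed(seed)
        queue = []            # referrals waiting to be seen
        waits = []            # completed waits, in weeks
        for week in range(WEEKS):
            # New referrals arrive, each with a randomly drawn severity.
            for _ in range(random.randint(ARRIVALS_PER_WEEK - 5, ARRIVALS_PER_WEEK + 5)):
                queue.append({"severity": random.random(), "arrived": week})
            # Urgent cases are seen first; the rest wait. The interplay of this
            # rule with the capacity constraint is what generates the flows.
            queue.sort(key=lambda r: triage(r) != "urgent")
            seen, queue = queue[:WEEKLY_CAPACITY], queue[WEEKLY_CAPACITY:]
            waits.extend(week - r["arrived"] for r in seen)
        return sum(waits) / len(waits), len(queue)

    mean_wait, backlog = simulate()
    print(f"mean wait: {mean_wait:.1f} weeks, backlog after a year: {backlog}")

Re-running such a simulation with a different capacity or a different decision rule shows how local behaviour and system capacity together produce the aggregate flows, which is the sense in which a model of this kind can act as a 'policy laboratory'.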

We established core groups in three areas of the country, and each invited a wider group of 40-50 stakeholders to a system mapping workshop.  The purpose of this was not to optimise a care pathway but to acknowledge the complexity of the environment in which participants were working and to catalyse behavioural change.  These workshops were productive in their own right, leading to improved relationships and acting as a trigger for further, more focused interventions.

At these workshops we showed a demonstration Whole System Simulation Model.  Clinicians and front-line staff were fascinated because they could recognise how their decision-making principles could be incorporated into the model.  Managers recognised its potential value in making option appraisal more transparent, and in linking aggregate indicators of activity and resource use with clinical behaviour and patient choice.  We could also see how such local models could act as a ‘policy laboratory’ in which the Department could explore the system-wide consequences of proposed policy initiatives.

Dialogue between Funders, Providers, Users and Evaluators

 

A few years ago we delivered a national development programme, and we became convinced of the value of regular interaction between the delivery team and the evaluators.  This allowed the evaluation to be formative as well as summative, contributing to improvements in the way the programme was delivered.  The evaluators also worked directly with the local people who were the ‘users’ of the programme, helping them to evaluate their own work and to provide feedback to the delivery team.  What was missing was a way of engaging the funders of the programme other than through written reports and formal presentations.

Evaluation is a process that consists of (a) gathering evidence and (b) making evaluative judgements.  Its purpose is to link that evidence and those judgements with action: with choices about how, in future, to go about achieving the purposes of the activity being evaluated.

 

There is a wide range of evaluation methodologies.  They differ in their approaches to gathering evidence, making evaluative judgements, and linking these with action, and particularly in who is considered responsible for each of these.

 

We were recently commissioned to evaluate a national programme.  Our mental model, as evaluators, was to think of the stakeholders (participants, implementation team, funders and evaluators) as agents in a complex adaptive system.  This led us to anticipate that different people would see things differently (for example, in what counts as evidence and what criteria are appropriate for making judgements), and that significant choices about what happens after the programme would be made by all the stakeholders, not just by the funders or potential funders of similar programmes.

 

As evaluators we observed the programme; engaged in formative debriefing meetings with the delivery team; provided regular written reports; and interviewed many of the participants.

 

The final workshop was highly innovative in that it successfully engaged the participants, the delivery team and the funders in evaluative dialogue.  We presented the evidence we had gathered.  Participants, delivery team and funders then engaged in small-group and large-group dialogue about their own experience of the initiative and about the evidence presented by others.  This generated further evidence and a range of recommendations.  The workshop concluded with dialogue about the evaluative judgement itself: was this initiative worth the time, resources and effort invested?

 

This Whole System approach to evaluation provided a rare opportunity for participants in a programme to talk with (rather than simply feed back to) an implementation team, and an even rarer opportunity for the funders of a programme to talk with the participants and the implementation team rather than just receive an evaluation report.

