11. Formative evaluation versus program planning - it’s a fine line!

Number 11 in Dr Paul Duignan’s Three Minute Outcome video/article series.


What’s the dividing line between formative evaluation and program planning and design? It’s an interesting question that I’ve thought about a lot when working as an evaluator and an outcomes specialist.

And it’s a question that’s becoming increasingly interesting in any setting where the use of formative evaluation is growing at the same time as program planners are adopting a more holistic ‘design-based’ paradigm.


So what’s formative evaluation?


Formative evaluation is evaluation focused on optimising program implementation. It developed as evaluators became increasingly frustrated with what they were traditionally being asked to do: evaluate the outcomes of programs which, in many cases, obviously could have benefited from more robust initial planning.

In response to badly designed programs, evaluators initially developed the concept of ‘evaluability assessment’. This type of evaluation poses the question: ‘is this program sufficiently well planned and implemented to justify spending money on evaluating its outcomes?’

Expanding on this, evaluators decided that when they identified problems in program planning, they could offer to roll up their sleeves and help out. The tools they had developed as evaluators could potentially be of use within program planning - for instance, interviewing participants, stakeholders and staff about how a program is progressing and how it could be improved.

When talking about this issue in evaluation training I use an ‘archery analogy’. In this analogy, a program can be thought of as similar to firing an arrow down an archery range towards a target. There’s little point embarking on an expensive impact evaluation of whether or not a program’s ‘arrows’ have hit the target when you are not even certain that the archer is actually facing in the direction of the target.

Formative evaluation, therefore, is an attempt by evaluators to move ‘back down’ the program lifecycle from just measuring program outcomes and impacts to the planning end. Evaluators want to do this to make sure that the programs they are being asked to evaluate are well planned and well implemented.

Formative evaluation can be contrasted with another type of evaluation - summative evaluation. Summative evaluation is concerned with working out the overall merit and worth of a program and whether or not it has achieved its outcomes.

Now obviously, if evaluators are going to start focusing on using their evaluation toolkit earlier in the program lifecycle, then there may well end up being some role overlap with others. This role overlap will be between what evaluators are doing, or want to do, and what program designers are doing, or want to do.


The new ‘design-based’ emphasis in program planning


There is an additional factor adding to the complexity of the interaction between program planners and formative evaluators. These days some program planners are adopting a more holistic ‘design-based’ approach to the program planning task. From this perspective, planning is seen as a comprehensive, holistic design activity. This means that design-based planning can include areas that overlap with the kinds of activities formative evaluators are now interested in getting involved in.

For instance, ‘design-based’ program planners are likely to be undertaking: stakeholder consultation, prototyping, co-design, identifying outcomes and working out how to measure these outcomes. 

So given these two trends - evaluators moving back down the program lifecycle towards the earlier stages of program planning, and program designers widening their activity in a more holistic way - it is unsurprising that the areas of potential overlap between these two groups will grow.


Is there anything we can do to assist with role clarity?


Is there any way we can help clarify roles to smooth interactions between formative evaluators and program designers, so that everyone can add value to the process of getting better programs implemented?

A second analogy I use when training evaluators and program designers is to view the job of the formative evaluator as similar to the role that an automobile mechanic plays in some countries.

In some places, the country’s automobile ‘Warrant of Fitness’ system works like this: you take your car to a mechanic, who then undertakes two conceptually different tasks. The first can be seen as an assess role: the mechanic assesses whether or not your car meets the Warrant of Fitness requirements - is it free from any deficiencies that would make it unsafe to drive? Once they have undertaken this assessment, the mechanic can then offer to move into a second role - an assist role. In this second role they offer to fix some, or all, of the problems they identified when working in the first, assess mode.

If we think in these terms, then the formative evaluator’s first role is clear - this is the assess role. It is the second role - the assist role - where there is likely to be role conflict with program designers. 

In their assess role, formative evaluators will check for things such as: whether or not a culturally appropriate approach is being used in the program; whether there has been sufficient consultation with stakeholders; whether the program is drawing on evidence-based practice; whether it has developed an easily communicated intervention logic, theory of change or strategy model; and whether program planning has clearly sorted out what the program’s outcomes are, how these will be measured and what impact evaluation designs should be used to attribute improvements in outcomes to the program itself.

It is clear that all of these are appropriate tasks for formative evaluators working in their first, assess role. But as discussed above, once they have completed this assessment they can then offer to assist with any of the above areas where things could potentially be improved. For instance, because of their experience in developing intervention logics, theories of change and strategy models, evaluators can assist in building these.

Depending on the relationship between program designers and formative evaluators, responsibility for deciding who should do what will lie in different places. If the program designers are, in effect, employing the formative evaluators, it will simply be up to the program designers to decide what role the formative evaluators will play in an assist mode.

But in the more common situation where both the program designers and formative evaluators are employed by a funder or other third party, it is likely that the funder will decide who should do what. 

For instance, if they think that the formative evaluators have particular skills that would complement the program designers, they may want the evaluators to use them. In such a case they would likely insist that the formative evaluators work alongside the program designers, first in their initial assess mode and then, following that, in an assist mode to help out where they can add particular value.


Check out more articles/videos on strategy, outcomes and evaluating impact in Dr Paul Duignan’s Three Minute Outcome video/article series.

© Parker Duignan 2013-2019. Parker Duignan is a trading name of The Ideas Web Ltd.