Strategic evaluation



A paradigm shift in evaluation thinking


Duignan’s strategic evaluation approach (Duignan, 1997, 2004) represents a shift in evaluation thinking away from focusing solely on evaluating individual programs and towards meeting the strategic knowledge needs of an organization, sector, region or country. Evaluating individual programs is still part of the strategic evaluation approach, but only when the programs being evaluated have been identified as answering high-priority strategic questions for the organization, sector, region or country. 

The demands of making the conceptual shift to a strategic evaluation perspective should not be underestimated. It is best seen as a multi-year process involving changes in: stakeholders’ thinking; evaluation’s implicit theoretical underpinnings; organizational and institutional culture; organizational values; governance; strategy formulation; resource allocation; professional and sector capacity building and evaluators’ on-the-ground practice. 

The strategic evaluation approach is particularly useful for anyone working at an organizational or sector level who is charged with evaluating organization-wide, sector-wide, region-wide, country-wide or even multi-country-wide activity composed of a number of different individual programs. 




'Evaluation should focus on meeting the strategic knowledge needs of an organization/sector not just evaluating a program' 




Currently much evaluation planning starts from the point of view of working out how to ‘evaluate a program’. In fact, the evaluation discipline is often referred to as ‘program evaluation’. Strategic evaluation turns this orientation on its head and starts from the point of view of attempting to identify the strategic knowledge needs of the organization, sector, region or country in which any particular program is embedded. Once these knowledge needs are identified, one can work back from them and determine what type of evaluation is appropriate for each individual program. Not all programs merit the same level of evaluation effort. Programs following well-established intervention types which have been evaluated well in the past are usually a lower priority than more innovative and experimental programs.



Basic toolkit needed to start doing strategic evaluation work


Before we turn to looking in detail at what we are trying to do in strategic evaluation, it is worthwhile itemizing the basic toolkit which is needed for working successfully within a strategic evaluation approach. This involves a mix of conceptual and practical tools.

First, from a conceptual point of view, strategic evaluation needs a theoretical approach which does not see evaluation as isolated from other organizational, sector, regional and country-level tasks. These tasks include: outcomes identification; strategic planning; prioritization; indicator monitoring; performance management; commissioning; delegation; and contracting. This is why strategic evaluation is best viewed as a sub-set of outcomes theory - the theory that deals in an integrated and cross-disciplinary way with identifying and acting to achieve outcomes of any type in any sector (Duignan, 2009). (More information on outcomes theory is available from the simple overview of outcomes theory). 

Second, for strategic evaluation we need an approach to evaluation which does not focus exclusively on any one single ‘type’ of evaluation. We need to be open to using: implementation (developmental and formative), process, impact and summative evaluation as, and when, appropriate. Evaluation needs to be conceptualized, and ideally funded, in a way that allows evaluation resources to be allocated at the appropriate point in the life-cycle of a program. The type of evaluation to use will depend on the particular developmental stage of a program interacting with the priority knowledge needs of the sector in which the program is located. This is in contrast to evaluation funding being ring-fenced, as it sometimes is, to a particular type of evaluation. For instance, evaluation funding is sometimes available only for impact evaluation and not for formative evaluation (optimizing a program’s implementation).

Third, for strategic evaluation we ideally need a standardized way in which evaluations of individual programs are undertaken (where such program evaluation has been prioritized within the strategic evaluation approach). If information is going to be fed up to an organization or sector level, there should be some consistency in the way that such information is collected. This is in contrast to program-level evaluations being done in a wide range of somewhat idiosyncratic ways.

Fourth, given that thinking about organizational or sector knowledge needs is central to the strategic evaluation approach, the approach requires a way of coordinating, integrating and representing organizational or sector strategy in an integrated, scalable and accessible fashion. This is necessary because thinking about possible strategic direction is a prerequisite for identifying strategic knowledge needs for an organization, sector, region or country.

Fifth, using a strategic evaluation approach we need to be able to aggregate previous learnings relevant to what is being attempted in the organizational, sector or country-level strategy. 

Sixth, we need other tools for knowledge management around aggregating, summarizing, storing and extracting previous findings about ‘what works’ from research and evaluation and from current organizational, sector and regional intelligence.

Seventh, there also needs to be evaluation capacity-building at all levels - from a high-level overview of evaluation provided for governance, management and stakeholders right down to enhancing specific technical skills for those doing evaluation work. 

Eighth, strategic evaluation is not an evaluation approach that is in competition with other evaluation approaches. Its key feature is shifting the viewpoint in how we think about evaluation from seeing it as a program-centric activity to making it an organizational, sector or even country-level activity. It draws on, and is consistent with, aspects of a number of other evaluation approaches, ranging from utilization-focused evaluation to indigenous evaluation approaches.

Several of the points above are elaborated below.
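As a purely hypothetical sketch of the third point above (a standardized way of undertaking and reporting program-level evaluations), the record fields and grouping below are illustrative assumptions, not part of the strategic evaluation approach itself. They show how findings captured in a common format could be rolled up to answer sector-level strategic questions:

```python
# Hypothetical sketch: a common record format for program-level
# evaluation findings, so results can be aggregated at sector level.
# All field names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EvaluationRecord:
    program: str
    evaluation_type: str      # e.g. "formative", "process", "impact"
    strategic_question: str   # the sector-level question it addresses
    findings: list = field(default_factory=list)

def by_question(records):
    """Group findings by the strategic question they help answer."""
    grouped = {}
    for r in records:
        grouped.setdefault(r.strategic_question, []).extend(r.findings)
    return grouped

records = [
    EvaluationRecord("Program A", "impact", "Does outreach raise uptake?",
                     ["Uptake rose at pilot sites"]),
    EvaluationRecord("Program B", "process", "Does outreach raise uptake?",
                     ["Delivery was consistent across sites"]),
]
print(by_question(records))
```

Because every program reports against the same fields, findings from quite different evaluations can be compared and combined at the organization or sector level rather than remaining locked in idiosyncratic report formats.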



Coordinating, integrating and representing strategy


As noted above, if we are going to use the strategic evaluation approach within an organization, sector, region or country, we are going to need to integrate and collate the underlying strategy that is being pursued. 

Often we will be working in organizations, sectors, regions or countries where strategy has not yet been integrated, or has only been integrated in a partial or fragmented manner.

When deploying a strategic evaluation approach, one of the first steps is to bring together, as best we can, the underlying strategy of the organization, sector, region or country we are focusing on. Often ‘strategy’ will consist of different snippets in different formats in different places, in addition to whatever is contained within any available formal strategic planning documentation. 

One way that strategy can be collated and integrated is to use a tool we have developed specifically for this purpose: visual strategy modeling. This approach consists of bringing together a group of people in a workshop. They bring to the workshop all of the strategic documentation they can locate regarding their organization, sector etc. This may be in various formats, for example: one or more strategic plans; outcomes lists; targets; vision and mission statements; indicator lists; outputs lists; literature reviews etc.

A visual outcomes modeling process is then used to build a visual strategy model summarizing the underlying strategy being pursued by the organization or sector. A visual strategy model shows all of the high-level outcomes being sought and the steps that it is believed need to occur in order to achieve them. Using a tool such as this means that you can quickly develop an integrated strategy model. This gives you both a helicopter and a drill-down view of your overall strategic direction, which you can use to work out your key strategic information needs as the basis for the strategic evaluation approach.
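Purely as an illustration (the outcome names and structure below are hypothetical, and this is not how the DoView tool itself works), a visual strategy model can be thought of as a directed graph in which each step links to the higher-level outcome it is believed to contribute to, supporting both the helicopter view and the drill-down view:

```python
# Illustrative sketch only: a strategy model as a directed graph of
# outcomes, where each step points to the higher-level outcome it is
# believed to contribute to. All names here are hypothetical.

class Outcome:
    def __init__(self, name, level):
        self.name = name
        self.level = level          # e.g. "high-level" or "step"
        self.contributes_to = []    # higher-level outcomes this feeds

    def link(self, parent):
        self.contributes_to.append(parent)
        return self

def drill_down(model, outcome_name):
    """Return the steps that feed directly into a named outcome."""
    return [o.name for o in model
            if any(p.name == outcome_name for p in o.contributes_to)]

# Build a tiny model: two steps feeding one high-level outcome.
top = Outcome("Improved community health", "high-level")
model = [
    top,
    Outcome("Increased screening uptake", "step").link(top),
    Outcome("Better health literacy", "step").link(top),
]

print(drill_down(model, "Improved community health"))
# → ['Increased screening uptake', 'Better health literacy']
```

Representing the model this way is what makes it scalable: the same drill-down operation works whether the model has three boxes or three hundred.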




Aggregating, summarizing, storing and accessing current organizational, sector, regional or country knowledge


Strategic evaluation requires effective knowledge management of research and evaluation findings plus organizational, sector or regional intelligence. This information needs to be aggregated, stored, summarized and kept up to date. It then needs to be provided in an accessible format for those who are involved in deliberating on what organizational, sector or country strategic knowledge needs are. This requires knowledge management databases and ways of interfacing with them.


[One tool that Parker Duignan Consulting uses in this context, in addition to knowledge management databases, is to map previous research and evaluation evidence, and sector and regional intelligence, directly onto a DoView Strategy Model or similar visual outcomes model and then use this as the basis for discussions with stakeholders. Stakeholders can use it to identify priority new knowledge needs by referring to what is already known, as visualized on the strategy model].





Evaluation capacity building at all levels 


Evaluation capacity needs to be built at all levels of the organization, sector or region in which the strategic evaluation approach is being applied. This will involve developing a common language for describing evaluation so that parties do not ‘talk past each other’ when discussing evaluation. This language for evaluation needs to be rich enough so that it allows for resources to be allocated to whatever type of evaluation (formative, summative, process and impact evaluation) is needed in order to answer a strategic knowledge need.

An essential part of strategic evaluation is to create realistic expectations amongst stakeholders about what is, and what is not, appropriate, feasible, affordable and credible in terms of impact evaluation. If expectations are unrealistic, evaluation resources can be misallocated and wasted by being focused on impact evaluations of individual programs without considering the priority of that particular evaluation spend. This is not to diminish the importance of individual program impact evaluations, just to say that they should always be undertaken with an awareness of the current strategic knowledge needs of the sector in which the program is being run. 

[The tool that Parker Duignan Consulting sometimes uses to introduce realism into stakeholders’ expectations regarding evaluation is Duignan’s Impact Evaluation Feasibility Check (Duignan, 2008). This is a tool which provides a standard framework of seven possible impact evaluation designs. Each of them is assessed for its appropriateness, feasibility, affordability and credibility]. 






Ongoing organizational, sector, regional or country-wide dialogue 


Ideally a strategic evaluation approach will have in place regular settings where parties from organizations, sectors, regions or countries can have input into identifying current strategic knowledge needs and have findings reported back to them. Typically this consists of periodic meetings, workshops, consultations and other two-way communication processes. 




Having evaluation models which can deal efficiently with 'distributed interventions'


One consequence of strategic evaluation’s shift from ‘program-centric’ thinking to thinking from an organization, sector, region or country-wide perspective, is that one starts thinking in terms of ‘distributed interventions’. This is in contrast to continuing to view interventions as just a series of separate programs. 

It is often the case that a central agency of some sort wants a large number of interventions focused on similar outcomes to be implemented in a range of different organizations, sites or localities. 

What is the most efficient approach to evaluating such multi-site initiatives? It is costly, for instance, to attempt impact evaluation at every site. From a strategic evaluation perspective it is useful to have an approach for undertaking evaluation cost-effectively in such situations. 

[One tool sometimes used by Parker Duignan Consulting in this situation is the Group Action Planning Approach. This brings together representatives from each of the sites right at the beginning of the process and they work together to plan, implement and evaluate the overall program and their work at each site]. 




Relationship between strategic evaluation and other evaluation approaches

There are a number of different evaluation approaches (Duignan, 2003). These include: systems thinking evaluation; utilization-focused evaluation; empowerment evaluation; stakeholder-based evaluation; realist evaluation; goal-free evaluation; naturalistic or 4th generation evaluation; theory-based evaluation; and indigenous evaluation (e.g. Kaupapa Maori evaluation). Strategic evaluation has elements in common with a number of these approaches, as described below.

In terms of its relationship to systems thinking evaluation, strategic evaluation takes a broad systems approach in the sense that it focuses on the wider system in which a particular program is located rather than limiting thinking to the evaluation of a specific program. However, systems thinking is also a discipline in its own right, with its own formal language and concepts. Aspects of systems thinking in this formal sense can be used within a strategic evaluation approach, but strategic evaluation is not limited to, or tied to, systems thinking’s technical language and concepts. 

Along with utilization-focused evaluation, strategic evaluation is preoccupied with the users of evaluation. It is very much in the spirit of utilization-focused evaluation in that it is driven by how people will use evaluation results. In practice, the utilization-focused approach is often applied to evaluating a specific program, and it can be used within strategic evaluation once a particular program has been determined to be a high priority for evaluation. 

In the same way, stakeholder-based evaluation focuses on stakeholder information needs and has a lot in common with strategic evaluation. It too is often focused on evaluating a specific program and can be used within the strategic evaluation approach once a particular program has been identified as a high priority for evaluation. 

Realist evaluation is an approach that puts an emphasis on finding out: ‘what works, to what extent, for whom, in what contexts, and how?’ From the point of view of strategic evaluation, having this level of granularity in evaluation findings is a great ideal to strive for. Realist evaluation also moves the emphasis from evaluating ‘Program X’ to evaluating the components inside it. Strategic evaluation can be seen as attempting to identify the priorities from the large ‘shopping list’ of what could potentially be found out if one had the resources to adopt a fully realist evaluation approach.

Goal-free evaluation starts from the proposition that a program’s stated goals may not capture the outcomes it should ‘really’ be seeking in the domain in which it is working. It is then a matter of working out what the program’s ‘real’ goals should be. This has a lot in common with strategic evaluation, in that doing so requires a sector-wide perspective on the high-level outcomes being sought. You then work back from this wider perspective to define the ‘real’ goals against which a program should be evaluated. Within strategic evaluation this is done by developing a model of what an organization or sector is seeking through its activity. From this overall organizational or sector model one can work back to identify the role of a particular program in achieving outcomes and evaluate whether or not it is doing so.

Naturalistic or 4th Generation evaluation is an evaluation approach with a particular philosophy of science regarding methodology (constructivist rather than positivist) and an emphasis on qualitative approaches. Strategic evaluation is open to the selection of methods to suit the evaluation task, and methods from Naturalistic or 4th Generation evaluation can be used with it.

Theory-based evaluation emphasizes looking at the wider context in which a program is being implemented and the importance of surfacing program theory - the theory of change that it is believed will lead to improvements in outcomes. Both of these perspectives are consistent with strategic evaluation. 

Indigenous evaluation (e.g. Kaupapa Maori evaluation) takes a values, philosophy of science and cultural perspective based on indigenous views of the world. As a consequence, it looks at the context in which programs take place, for instance: power; resource allocation; institutional racism; and indigenous sovereignty. Taking into account these sorts of wider perspectives is the type of thing the strategic evaluation approach seeks to encourage. The key question indigenous evaluation raises when using the strategic evaluation approach is: ‘whose knowledge needs are priorities for being answered?’

Summary

The strategic evaluation approach has much to offer those who are working at an organization, sector, region or country-wide level. This is because its viewpoint starts from that level rather than just focusing on the lower-level question of evaluating individual programs.





Workshops and mentoring on the strategic evaluation approach

We run tailored, face-to-face and online workshops on using the strategic evaluation approach for organizations and sectors. These build skills in using the approach and use live examples from the organization or sector to link learning directly to the task at hand. We also provide mentoring for professionals involved in evaluation, outcomes and strategy work, and draw on the strategic evaluation approach in this mentoring work.




Duignan, P. (2009). Using Outcomes Theory to Solve Important Conceptual and Practical Problems in Evaluation, Monitoring and Performance Management Systems. American Evaluation Association Conference 2009, Orlando, Florida, 11-14 November 2009.

Duignan, P. (2008). Encouraging Better Evaluation Design and Use Through a Standardized Approach to Evaluation Planning and Implementation. 8th European Evaluation Society Conference, Lisbon, October 2008. 

Duignan, P. (2004). Outline of the Strategic Evaluation Approach. Presentation to the Annual Conference of the American Evaluation Association, Atlanta, Georgia, USA, 3-6 November 2004.

Duignan, P. (2003). Approaches and terminology in programme and policy evaluation. In Evaluating Policy and Practice: A New Zealand Reader. N. Lunt, C. Davidson and K. McKegg. Auckland, Pearson Education.

Duignan, P. (1997). Evaluating health promotion: the Strategic Evaluation Framework, D.Phil., University of Waikato, Hamilton.


 



Please contact us if you have any questions about the strategic evaluation approach or any other aspects of our outcomes and evaluation work.

© Parker Duignan 2013-2018. Parker Duignan is a trading name of The Ideas Web Ltd.