The Top 10 Questions: A Guide to Evaluating Place-Based Initiatives

Author: Dr. Sanjeev Sridharan, St. Michael's Hospital and University of Toronto
Document Type: Policy Brief
Published: October 1, 2011
Catalogue Number: PH4-101/2011E-PDF
ISBN: 978-1-100-19483-7

The following is a "how to" approach to evaluating place-based initiatives. We discuss ten questions that an evaluation of place-based initiatives needs to address, although they could also help inform the evaluation of programs and projects that are not necessarily place-based. The questions are reflective rather than prescriptive. They are intended to inform evaluation planning for complex multi-site place-based initiatives.

1. WHAT EVIDENCE INFORMED THE DEVELOPMENT OF THE INITIATIVE?

What are the key uncertainties? How will the evaluation help enhance the evidence base?

The evaluation needs to be explicit about the evidence that informs the initiative, as well as the areas of uncertainty, in order to understand the types of communities for which this model is most likely to work. How exactly will the investment in evaluating the initiative contribute to building an evidence base for future experiments of this kind?

2. WHY BOTHER WITH THE EVALUATION?

What are its multiple purposes?

Evaluations often invoke concern. Smart learning organizations need to constantly communicate the multiple purposes of evaluation to their stakeholders, which can include:

  • clarifying the nature of the activities and how they are expected to achieve outcomes, surfacing the assumptions about how the activities affect short-, intermediate- and long-term outcomes;
  • helping to define what success means to different stakeholders;
  • examining whether some of these assumptions are being met (note: not all assumptions are testable);
  • testing if the place-based initiative is "working".

3. WHAT IS THE THEORY OF CHANGE FOR THE PLACE-BASED INITIATIVE?

Will the Theory of Change be different for different communities?

Theories of Change often guide the implementation of place-based approaches, outlining how the initiative is expected to lead to improvements. Most place-based approaches have the freedom to adapt the Theory of Change to their own community context, but evaluation frameworks frequently treat this heterogeneity as background "noise" to be filtered out. In evaluations, one needs to:

  • be explicit about the mechanisms needed to affect outcomes and capture the ways in which different communities implement these mechanisms differently;
  • define what key terms in the program theory (such as community capacity or community social capital) mean for different communities before moving to operationalize and measure the concepts. The measurement cart should not drive the conceptual horse: conceptualization should respect heterogeneous understandings across communities and stakeholders;
  • be explicit about the key assumptions in the program theory and which of these assumptions will be tested by the evaluation;
  • describe the theory in a way that can aid implementation and accommodate local adaptation;
  • describe a process and strategy for updating the Theory of Change as lessons are learned from the implementation of the initiative.

4. HOW CAN AN ANTICIPATED TIMELINE OF IMPACT AND AN ANTICIPATED TRAJECTORY OF IMPACT HELP?

What is surprising about Theories of Change is that they rarely include an explicit understanding of the timeline of impact. An anticipated timeline of impact can provide information that the logic model does not supply, specifically when changes in key outcomes are likely to occur. It is important that this be based on realistic experience and not just aspirations.

  • Involve stakeholders in developing the anticipated timeline and trajectory of impact. Some initiatives can be expected to show a gradual or sharp improvement in indicators, while others can be expected to worsen before they improve (see the sketch after this list).
  • Document disagreement between stakeholders. Use a participant-driven inductive approach like concept mapping to better understand stakeholder views on the timeline of impact and trajectory of impact.
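
To make the idea concrete, the following is a minimal sketch (in Python) of how an anticipated trajectory of impact could be recorded and compared against observed indicator values. The community names, indicator scores and tolerance threshold are hypothetical illustrations, not drawn from the brief.

    # Hypothetical anticipated trajectories agreed with stakeholders.
    # Values are illustrative indicator scores, not real data.
    anticipated = {
        "community_a": {2012: 50, 2013: 53, 2014: 57, 2015: 62},  # gradual improvement
        "community_b": {2012: 50, 2013: 46, 2014: 49, 2015: 55},  # worsens before improving
    }

    def departures(observed, expected, tolerance=3):
        """List years where an observed indicator departs from the
        stakeholder-agreed trajectory by more than `tolerance` points."""
        return [(year, observed[year], target)
                for year, target in expected.items()
                if year in observed and abs(observed[year] - target) > tolerance]

    observed_b = {2012: 50, 2013: 41, 2014: 42}
    print(departures(observed_b, anticipated["community_b"]))
    # [(2013, 41, 46), (2014, 42, 49)]: a prompt for dialogue, not a verdict of failure

The same dip in an indicator reads very differently against a "worsen before improving" trajectory than against a "steady improvement" one, which is why the anticipated trajectory needs to be agreed with stakeholders in advance.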

5. HOW DOES THE MONITORING AND EVALUATION DESIGN HELP ASSESS IMPACT?

A fundamental step in the evaluation framework is developing clarity on the design needed to understand if an intervention is "working". A good design can help rule out alternative explanations for changes in key outcomes over time. Elements to consider for the design include:

  • reflection on what successful impact means for an intervention;
  • clarity on the timeline of impact;
  • clear and reliable measures to study the impact of the place-based intervention – the measures need to be informed by the Theory of Change;
  • measures of the dynamic contexts and mechanisms that might be necessary for the intervention to work;
  • ideally, the design should integrate both monitoring and evaluation approaches: monitoring tracks progress on selected indicators against targeted goals, while evaluation studies the "why" or "why not" of performance and suggests remedial action when performance falls short of expectations (a worked illustration follows this list).
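
As one illustration of ruling out alternative explanations, the sketch below works through a simple difference-in-differences comparison. This is one common design choice, not a design the brief prescribes, and all numbers are hypothetical.

    # Hypothetical indicator averages for intervention and comparison communities.
    before = {"intervention": 48.0, "comparison": 47.5}
    after = {"intervention": 56.0, "comparison": 50.0}

    # Change in each group over the same period.
    change_intervention = after["intervention"] - before["intervention"]  # 8.0
    change_comparison = after["comparison"] - before["comparison"]        # 2.5

    # Subtracting the comparison group's change nets out trends that affect
    # all communities alike (e.g., an economy-wide shift).
    estimated_impact = change_intervention - change_comparison            # 5.5
    print(f"Estimated impact net of shared trends: {estimated_impact:.1f} points")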

6. HOW WILL THE ANALYSIS OF THE MONITORING AND EVALUATION DATA GENERATE USEFUL INFORMATION?

The evaluation also needs to discuss how the information being collected will help individual initiatives adapt their activities based on learning. Clarity about the evaluation methodology, including the range of analytical techniques, is important for decisions about sustaining programs or informing improvements in individual initiatives.

  • Discuss how the information will be useful to individual initiatives. This builds trust and support for the evaluation and contributes toward better decisions.
  • Think explicitly about how the analysis will be conducted to generate timely information for the initiatives, and how it will help inform the decision to continue the program.
  • Develop a coordinated data strategy: it is critical that data on the key performance measures not be collected piecemeal, and the evaluation team has a key role in coordinating collection. Be clear about what kinds of data are to be collected and for what purpose, as well as who is in charge of the data collection system. Data collection needs to be informed by the Theory of Change (one way to record this mapping is sketched below).
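
One way to keep collection from becoming piecemeal is a simple registry that ties every measure to a Theory of Change outcome, a purpose and a steward. The sketch below is a hypothetical illustration; the field names and example entries are assumptions, not part of the brief.

    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str          # what is collected
        toc_outcome: str   # which Theory of Change outcome it evidences
        purpose: str       # "monitoring", "evaluation" or "both"
        steward: str       # who is in charge of collecting it
        frequency: str     # how often it is collected

    registry = [
        Measure("school_attendance_rate", "improved youth engagement",
                "monitoring", "school board liaison", "quarterly"),
        Measure("partner_network_density", "stronger community social capital",
                "evaluation", "evaluation team", "annually"),
    ]

    def unlinked(registry, toc_outcomes):
        """Measures not tied to any Theory of Change outcome are candidates
        for the piecemeal collection the strategy should question."""
        return [m.name for m in registry if m.toc_outcome not in toc_outcomes]

    print(unlinked(registry, {"improved youth engagement"}))
    # ['partner_network_density']: either link it to an outcome or stop collecting it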

7. WHAT WILL GET GENERALIZED AT THE END OF THE EVALUATION?

And how will learning be spread?

Be clear about what learning can be generalized and used to inform the development of other initiatives. Will the evaluation be making recommendations regarding scaling up or replicating the initiative? What kinds of learning will be spread as a result of the evaluation?

  • Dialogue with key stakeholders early in the initiative, and periodically over its course, to clarify what kinds of learning from the evaluation need to be "spread."

8. WHAT ARE THE UNINTENDED CONSEQUENCES OF THE INITIATIVE?

Much of the existing literature on place-based initiatives assumes that only good will come of the place-based initiative. However, there is also a distinct possibility that place-based initiatives will result in less-than-favourable impacts.

  • Pay attention to unintended consequences as part of the dialogue and reflection between funders and other stakeholders. This is facilitated by considering the mechanisms by which initiatives affect outcomes.

9. THE "SO WHAT"?

How did the place-based initiative improve lives or ecosystems?

Good evaluations are ultimately about performance stories; there are few more credible performance stories than those that reflect on how and why investments in place-based initiatives made a difference in the lives of individuals or the state of the ecosystem. Such impacts might lie outside the immediate sphere of influence of the place-based initiative and might take a long time to materialize.

  • Discuss how lives or ecosystems were impacted.
  • Develop performance stories that describe the various mechanisms that achieved these results.

10. WHAT IS AN EXPLICIT LEARNING STRATEGY TO PERIODICALLY UPDATE THE THEORY OF CHANGE?

An evaluation in itself is unlikely to lead to real learning and improvements in implementation without an explicit learning strategy. Plan for how the place-based initiative can take proactive steps to learn from the information that is gathered and improve performance over time. A learning framework can identify the types of learning that are relevant to the initiative.

  • Organizational learning: what organizational structures are needed to support the coordination of programs and policies to support the place-based initiative, and what is the "active ingredient" of the place-based initiative?
  • Process learning: What were the challenges of moving from the strategic plan to implementation, and from implementation to sustainability? What is the "collaborative advantage", if any, and does this change over time?
  • Risk landscape of individuals: How was the intervention used to learn about the multi-level risk and protective factors associated with individual outcomes? Is there evidence that such risks are "malleable" and are impacted by a coordinated partnership approach? Are there demonstrable impacts on individual outcomes?
  • Update the Theory of Change based on learning. Share what you have learned both within your team and with others. Learning can occur through consideration of, and reflection on, the evidence gathered through monitoring and evaluation efforts. Communicate broader learning to other place-based initiatives that may not be as advanced or influential.

Note: These questions are based on a paper commissioned by Policy Horizons Canada (formerly the Policy Research Initiative), as part of a larger project funded by an inter-departmental committee of place-based practitioners within the Government of Canada. The full paper explores emerging approaches in the evaluation of place-based initiatives, and can be provided upon request by contacting questions@horizons.gc.ca.

