Location, Location, Location: Putting Evaluation in "Place"
All links were valid as of date of publication.
What’s in a Word: PLACE-BASED
Place-based approaches use local actors, knowledge and resources to provide coordinated, locally relevant responses to issues seen as too complex and long-term to be solved by a single actor. Examples include neighbourhood poverty reduction, ecosystem protection and public health promotion strategies.
As a community, Prince George is different from Brampton, which is different from St. John’s. Place-based approaches are bottom-up interventions that acknowledge the impact local realities can have on program effectiveness. These are interventions that use multi-sectoral collaboration to tackle complex issues, and they take place in disparate communities across the country.
Determining which interventions work best, under which circumstances, and why, is fundamental to forming effective policy at the local and national levels. Policy Horizons Canada responded to this need by launching a project on the evaluation of place-based approaches from a national government perspective. Having explored the characteristics of place-based approaches, and the evaluation challenges associated with these characteristics (see discovery paper and briefing note), the project sought to dig deeper, and commissioned several papers from experts in the evaluation field. The goals of the project included exploring the evidence base that has been established around place-based approaches; exploring stakeholder experiences with evaluation processes; and identifying innovative evaluation methods and tools. While not intended to be all-encompassing, these papers focused primarily on place-based initiatives with federal government involvement, spanning a number of policy domains that represent the key pillars of sustainability.
The following are key findings from each of the papers; the full papers are available on request.
Common Themes: This App Would Like to Access Your Location Data
Our society is shifting – personalized services and user-generated content are emerging all around us, and government policies are being pulled in new directions that reflect these changing expectations. Increasingly, the federal government is using multi-level, multi-sector partnerships to create tailored programs and to enhance its ability to address policy issues that cross mandates. However, traditional evaluation approaches have a hard time reconciling decentralized objectives within a national framework. Evaluations have documented successful outcomes in place-based initiatives; however, documenting results has been made more difficult by an accountability-based evaluation orientation. The evaluations studied typically focused on pre-determined program outcomes, and treated local adaptations as irregularities that needed to be sifted out in order to assess the program logic. As a result, key local objectives were not measured. Longer-term outcomes and so-called “soft” outputs like capacity building were also often left out of evaluations. Consequently, it is argued that place-based policy is being driven by what can be easily measured, a situation said to be limiting creativity in policy and program design, as well as having implications for the self-determination of First Nations communities.
Evaluations are instruments of both accountability and policy design. Finding evaluation approaches that enable place-based initiatives is an important step in achieving the policy objectives that led to their adoption in the first place: policy flexibility to adapt to local context; collaborative process to co-ordinate across silos; and local empowerment to enable innovation and experimentation.
Exploring the Effectiveness of Place-based Program Evaluation
Meyer Burstein and Erin Tolley
This report explores the evaluation of nine place-based programs that have involved the federal government as a key funder or partner. It is based on a series of interviews with evaluators, federal program officers, and community representatives.
Within the evaluations studied, the authors identify a pattern of two approaches: one focused on accountability requirements and one focused on best practices. Programs that engaged local partners, but where the federal government was the lead partner or funder, tended to use summative evaluations conducted by government actors, and were much less likely to incorporate on-going evaluation or an orientation toward best practices. Programs with multiple partners and collaborative decision-making processes emphasized program improvement and on-going evaluation. Their evaluations tended to be conducted by evaluators external to government and included more alternative methods like developmental evaluation, action research and network analysis.
Federal and community evaluators indicated that they did not have the data required to evaluate the programs, while place-based practitioners noted that the federal government frequently required summative evaluations before outcomes could be realized. Although the report finds a number of promising practices, it suggests that the impact of place-based interventions is not best captured by traditional assessment tools. Rather, these interventions demand new evaluation instruments, methods, and data that are calibrated to capture so-called “soft” outputs and longer-term outcomes. The review suggests that evaluation be taken into account early on and be integrated into the program throughout its lifespan.
Place-based Evaluation in a First Nations Context: Something Old, Nothing New, Often Borrowed, and Frequently Blue
Robert Shepherd, PhD
Robert Shepherd examines Canadian government approaches to the evaluation of place-based initiatives through the lens of First Nations economic development. The paper argues that evaluations of First Nations programs tend to focus at the level of the national program. Local participation was seen to be limited to consulting on appropriate data collection techniques to address national research concerns, while local learning, results and objectives were left largely unexplored.
The paper notes a new willingness on the part of departments to focus evaluations on understanding differences among First Nations communities. The report suggests incorporating community-level needs into evaluation designs and research questions to enable learning that is relevant and usable at the local level, resulting in more reliable data collection and improved data quality. National program evaluations would then be able to draw conclusions about the successful adaptation of projects and about local results, rather than focusing on a roll-up of national indicators. This would represent a significant shift in what is traditionally considered the subject of the evaluation (the evaluand), and in which results are considered important to document in accountability-based evaluation.
Evaluation of Integrated Management Initiatives
Livia Bizikova, Darren Swanson and Dimple Roy
This paper examines the evaluation of place-based approaches to environmental issues (integrated management) using several case studies, identifies emerging trends, compares the approaches and frameworks applied, and makes observations regarding the state of knowledge on long-term impacts.
The projects studied monitored both environmental outcome-oriented indicators and process-based indicators looking at governance, planning, stakeholder participation and data dissemination. The inclusion of governance indicators represents a step forward in the field; however, information on longer-term environmental outcomes is still limited. The paper documents successful improvements in environmental conditions due to the place-based approach employed, including in the Lake Champlain Basin (U.S. and Canada), the Rhine River Basin (Europe) and South Tobacco Creek (Canada).
The paper identifies trends, including an important role for both local and national governments, noting in particular a role for regional and national governments in setting standards, guiding good practices, and providing higher-level data. It highlights the importance of institutional reviews to ensure that priorities and the allocation of resources are appropriate for the carrying capacity of local environmental systems. Accordingly, monitoring should include adaptive designs that can respond to stakeholder concerns and changing planning priorities. The paper suggests that outcome- and process-based monitoring are important evaluative components as well as effective learning and motivational tools for stakeholders developing strategic plans and policies.
Evaluating Place-based Initiatives: Challenges, Recent Trends and Basic Questions in Planning the Evaluation
Sanjeev Sridharan
Sanjeev Sridharan goes back to evaluation basics, building on Ray Pawson’s observation that “nothing is so practical as a good theory”. The paper proposes the top ten questions that evaluators should ask before designing an evaluation of a place-based approach. The questions use a theory-based approach to documenting existing evidence and strategically choosing evaluation questions, shifting the evaluation from seeking to understand whether the program is working to understanding what it is about the program that makes it work, how, and under what conditions. A key question in this regard is how the evaluation will accommodate the different mechanisms employed in different sites. Learning is put front and centre, focusing on evaluation questions that will generate useful information, that can effectively be answered, and that contribute to the overall evidence base underpinning the initiative. This type of learning, it is argued, is very different from performance management, but is a core component of what evaluation should aim to achieve.
By surveying place-based evaluators, the paper documents common approaches including theory of change, developmental evaluation, and participatory evaluation. Emerging methods cited include observational studies, network analysis, respondent-driven sampling and system dynamics. Establishing a timeline for expected outcomes is also recommended as an evaluation technique to address the challenge of longer-term outcomes that cannot be documented within the current evaluation cycle.
References
Bellefontaine, Teresa and Robin Wisener. 2011. “Evaluation of Place-based Approaches: Questions for Further Research”. Policy Horizons Canada.
Bizikova, Livia, Swanson, Darren, and Dimple Roy. 2011. “Evaluation of Integrated Management Initiatives”. Working report prepared for Policy Horizons Canada.
Burstein, Meyer and Erin Tolley. 2011. “Exploring the Effectiveness of Place-based Program Evaluations”. Working report prepared for Policy Horizons Canada.
Policy Horizons Canada. “Evaluating Place-based Approaches: An Open Window?” Policy Brief.
Shepherd, Robert P. 2011. “Place-based Evaluation in a First Nations Context: Something Old, Nothing New, Often Borrowed, and Frequently Blue”. Working report prepared for Policy Horizons Canada.
Sridharan, Sanjeev. 2011. “Evaluating Place-Based Initiatives: Challenges, Recent Trends and Basic Questions in Planning the Evaluation”. Working report prepared for Policy Horizons Canada.