International conservation and development policy is awash in indicators. In 2000, the eight United Nations (UN) Millennium Development Goals included 18 targets with 60 indicators (UN 2008). Fifteen years later, the UN’s 2030 Agenda for Sustainable Development expanded this framework to 17 Sustainable Development Goals (SDGs) with 169 targets and hundreds of indicators designed to track progress through 2030 (UN 2015). From a measure on the “proportion of the population living under the national poverty line” to one on the “Red List Index of Threatened Species,” these indicators are already being used to monitor change over time in relation to specific goals and targets and are shaping policy discourse and the nature of program interventions in countries across the globe.
The expanding role of indicators forms part of a broader movement toward evidence-based policy and management that has coursed through virtually all sectors of economy, environment, and society. Drawing inspiration from medicine (e.g., Sackett et al. 1996), this movement seeks to streamline government processes, direct financial resources, and improve policy outcomes through decision making that limits the influence of politics and values in favor of reliance on evidence (Adams and Sandbrook 2013; Pawson 2006; Sanderson 2002). Efforts to increase reliance on evidence while eschewing overt political influence in policy making have spread to most development sectors over the past 15 years, including forestry. Advocates in conservation have led the way (Pullin and Knight 2001, 2003; Sutherland et al. 2004), but the trend has expanded to include forestry more generally (CIFOR 2016; Petrokofsky et al. 2011). Indeed, the forest sector has seen some of the most significant movement within international development in terms of indicator formulation even as actual use of these indicators has remained lacking (Grainger 2012).
Efforts to develop appropriate indicators and build the evidence base on what policies, programs, and practices are effective in delivering on sustainable development objectives face a difficult challenge in the forest sector: the results of forest conservation and management may take years, even decades, to materialize, but interventions usually last no more than five years. For example, the impacts of investments in tree planting or stand improvement are unlikely to be apparent for at least 10, and more likely 20 or more, years. Further, there may be a substantial lag between environmental impacts and socioeconomic ones. Is there a “crystal ball for forests” that might help address this challenge? How might scholars and decision makers better track and understand long-term impacts in the forestry sector?
This article addresses these questions through a review of the broad interdisciplinary literature that assesses forest conservation and management impacts on biodiversity conservation, climate change mitigation, and poverty alleviation in developing countries. It focuses particular attention on whether and how current research grapples with assessment of long-term impacts. We first provide an overview of indicators for sustainability in theory and practice. We then move to the heart of our review, which examines both applied and critical perspectives on the impacts of different kinds of forestry interventions, from industrial forest plantations to strictly protected areas. We summarize current trends and discuss similarities and divergences between studies representing these two perspectives. Our review concludes by highlighting some especially promising research frontiers for addressing the question of how to assess impacts for which the incubation period may be long. We develop the idea of identifying and using predictive proxy indicators (PPIs), measures that can be tracked over the short term but have the potential to predict the longer-term social-ecological impacts of forest conservation and management interventions. We argue that PPIs and other creative new approaches are needed to shed critical light on the implications of current policy and practice for a variety of possible forest futures, an important task as an era of new SDGs dawns.
Sustainability Indicators in Theory and Practice
Indicators—concise measures that provide information about the condition and trajectory of a system (Bell and Morse 2008)—are powerful forms of measurement that can focus attention on particular areas of interest, influence the content of development interventions, and galvanize support for particular issues or groups of people. Indicators are a technology of governance that shape human thought and action (K. Davis et al. 2012).
Indicators necessarily simplify complex social processes and phenomena into easily digestible, usually numerical, representations that can be used to compare, evaluate, and rank performance, as well as set standards against which to measure performance. Indicators can be used at multiple spatial and temporal scales, from national indicators like gross domestic product (GDP) that enable comparisons of economies across countries to micro-level indicators on tree seedling survival that provide information on specific plots of land.
An important aspect of indicators is their potential to illustrate change over time and to help predict future change (Garrett and Latawiec 2015). Analytically, indicators can also help clarify causal relationships, such as the relationship between forest ecosystem management and carbon storage or intergenerational well-being (Miller and Wahlén 2015), and can aid policy and decision making and the assessment of program impact (Garrett and Latawiec 2015).
A conceptual model can help clarify the role of indicators over time in the case of forestry interventions, particularly those supported by external funders (Figure 1). Planners identify goals, activities, and indicators, as well as resources to support implementation, in a preintervention phase. At this stage, the intervention begins to take shape, often using a “results framework,” which traces inputs through to outputs, outcomes, and ultimate impacts on development objectives (e.g., World Bank 2014). Such a framework can help to structure monitoring and evaluation (M&E) for a particular intervention. Given donor incentives, M&E almost always focuses on the period during implementation or immediately following it. Best practice dictates that baseline data on key indicators be collected at the start of an intervention, though in practice this step is often not taken or occurs only after implementation has begun (Ferraro 2009; IEG 2013). Information on selected indicators is then gathered at specific intervals during implementation and at the end of the intervention (see Figure 1). This focus on near-term results is typically replicated in the wider literature on the impacts of forest-related interventions. Few studies devote attention to the “afterlife” of forest policies, programs, and projects. Consequently, knowledge of whether and how intervention impacts persist, fade away, or change over the long term remains lacking.
As the international development community turns toward implementing the SDGs and measuring progress toward their achievement, policy makers, practitioners, and researchers alike have raised concerns about indicators, from the quality of data collected to the absence of indicators on critical topics or even entire geographic areas or groups of people. Commentators (e.g., Lu et al. 2015; UN 2016) underscore the importance of developing better indicators to allow for increased comparison across regions, countries, and subnational levels, as well as to shift from a quantitative focus toward a mixed methods approach that complements quantitative data with more qualitative indicators that can measure new areas.

Figure 1. Conceptual model of forestry intervention effects over time.
Critics in social sciences such as anthropology emphasize that indicators are not neutral. Rather, they are products of particular political-institutional cultures and represent and produce specific forms of knowledge (K. Davis et al. 2012; Merry 2011; Strathern 2000). Based on this discursive power, indicators operate as technologies of global governance that influence thought and action with consequences for the implementation and impacts—on society and the environment—of the policies, programs, and projects of which they are a part (K. Davis et al. 2012; Merry 2011).
For instance, when policy makers or conservation practitioners choose which indicators to measure or which information to include in reports, they make a choice about what data are shared and what are excluded, producing a seemingly “objective truth” in a subjective, selective manner (Wahlén 2014). The use of indicators also affects the process of standard setting and decision making. Kevin Davis and colleagues (2012) describe how indicators simplify complex social processes into more easily understood representations of reality that are then used for comparison or evaluation. They argue that this simplicity, which forms part of the appeal for policy makers, obscures a more complete—and complex—picture. As examined in earlier critical scholarship on large externally driven development programs (e.g., Ferguson 1994; Li 2007; Scott 1998), such simplifying measures help make populations and landscapes more legible and therefore controllable while at the same time shunting away discussion of the inherently political nature of indicator formulation and use. With time, indicators subsume difference to homogenize understandings of sustainability in ways that become taken for granted. For example, Laureen Elgert (2015) details how contestation over what measures should be used to market “responsible soy,” that is, soybean production that does not come at the expense of forest loss, is rendered invisible under simple indicators for certification.
In these ways, indicators can be seen as strengthening an “audit culture” (Strathern 2000) and the seepage of the corporate sector into the domains of state and civil society (Merry 2011). Indeed, indicators are understood as a paradigmatic tool in an era when neoliberal ideas and reforms suffuse so many aspects of environment and society, including conservation (Brockington and Duffy 2010; Igoe and Brockington 2007) and forestry more generally (Corson 2011; Humphreys 2009).
Assessing Forest Conservation and Management Impacts in Developing Countries
The literature assessing forest conservation and management impacts on biodiversity conservation, climate change mitigation, and poverty alleviation in developing countries is vast and growing rapidly. It encompasses a wide range of geographies, substantive foci, and theoretical and methodological approaches.
We identify two major strands of research in current scholarship. Following Chris Sandbrook and colleagues (2013), these can be distinguished broadly as social research for forest conservation and management and social research on forest conservation and management. The former, more applied literature seeks to influence and improve policy and practice in relation to forests. This literature tends to take the idea of an intervention as given, following the medical analogy of a treatment to a patient. It typically, though not always, relies on the use of quantitative data and statistical methods. This strand is based on a positivist epistemology and finds a disciplinary home primarily in economics and political science as well as crosscutting fields such as conservation biology, common property, and development studies. Much of this work relies on existing data sets and does not entail on-the-ground fieldwork.
The second strand in the literature aims to understand how forest-related interventions work as social practices and situates them in relation to larger social and political-economic processes and issues. This more critical work usually, but not always, uses qualitative and ethnographic methods, which require fieldwork in specific locations. This research often explores power dynamics in social relationships—including in relation to the environment as an object of study—and the importance of ideas and discourse in shaping behavior. In contrast to the more applied literature, this critical work tends to question the very idea of a discrete “intervention” with a clear start and end, seeing instead messy entanglements among different actors, including donors and “recipients.” Such research takes place largely in anthropology, geography, and sociology and allied interdisciplinary fields, such as political ecology.
Our review of this broad literature describes both strands and discusses disjunctures, as well as commonalities between them. We use traditional review methods (Jesson et al. 2011) to identify relevant studies, including studies using or critically engaging with the use of indicators and metrics to assess the effects of a range of forest-related interventions. We draw on primary studies and relevant extant reviews. Given our focus on long-term impacts, we do not claim our review is comprehensive of the literature on forest sector impacts1 even as we believe it provides an accurate sketch of trends in this broad area of inquiry. Our review includes examples from a range of forestry interventions, from production forestry to protected area (PA) establishment and management, but like much of the broader literature, we emphasize forest conservation, especially in relation to PAs.
Experimental and Quasi-experimental Impact Evaluation Approaches
The early part of the last decade was seminal for the field of impact evaluation on forest-related interventions. “Evidence-based conservation” was first articulated and developed during this period (Pullin and Knight 2001, 2003; Sutherland et al. 2004), and the desire to gain more and better evidence of intervention impacts led to calls for experimental/quasi-experimental impact evaluation techniques (Ferraro and Pattanayak 2006).
Before 2006, approaches to evaluate impacts mostly relied on M&E frameworks that placed very little emphasis on experimental or quasi-experimental designs and methods (Ferraro 2009; Sutherland et al. 2004). From the 1990s, approaches such as population monitoring, rapid assessments, and scorecards formed the principal tools used to understand the status of and change in social-ecological conditions in relation to forest conservation and management (Stem et al. 2005). These approaches primarily furnished monitoring information rather than summative evaluation data. They also suffered from the fundamental evaluation problem of attribution—that is, they were unable to answer the question: Were the changes chronicled due to the intervention or to other factors? This problem has continued to bedevil efforts to understand forest intervention impacts, as illustrated by a recent independent evaluation of the forests portfolio of the World Bank, the largest donor in the forestry space. That review concluded that “the monitoring and reporting systems of the World Bank forest sector operations are inadequate to verify whether its operations are supporting forest management in an environmentally and socially sustainable way” (IEG 2013: 101).
The World Bank and other donors and actors in the forestry sector have increasingly recognized this stubborn problem and taken steps to address it. Awareness of the potential value of impact evaluation has become more widespread within the international forestry community over the past several years. For example, the use of experimental (e.g., randomized control trials, or RCTs) and quasi-experimental methods (henceforth, we refer to these two kinds of methods as IE, for impact evaluation) to evaluate conservation policies and programs has increased considerably during this period, though the overall number of such evaluations remains relatively small (Figure 2). These approaches have gained traction because they are judged to provide the most robust strategy for estimating intervention effects, accounting (controlling) for confounding factors and other biases that impede attribution (Puri and Dhody 2016). Experimental approaches identify the causal effect of a policy intervention through random assignment of treatments or alternative causes over a range of experimental conditions. Quasi-experimental approaches, in contrast, strive to approximate random assignment by carefully ruling out alternative causes through study design or matching procedures (Ferraro 2009; Ferraro and Pattanayak 2006).
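To make the logic of these quasi-experimental designs concrete, the sketch below illustrates a simple matching estimator on entirely hypothetical data; the variable names, covariates, and effect sizes are ours for illustration and are not drawn from any study cited here. Each protected parcel is paired with the observably most similar unprotected parcel, and the average outcome gap across matched pairs serves as the estimated effect of protection.

```python
# Minimal sketch of a matching-based impact evaluation (hypothetical data).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))              # parcel covariates: slope, road distance, baseline cover
p_protect = 1 / (1 + np.exp(-X[:, 2]))   # protection more likely on high-baseline-cover parcels
treated = rng.random(n) < p_protect      # non-random assignment, so naive comparisons are biased
outcome = 0.5 * X[:, 2] + 2.0 * treated + rng.normal(size=n)  # change in forest cover

# Match each protected parcel to its nearest unprotected parcel on covariates.
nn = NearestNeighbors(n_neighbors=1).fit(X[~treated])
_, idx = nn.kneighbors(X[treated])
matched_controls = outcome[~treated][idx.ravel()]

# Average treatment effect on the treated: mean gap across matched pairs.
att = (outcome[treated] - matched_controls).mean()
naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"Naive difference: {naive:.2f}; matched estimate: {att:.2f} (true effect: 2.0)")
```

In practice, conservation IEs use far richer covariate sets, propensity scores, and sensitivity analyses, but the underlying logic of constructing a credible counterfactual comparison is the same.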
Figure 2 shows an increase in peer-reviewed impact evaluations of forest conservation interventions over the past decade and a half. Ecological impacts have been much more commonly studied than social or human well-being impacts. Typical indicators of ecological impact include changes in forest or tree cover or condition, extent of resource use, and fire frequency (e.g., Andam et al. 2008; Nelson and Chomitz 2011; Nolte and Agrawal 2013; Shah and Baylis 2015). Much less common are more refined indicators of ecological change, such as species-level or forest degradation measures (Miller 2013; Vincent 2016).
Those studies that have examined social outcomes have typically focused on understanding socioeconomic or poverty-related effects of conservation programs (McKinnon et al. 2016). Given that these impacts are more readily amenable to quantification than some other aspects of human well-being, it is not surprising that IE studies have emphasized them. In this literature, household surveys are the primary means by which scholars have constructed indicators of socioeconomic impact such as wealth ranking, asset valuation, and access to employment, education, water, and electricity (e.g., Clements et al. 2014; Miranda et al. 2016).
Figure 3 shows the geographic distribution of studies evaluating the impacts of forest conservation interventions on human well-being and ecological outcomes. Most striking is how few countries are covered by even one IE study on the human welfare impacts of forest-related interventions. Through early 2017, we could identify only 17 relevant studies published in peer-reviewed journals. Only four countries included more than one such study, and relevant

Figure 2. Evolution of (quasi-)experimental forest conservation impact evaluations, 2001–2016.2

Figure 3. Geographic distribution of (quasi-)experimental studies evaluating the impacts of forest conservation interventions on (a) human well-being outcomes, 1990–2015, and (b) ecological outcomes, 2001–2016.3
By comparison, IEs of ecological impacts are more common, if still limited. Of the 33 such studies, half come from just four countries: Costa Rica (7), Indonesia (4), Thailand (3), and Brazil (3). Most parts of the world have not been touched by these evaluations. Again, there have been no published forest intervention impact evaluations across most of the Africa and Asia-Pacific regions.
There are at least two reasons for the greater frequency and geographic spread of IE studies examining ecological impacts. The first relates to technological changes. Advances in remote sensing have enabled the collection of more detailed data on ecological outcomes, and increasing computing power has facilitated quantitative analysis of large data sets (i.e., “big data”). Many of these data sets cover comparatively long time periods and are now widely available for analysis. Technological changes have had less influence on the collection of socioeconomic data, though recent studies (e.g., Jean et al. 2016) suggest this situation is changing rapidly. A second reason for the preponderance of ecologically focused studies is the higher cost of collecting household data in far-flung areas of developing countries. Scalable, consistent data over long time periods on human well-being indicators in such areas remain quite rare. Nevertheless, in the context of new donor strategies and emphasis on interlinkages among the SDGs, there is increasing demand for studies examining both social and ecological impacts. A growing number of studies suggest this demand is beginning to be met (e.g., Alix-Garcia et al. 2012; Ferraro et al. 2011; Miranda et al. 2016; Pfaff et al. 2014; Scullion et al. 2014).
IE studies have not only grown in number and geographic coverage over time; they have also evolved substantively. IE studies on conservation and forest management are increasingly moving from simply reporting the estimated quantitative difference a given intervention has made to a given outcome indicator (the “average treatment effect”) to examining how impacts differ across subpopulations. For example, Katharine Sims (2010) found higher inequality on average for communities near national parks, suggesting higher gains from protection for rich households. Experimental (Jayachandran et al. 2016) and quasi-experimental (Alix-Garcia et al. 2012) studies have found that payments for ecosystem services (PES) programs in Uganda and Mexico, respectively, were effective in avoiding deforestation. IE studies have also increasingly sought to identify causal mechanisms linking the intervention to ultimate impacts (e.g., Canavire-Bacarreza and Hanauer 2013; Ferraro and Hanauer 2014; Ferraro and Pressey 2015). These studies have begun to explore what happens within the “black box” of program and policy implementation that leads to observed outcomes at the end of the program period. For instance, Paul Ferraro and Merlin Hanauer (2014) estimated the individual contributions of various mechanisms affecting poverty in and around Costa Rica’s PAs. They found that, on average, PAs reduced poverty and that two-thirds of the total reduction could be attributed to opportunities created by tourism.
Critical Approaches to Forest Impact Evaluation
An important literature taking a more historically informed and critical view of forest conservation and management interventions has proceeded in parallel with the more applied studies described above. This strand of scholarship emphasizes qualitative and ethnographic methods such as key informant interviews, focus group discussions, participant observation, and discourse analysis, usually in specific geographic sites, and devotes special attention to power relations and discourse. Data from Madeleine McKinnon and colleagues’ (2016) systematic map of the impact of conservation interventions on human well-being provide a sense of the broad trends in this literature (as well as in the more quantitative literature reviewed above). Figure 4 charts all studies of interventions in forest biomes identified in that map over the past 25 years, showing the frequency of study designs classified as qualitative, quantitative, or mixed methods. Qualitative studies can be used as a proxy for studies of a more critical bent, recognizing that research using qualitative methods need not always take such an approach.
The overall trend for all three types of studies is upward. It is noteworthy that qualitative studies on the impacts of forest conservation interventions on human well-being were the first to be published and preceded by several years the first quantitative or mixed methods studies. A volume of qualitative and critical studies has accumulated such that reviews have now

Figure 4. Frequency of quantitative, qualitative, and mixed methods approaches to assessment of forest conservation on human well-being, 1990–2013.4
The production of qualitative studies on the social impacts of forest conservation has increased steadily, though like the other types of studies on this topic, it dipped in the most recent year for which full data were available (see Figure 4). Despite the growth in quantitative and mixed methods approaches, qualitative studies remain the most prevalent type of assessment of the social impacts of forest conservation. The number of qualitative studies may be even higher given that books and monographs were largely left out of the systematic map, as is common in such evidence syntheses. This gap in evidence is potentially significant given the large number of seminal and influential book-length critiques of conservation and other forest-related interventions.
Disjunctures between Critical and Applied Approaches
Here we highlight three disjunctures between critical and applied research: the type of impacts studied, the place of theory, and the stance toward reliance on specific indicators.
First, these two streams of literature emphasize different outcomes. To generalize, the critical literature has tended to focus on social impacts, while the applied literature has focused more on environmental ones. Studies in the more critical vein have often found that local environmental outcomes of conservation are artifacts of socioeconomic and political drivers and processes at multiple and higher scales that affect costs and benefits at local levels. A major implication of this perspective is that the local success or failure of a conservation program, in terms of ecological or social outcomes, is often attributable to factors outside that local context. This emphasis also means that the short- or long-term success of any conservation policy depends on the unfolding and interplay of socioeconomic and political factors and processes operating at local, national, and global scales.
Much of the qualitative, critical literature on the social impacts of conservation efforts has found them to be negative and highly detrimental to local people, local institutions, and local ecology. Though positive findings do exist in this scholarship (Holmes and Cavanagh 2016), it has often seen conservation efforts as a ploy to increase state power through bureaucratic expansion in forest domains (e.g., Neumann 1998; Peluso 1993). Such conclusions contrast with findings from literature that relies on quantitative methods or IEs, which appears more likely to find that conservation and protection efforts reduce poverty. IE studies in different contexts have found that PAs, PES schemes, and integrated conservation and development projects (ICDPs) alleviate poverty (e.g., Andam et al. 2010; Arriagada et al. 2015; Bauch et al. 2014). The divergent conclusions that tend to characterize these two bodies of literature may be due to differences in methods and in the scale and type of analysis, or to different disciplinary incentives such as the likelihood of publishing positive findings in some journals compared to others.
Comparison of IEs and more critical studies from India illustrates this disjuncture and its implications for consideration of long-term impacts. Using an econometric approach, E. Somanathan and colleagues (2009) found that community-managed forests in the Indian Himalayas are not only generally better conserved than state-managed ones but are also cheaper to manage. These results led them to argue that decentralized forest management should be given wider attention and focus in developing countries that lack sufficient financial resources for conservation. However, more critical studies suggest that community management practices may only be superficially adopted by communities and promulgated by the state, often to keep up appearances to obtain funds from donors who prioritize decentralized, local participation (Nayak and Berkes 2008). Socioeconomic fissures within communities and a quest for wage generation may also dilute the useful benefits or pro-forest attitudes thought to be obtained from sui generis community participation in local resource management (Baviskar 2004; Rangan 1997). Recent quantitative evidence from the region also suggests that external payment for participation may crowd out more intrinsic motivations to conserve forest resources (Agrawal et al. 2015). Other, more critical studies (e.g., Nayak and Berkes 2008) suggest that community forest management (CFM) in some villages may have detrimental effects on neighboring villages because of decline in resource availability. Such a result would bring high costs for the poorest, most forest-dependent people5 in “overlooked” neighboring villages, contradicting the low-cost conservation argument Somanathan and colleagues (2009) have made.
Differences in the operationalization of scale constitute another reason for disjunctures in focus and outcome between critical and applied approaches. Qualitative studies tend to be carried out in specific sites at a local scale, such as a village, local government, or a cluster of these units. In contrast, applied approaches typically study large samples of forests, PAs, or land parcels in order to address the problem of selection bias. Both approaches have their limitations: the small sample sizes of qualitative studies can limit their generalizability to other areas and contexts, while ignoring local context and the factors that mediate cause-effect relationships may undermine the robustness and relevance of quantitative studies. A combination of these two perspectives has the potential to produce deeper, richer, and more generalizable insights about the phenomenon of study while still retaining information about its external validity in wider contexts. A focus on the issue of selection bias may help in the challenging task of integration: larger-scale quantitative studies seek to mitigate such bias explicitly in their identification strategies, while more focused, qualitative studies may embrace it as fertile ground for detailed, new understanding.
The second disjuncture we highlight relates to theory. One critique of current IE approaches is that they often are not based on a compelling theory of why the impact being studied might occur. Ever more precise quantitative estimates of impacts are not matched by similar attention to theory about the processes generating those impacts. For example, Arun Agrawal (2014) highlights the relative lack of theory relating to PAs, a major focus of IEs in the forest sector today. A unified body of theory on the creation, management, and impacts of PAs comparable to that on CFM does not yet exist. Robust conceptual frameworks for CFM and common pool resource management more generally have been developed, often based on qualitative case study information, and tested over the past several decades (e.g., Agrawal 2001; Ostrom 1990, 2009). More critical work (e.g., Goldman 1997) has also contributed to the development of theory relating to CFM processes and outcomes. The point is that without explicit theory, IE researchers are forced to rely on piecemeal tests of causal effects and a search for “possible mechanisms to explain estimated effects based on context-specific knowledge of the PA, country, or region” (Agrawal 2014: 3909).
For this reason, some argue that theory may be just as important as evidence. Ben Cashore and colleagues, for instance, recommend careful documentation and analysis of “pathways of influence” when trying to understand the likely durability of forest policies and their effects (Bernstein and Cashore 2012; Cashore et al. 2016). Such analysis takes the past seriously but is based on plausible scenarios for how change has taken place and could take place, and it aims to enable learning among policy communities.
The third disjuncture we identify relates to the place of indicators. Critical perspectives do not take indicators at face value but focus attention on their social construction. Such perspectives highlight how indicators convert complex social realities into seemingly objective and unambiguous measures (K. Davis et al. 2012). Classic examples relevant to this review include the development of scientific forestry in Europe (Scott 1998) and in Java (Peluso 1992). Crucially, the process of quantifying complex social phenomena involves a range of subjective judgments, assumptions, values, and theories of the world. At the same time, the numerical form of indicators and their aura of objectivity conceal these interpretations and the theories embedded in them. As a result, “the outcomes appear as forms of knowledge rather than as particular representations of a methodology and particular political decisions about what to measure and what to call it” (Merry 2011: S88). Such considerations are beyond the scope of most applied approaches, which, even while they may recognize limits of indicators, take them as given.
Critical approaches often devote significant attention to examining the underlying quality of data in order to judge what can reasonably be inferred from them. Morten Jerven (2013), for example, highlights the poor quality of many indicators commonly used to assess economic development in Africa. He concludes that high-quality data in development, especially across countries, are scarce, and recommends careful attention to what we ask existing data to do. This finding has potentially serious implications for IE approaches that rely on such quantitative or simplified data. It suggests the importance of researchers, policy makers, and others interrogating the data they use to understand potential limitations, regardless of the approach used in collecting the data. Users of data, both quantitative and qualitative, should ask who made a given observation and under what conditions. Failure to ask such questions invites potentially harmful disconnects between numbers and reality on the ground.
Commonalities between Critical and Applied Approaches
Our review suggests two key commonalities in critical and applied research on indicators and impacts in the forest sector. First, neither applied nor critical perspectives devote significant attention to longer-term impacts, focusing instead on shorter-term impacts and outcomes. Second, both strands of literature tend to neglect sequencing and relationships among outcomes over time. Below we briefly elaborate on these two points and highlight exceptions that do explicitly consider longer-term time horizons and future states.
Both applied and critical perspectives largely ignore longer-term impacts, focusing instead on impacts occurring in the near term following a given intervention (see Figure 1). By definition, “evidence-based” policy requires information from the past. The current generation of IE studies focuses on data collected at some previous time, usually at preintervention and immediate postintervention moments. The past is an unclear guide to the future, however, particularly a narrow, extracted view of the past based on specific indicators. Thus, the relevance to the future of findings from ever more rigorous impact evaluations remains unclear. There are some exceptions, such as studies that measure socioeconomic and environmental outcomes more than 25 years after the establishment of PAs (Andam et al. 2010). Such work remains rare, however, and often relies on panel data from two time periods, which provide little information on the temporal dynamics of development and conservation intervention impacts. Knowing, for example, that the immediate effects of an intervention may have been negative but became positive over the long term would provide a step change in our understanding.
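A stylized example, using entirely simulated data, illustrates why richer temporal coverage matters: when outcomes are observed every year for treated and comparison communities, an effect that starts negative and turns positive can be traced directly, whereas a single before-and-after comparison would miss the reversal.

```python
# Simulated sketch: tracing an intervention effect year by year (hypothetical data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
years = np.arange(0, 11)                  # years since intervention
true_effect = -0.5 + 0.3 * years          # short-term loss, long-term gain
rows = []
for group, is_treated in (("treated", True), ("control", False)):
    for t in years:
        mean = 1.0 + 0.1 * t + (true_effect[t] if is_treated else 0.0)
        rows.append({"group": group, "year": int(t),
                     "outcome": rng.normal(mean, 0.05, size=200).mean()})
df = pd.DataFrame(rows)

# Year-by-year gap between treated and control communities.
gap = (df.pivot(index="year", columns="group", values="outcome")
         .eval("treated - control"))
print(gap.round(2))                        # sign flips from negative to positive
```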
A similar point about focusing on near-term impacts can be made for many critical studies, even as their evidence base is different. Implicit suggestions about the potential relevance of findings to the future can be found in both kinds of studies, but explicit reflection on postintervention impacts remains rare. When researchers do examine such impacts, they tend to consider forest cover or other ecological outcomes rather than socioeconomic impacts, which may take longer to materialize. The interplay of complex and multiple causes in an environment of uncertain social, economic, and ecological conditions means that the long-term impacts of, for example, a forest conservation policy or program can be very different from short-term impacts (Miteva et al. 2012). Scholars have, therefore, recognized the need to evaluate multiple outcomes from forest conservation and management over time (e.g., Agrawal and Benson 2011; Agrawal and Chhatre 2011). However, our review suggests that empirical work in response to such calls is uncommon.6
Some researchers recognize the importance of paying attention to longer-term impacts while simultaneously noting how few institutions and organizations have traditionally measured or tracked impacts over time in a systematic way. For instance, Mark Buntaine and colleagues (2017) find that, of all the Organization for Economic Cooperation and Development (OECD) Development Assistance Committee members, only the Japan International Cooperation Agency (JICA) has a program to monitor its projects’ impacts after completion. Even fewer conservation or development institutions investigate the extent to which project outcomes are sustained over time (Myers et al. 2014; Woolcock 2013). Typically, case studies track project outcomes over time in one particular project or one particular area. While potentially useful in drawing attention to the need for more rigorous evaluation and study over time, including the development of theory and of longer-term indicators that can be used to tease out long-term impacts and outcomes, the focused nature of these studies makes it difficult to generalize beyond particular sites.
The few studies that do consider longer-term impacts underscore the importance of postproject evaluation in understanding the factors leading to durable results and in determining project sustainability. Bronwyn Myers and colleagues (2014) illustrate how assessing the long-term impacts of a project several years after its completion can uncover different results than those found at project closure. In their example from Indonesia, district officers trained in community fire management by a project applied these skills elsewhere after the project ended, a finding highlighting impacts that were only visible over the long term. The researchers further suggest that project indicators designed before project implementation or at the early stages of a project may not fully capture the project’s role within local development trends, because such indicators are designed to capture anticipated or targeted outcomes and therefore miss unintended impacts or outcomes.
Other scholars have also shown how postproject assessments may uncover different results than studies conducted at project closure. In Costa Rica, for example, Bruno Locatelli and colleagues (2008) found differences in the income-related impacts of a program on landowners over the short, medium, and long term. To illustrate, upper-class landowners experienced slightly negative impacts on their income in the short term but positive impacts over the medium and long term, because the entry and investment costs this group incurred at the beginning of the project paid off over the longer horizon.
Christie Lam and colleagues (2016) use surveys from displaced and nondisplaced households in a rural Nepalese community 6 years and 13 years after relocation due to the expansion of a strictly protected conservation area. By showing that displaced households fare better than nondisplaced households in terms of food security and land productivity in the longer run but experience a breakdown of social ties and erosion of traditional safety nets, Lam and colleagues demonstrate how longer-term, mixed methods studies can generate a more complete picture of impacts and outcomes. This study also raises questions of whether and how indicators can capture existing power relations and social hierarchies that influence conservation outcomes, and it underscores how combining qualitative and quantitative methods can reveal social effects of conservation that are not necessarily quantifiable or cannot be captured through econometric modeling.
Other scholarship suggests the importance of appropriate incentive structures to ensure that organizations consider long-term impacts and outcomes. Catherine Wahlén’s (2014) study in Papua New Guinea illustrates how a nongovernmental organization’s (NGO) focus on representing projects as successful to donors created pressure for NGO staff to emphasize particular representations of reality in their project reports and discouraged them from taking the time to more critically reflect on what actually works well in practice. As a result, the NGO focused on shorter-term gains and missed an opportunity to adapt its activities and efforts to achieve longer-term results.
Pressures to deliver “success” in the short term can have a variety of consequences, including at the project design phase, where the form of implementation may be dictated more by specific indicators and evaluation strategies than by on-the-ground exigencies. For example, research on forestry carbon projects in Sierra Leone and Ghana shows how measurement processes have been designed to fit within long-standing assumptions about deforestation, which lead toward strict forest protection measures or forest plantations rather than more nuanced approaches that consider carbon in the context of diverse human-forest relationships (Leach and Scoones 2013). Recent research from East Africa highlights discursive change around REDD+ projects designed to appeal to donors, but notes continuities in practice (Lund et al. 2017).
A second commonality between the applied and critical literatures on forests, indicators, and impacts is the largely unexplored nature of the sequencing and interaction of ecological and social outcomes over time. Scholars generally tend to focus primarily on one outcome or a narrow set of indicators rather than explore how multiple outcomes or indicators interact to produce particular impacts, though there are exceptions (Agrawal and Benson 2011; Agrawal and Chhatre 2006). Evidence-based IE studies also lack a focus on how socioecological interactions and outcomes unfold over time and influence project outcomes. There are some notable exceptions, however, such as a study examining the trade-offs between short-term and long-term benefits, spatially and temporally, in a mangrove protected area in Tanzania (McNally et al. 2011). The authors show that households living in the study area experienced immediate losses in consumption because of restricted forest resource access, but over time gained income and other benefits from increases in shrimping and fishing as a result of mangrove protection.
The scarcity of long-term, postproject studies is due in part to the challenges of capturing and analyzing longer-term impacts and explaining relationships between outcomes over time. For instance, long time lags between interventions and results represent a barrier to using experimental and quasi-experimental methods (Ferraro 2009), which may help explain why case studies and other qualitatively oriented designs are disproportionately represented among analyses of longer-term impacts.
Frontiers in Assessing Long-Term Impacts: Predictive Proxy Indicators
Forestry interventions come and go. They are revised, are supplanted, and fade away. Sometimes they persist under new guises and in related forms. But as the foregoing suggests, careful investigation of postintervention impacts remains rare. This point holds true especially for more applied, quantitative research but is also often the case for more critical perspectives. One promising approach for generating better evidence on the effectiveness of forestry interventions is the use of predictive proxy indicators (PPIs). Such indicators aim to provide credible information about a future change or state based on observed historical or near-term evidence. Simply put, PPIs are measures of outcomes taken during program implementation that are predictive of longer-term impacts.
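Stated a bit more formally, in notation we introduce here purely for illustration: an indicator $X_t$ observed during or soon after implementation (time $t$) is a candidate PPI for a long-term impact $Y_T$, with $T \gg t$, if, conditional on baseline characteristics $Z$,

\[
\mathbb{E}[\,Y_T \mid X_t, Z\,] \neq \mathbb{E}[\,Y_T \mid Z\,],
\]

and this predictive relationship is grounded in an explicit theory of change rather than in coincidental correlation.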
The advantages of PPIs are that they are explicitly forward-looking and can provide information on which types of intermediate targets pursued as part of forest-related interventions show a strong association with longer-term impacts. They are especially relevant in the context of interventions of relatively short duration (three to five years), where full results may not accrue until well after implementation but stakeholders desire evidence on likely impacts. Aside from this accountability function, PPIs may be attractive to decision makers and other stakeholders because of how they are developed. Specifically, PPIs are based on a plausible theory of change explaining why a given indicator or set of indicators is likely to predict a particular change or state as a result of an intervention, and careful development of such theory can strengthen program design. In addition, PPIs can help bridge the practitioner-research divide that characterizes indicator development in the forestry and broader sustainability agendas (Rasmussen et al. 2017). Robust, socially relevant PPIs are more likely to emerge from a synthesis of theoretical perspectives and expert opinion, as recent research on indicators for forest management in Victoria, Australia (Ford et al. 2017), and within the World Bank’s forest portfolio (Miller and Wahlén 2015) suggests.
Despite their potential, PPIs in the forestry sector remain rare. Recent research from other sectors and from forestry itself suggests that it is possible to identify credible PPIs. For example, evaluation in education has found that teacher quality is associated with a range of long-term benefits. Using two decades of data on more than one million children in the United States, Raj Chetty and colleagues (2014) found that student assignment to high “value-added” teachers (measured by student test scores) in grades 4 through 8 predicted long-term outcomes such as future earnings, college attendance, and teenage birth rates. A recent Program on Forests (PROFOR) study has identified a set of theory-based PPIs for development objectives in the forestry sector, including poverty reduction, biodiversity conservation, and climate change mitigation (Miller and Wahlén 2015).
The PROFOR study compiled the indicators used in a representative sample of projects from the World Bank’s forestry portfolio from 1990 to 2013 and scored them for their predictive potential based on “applied forward reasoning” (Levin et al. 2012), that is, whether they implied a plausible logic or theory of change for why a given indicator or set of indicators have predictive power. Results were discussed in a workshop of World Bank staff and other experts, and then further refined to identify clusters of indicators that appeared to have potential as PPIs. For example, an intervention seeking to promote sustainable forest-related income would include indicators on people in target communities with increased monetary or nonmonetary benefits from forests, people in target communities with secure access and use rights, and the extent to which forest extraction activities align with biodiversity-friendly management practices. All three indicators are necessary to track not only whether forest-related income increases but also if relevant communities have incentives to invest in sustaining such income based on secure property rights and if the activities that generate it are ecologically sustainable over the long term. Together, these indicators capture key factors that should enable relevant institutions and practices supported under the intervention to endure after its completion (Agrawal 2001).
Single, “silver bullet” indicators are likely to be difficult if not impossible to identify for many kinds of forestry interventions, but clusters of indicators such as the example above may yield greater predictive power (Miller and Wahlén 2015). Even as such indicator clusters are necessarily a simplification, their potential effectiveness comes from taking seriously critiques of sustainability indicators generally as insufficiently attentive to power dynamics and prone to dangerous oversimplification. For example, they may take into consideration indicators not directly linked to project goals, such as one measuring security of land tenure or power relations among project participants, in a project designed to achieve biodiversity outcomes.
Current work has identified a set of theory-based indicators, but more research is now needed to empirically test potential PPIs, including by collecting postproject data to evaluate projects and indicators over time. Doing so has important implications not only for theorizing and shedding empirical light on longer-term impacts in the forestry sector but also for development policy and practice. For example, empirically validated PPIs might be used to help inform the design and implementation of forestry and other development investments and to monitor progress toward the SDGs. Already, some early hypothesized PPIs are beginning to be used in World Bank forestry and other donor projects (Miller and Wahlén 2015).
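As a sketch of what such empirical testing might look like, the hypothetical example below treats a cluster of midterm indicators, loosely modeled on the forest-income cluster described above, as predictors of a long-term outcome observed years after project closure and asks how much out-of-sample predictive power they carry. All variable names and data here are invented for illustration and do not come from the PROFOR study or any other source cited in this article.

```python
# Hypothetical sketch of testing candidate PPIs: do midterm indicator values
# predict a long-term outcome measured after project closure?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_projects = 400

# Midterm indicator cluster (invented): forest-income benefits, tenure security,
# and alignment with biodiversity-friendly practices, each scored 0 to 1.
midterm = rng.random((n_projects, 3))

# Long-term outcome (invented): whether forest-related income gains persisted a
# decade later; generated so that all three indicators matter jointly.
signal = 3.0 * (midterm.sum(axis=1) - 1.5) + rng.normal(0, 1, n_projects)
persisted = (signal > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    midterm, persisted, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Out-of-sample predictive power of the indicator cluster (AUC): {auc:.2f}")
```

A high out-of-sample score would lend empirical support to the theorized cluster; a low one would signal that the theory of change, the indicators, or both need revisiting.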
While PPIs offer a potentially valuable and cost-effective opportunity for monitoring longer-term outcomes, there are some limitations to their use. PPIs, like indicators generally, are often “thin” (Miller and Wahlén 2015) and do not allow a richly textured understanding of complex development processes. Data collection on multiple indicators is more time- and resource-intensive, and it may be difficult to compile the retrospective data necessary on all PPIs of interest in order to test their predictive capacity empirically, particularly given the paucity of information available on postintervention impacts. Given imperfect data and limited data collection resources, however, PPIs present a promising approach for advancing knowledge of longer-term impacts in the forestry sector and beyond.
Conclusion: Peering into Forest Futures
The future will always be clouded with some uncertainty, and obscurity increases as longer time periods are considered. Given the long period of time it often takes for the impacts of forest-related interventions to materialize, the challenge is to develop better ways to understand possible forest futures in the near term. Our review suggests we are some distance from effectively meeting this need and that finding any kind of “crystal ball for forests” is a daunting task. Yet the diverse literature we have surveyed here does provide some insights into promising future directions. Even as the bulk of the literature looking at the social-ecological impacts of forest conservation and management does not explicitly consider longer-term results, there are important exceptions. We believe that laying out the problem of near-term needs for longer-term understanding of impacts in the forest sector, as we have done here, is an essential step in a more concerted effort to address this problem. Highlighting studies that have begun to grapple with this challenge also helps point the way ahead.
We argue that future research in this area should include a much more insistent focus on the nature of impacts over time. Such a focus should include a careful consideration of the nature of different types of impacts, their stability (or lack thereof) in time and space, their relationship and interaction with other kinds of impacts, how long they may be expected to last, and factors that shape their intensity and persistence over time. Generally, both critical and more applied approaches to impact assessment would be strengthened by more attention to the temporal dimensions of impacts. Enabled by advances in remote sensing and other technologies, recent research has often emphasized spatial aspects. It is time for more concerted efforts to understand temporal ones.
There is a particular need to document and analyze “project afterlives” alongside project development and implementation. Engagement with particular places over long periods of time and more historically oriented research can help in this. Recent work to revisit rural places years and even decades after they were originally studied (Rigg and Vandergeest 2012) and to remeasure development project impacts in the postintervention period (Buntaine et al. 2017) is exemplary in this regard. Other promising avenues for gaining insights into long-term impacts include further development of a historical political ecology (D. Davis 2009; Mathevet et al. 2015; Offen 2004), application of the “pathways of influence” framework for policy learning (Bernstein and Cashore 2012; Cashore et al. 2016), and empirical validation of PPIs (Miller and Wahlén 2015). Finally, we find encouragement in the broader trend toward greater collaboration across the quantitative-qualitative evaluation divide to explore historical trajectories and identify mechanisms linking forest-related interventions to impacts over the long term, which may have some generalizability across different social-ecological contexts. These and other creative new approaches are needed to shed critical light on the implications of current forest policy and practice for a variety of possible forest futures.
ACKNOWLEDGMENTS
We thank Katia Nakamura and Rea Zaimi for excellent research assistance, and Katherine Manchester for inspiring the title. Conversations with Arun Agrawal, Mark Buntaine, Ben Cashore, Anders Jensen, and Anthony Waldron and comments from audiences at the 2015 FLARE (Forests and Livelihoods: Assessment, Research, and Engagement) network meeting in Paris and the 2016 American Association of Geographers Annual Meeting in San Francisco helped to shape our thinking and strengthen this manuscript. We thank Dan Brockington, Johan Oldekop, and three anonymous reviewers for their perceptive comments on an earlier draft. This research was supported by the Program on Forests (PROFOR), project no. 145206, and the USDA National Institute of Food and Agriculture, Hatch project no. 1009327.
NOTES
We note that our review concentrates on peer-reviewed publications, particularly from the past five years, and does not delve deeply into gray literature such as working papers and agency reports.
Data not available for Mexico and for the year 2016 on human well-being outcomes. Data drawn from McKinnon et al. (2016); Miteva et al. (2012); Puri and Dhody (2016); and authors’ searches. We used Google Scholar, Web of Science, and citations in known IE studies to identify the peer-reviewed articles to populate this figure, as well as Figures 3 and 4.
Data not available for Mexico and for the year 2016 on human well-being outcomes. Data drawn from McKinnon et al. (2016); Miteva et al. (2012); Puri and Dhody (2016); and authors’ searches. We note that Figure 3 does not include data from two large-sample cross-national IE studies: Joppa and Pfaff (2011) and Nelson and Chomitz (2011).
Data drawn from McKinnon et al. (2016); Miteva et al. (2012); Puri and Dhody (2016); and authors’ searches.
Newton et al. (2016) direct attention to the multiple ways in which the term “forest dependent” is used. Here we wish to indicate reliance on forests for basic livelihood needs.
Not all interventions in the forestry sector pursue objectives that require a long time horizon. For example, fire prevention may be a pressing stated objective even if long-term forest health is the ultimate goal. The near-term urgency of some objectives may help explain the focus of some impact studies.
REFERENCES
Adams, William M., and Chris Sandbrook. 2013. “Conservation, Evidence and Policy.” Oryx 47 (3): 329–335. doi:10.1017/S0030605312001470.
Agrawal, Arun. 2001. “Common Property Institutions and Sustainable Governance of Resources.” World Development 29 (10): 1649–1672.
Agrawal, Arun. 2014. “Matching and Mechanisms in Protected Area and Poverty Alleviation Research.” Proceedings of the National Academy of Sciences 111 (11): 3909–3910. doi:10.1073/pnas.1401327111.
Agrawal, Arun, and Catherine S. Benson. 2011. “Common Property Theory and Resource Governance Institutions: Strengthening Explanations of Multiple Outcomes.” Environmental Conservation 38 (2): 199–210. doi:10.1017/s0376892910000925.
Agrawal, Arun, and Ashwini Chhatre. 2006. “Explaining Success on the Commons: Community Forest Governance in the Indian Himalaya.” World Development 34 (1): 149–166.
Agrawal, Arun, and Ashwini Chhatre. 2011. “Against Mono-consequentialism: Multiple Outcomes and Their Drivers in Social-Ecological Systems.” Global Environmental Change 21 (1): 1–3.
Agrawal, Arun, Ashwini Chhatre, and Elisabeth R. Gerber. 2015. “Motivational Crowding in Sustainable Development Interventions.” American Political Science Review 109 (3): 470–487. doi:10.1017/S0003055415000209.
Alix-Garcia, Jennifer M., Elizabeth N. Shapiro, and Katharine R. E. Sims. 2012. “Forest Conservation and Slippage: Evidence from Mexico’s National Payments for Ecosystem Services Program.” Land Economics 88 (4): 613–638.
Andam, Kwaw S., Paul J. Ferraro, Alexander Pfaff, G. Arturo Sanchez-Azofeifa, and Juan A. Robalino. 2008. “Measuring the Effectiveness of Protected Area Networks in Reducing Deforestation.” Proceedings of the National Academy of Sciences 105 (42): 16089–16094. doi:10.1073/pnas.0800437105.
Andam, Kwaw S., Paul J. Ferraro, Katharine R. E. Sims, Andrew Healy, and Margaret B. Holland. 2010. “Protected Areas Reduced Poverty in Costa Rica and Thailand.” Proceedings of the National Academy of Sciences 107 (22): 9996–10001. doi:10.1073/pnas.0914177107.
Arriagada, Rodrigo A., Erin O. Sills, Paul J. Ferraro, and Subhrendu K. Pattanayak. 2015. “Do Payments Pay Off? Evidence from Participation in Costa Rica’s PES Program.” PLoS One 10 (7): e0131544. doi:10.1371/journal.pone.0131544.
Bauch, Simone C., Erin O. Sills, and Subhrendu K. Pattanayak. 2014. “Have We Managed to Integrate Conservation and Development? ICDP Impacts in the Brazilian Amazon.” World Development 64 (S1): S135–S148. doi:10.1016/j.worlddev.2014.03.009.
Baviskar, Amita. 2004. “Between Micro-politics and Administrative Imperatives: Decentralisation and the Watershed Mission in Madhya Pradesh, India.” The European Journal of Development Research 16 (1): 26–40. doi:10.1080/09578810410001688716.
Bell, Simon, and Stephen Morse. 2008. Sustainability Indicators: Measuring the Immeasurable? London: Earthscan.
Bernstein, Steven, and Benjamin Cashore. 2012. “Complex Global Governance and Domestic Policies: Four Pathways of Influence.” International Affairs 88 (3): 585–604. doi:10.1111/j.1468-2346.2012.01090.x.
Bowler, Diana E., Lisette M. Buyung-Ali, John R. Healey, Julia P. G. Jones, Teri M. Knight, and Andrew S. Pullin. 2011. “Does Community Forest Management Provide Global Environmental Benefits and Improve Local Welfare?” Frontiers in Ecology and the Environment 10 (1): 29–36. doi:10.1890/110040.
Brockington, Dan, and Rosaleen Duffy. 2010. “Capitalism and Conservation: The Production and Re-production of Biodiversity Conservation.” Antipode 42 (3): 469–484. doi:10.1111/j.1467-8330.2010.00760.x.
Buntaine, Mark T., Bradley C. Parks, and Benjamin P. Buch. 2017. “Aiming at the Wrong Targets: The Domestic Consequences of International Efforts to Build Institutions.” International Studies Quarterly. doi:10.1093/isq/sqx013.
Canavire-Bacarreza, Gustavo, and Merlin M. Hanauer. 2013. “Estimating the Impacts of Bolivia’s Protected Areas on Poverty.” World Development 41: 265–285. doi:10.1016/j.worlddev.2012.06.011.
Cashore, Ben, Sarah Lupberger, and Sébastien Jodoin. 2016. Protocol for Policy Learning through the Pathways of Influence. New Haven, CT: Program on Forest Policy and Governance, Yale University.
Chetty, Raj, John N. Friedman, and Jonah E. Rockoff. 2014. “Measuring the Impacts of Teachers II: Teacher Value-Added and Student Outcomes in Adulthood.” American Economic Review 104 (9): 2633–2679.
CIFOR (Center for International Forestry Research). 2016. “Evidence-Based Forestry.” http://www1.cifor.org/ebf/home.html (accessed 30 August 2016).
Clements, Tom, Seng Suon, David S. Wilkie, and E. J. Milner-Gulland. 2014. “Impacts of Protected Areas on Local Livelihoods in Cambodia.” World Development 64 (S1): S125–S134. doi:10.1016/j.worlddev.2014.03.008.
Corson, Catherine. 2011. “Territorialization, Enclosure and Neoliberalism: Non-state Influence in Struggles Over Madagascar’s Forests.” Journal of Peasant Studies 38 (4): 703–726. doi:10.1080/03066150.2011.607696.
Davis, Diana K. 2009. “Historical Political Ecology: On the Importance of Looking Back to Move Forward.” Geoforum 40 (3): 285–286. doi:10.1016/j.geoforum.2009.01.001.
Davis, Kevin E., Angelina Fisher, Benedict Kingsbury, and Sally E. Merry. 2012. Governance by Indicators: Global Power through Quantification and Rankings. Oxford: Oxford University Press.
Elgert, Laureen. 2015. “Global Governance and Sustainability Indicators: The Politics of Expert Knowledge.” In Handbook of Critical Policy Studies, ed. Frank Fischer, Douglas Torgerson, Anna Durnová, and Michael Orsini, 341–357. Cheltenham: Edward Elgar.
Ferguson, James. 1994. The Anti-Politics Machine: “Development,” Depoliticization, and Bureaucratic Power in Lesotho. Minneapolis: University of Minnesota Press.
Ferraro, Paul J. 2009. “Counterfactual Thinking and Impact Evaluation in Environmental Policy.” Environmental Program and Policy Evaluation: New Directions for Evaluation 7 (122): 75–84.
Ferraro, Paul J., and Merlin M. Hanauer. 2014. “Quantifying Causal Mechanisms to Determine How Protected Areas Affect Poverty through Changes in Ecosystem Services and Infrastructure.” Proceedings of the National Academy of Sciences 111 (11): 4332–4337. doi:10.1073/pnas.1307712111.
Ferraro, Paul J., Merlin M. Hanauer, and Katharine R. E. Sims. 2011. “Conditions Associated with Protected Area Success in Conservation and Poverty Reduction.” Proceedings of the National Academy of Sciences 108 (34): 13913–13918. doi:10.1073/pnas.1011529108.
Ferraro, Paul J., and Subhrendu K. Pattanayak. 2006. “Money for Nothing? A Call for Empirical Evaluation of Biodiversity Conservation Investments.” PLoS Biology 4 (4): e105. doi:10.1371/journal.pbio.0040105.
Ferraro, Paul J., and Robert L. Pressey. 2015. “Measuring the Difference Made by Conservation Initiatives: Protected Areas and Their Environmental and Social Impacts.” Philosophical Transactions of the Royal Society B: Biological Sciences 370 (1681). doi:10.1098/rstb.2014.0270.
Ford, Rebecca M., Nerida M. Anderson, Craig Nitschke, Lauren T. Bennett, and Kathryn J. H. Williams. 2017. “Psychological Values and Cues as a Basis for Developing Socially Relevant Criteria and Indicators for Forest Management.” Forest Policy and Economics 78: 141–150. doi:10.1016/j.forpol.2017.01.018.
Garrett, Rachael D., and Agnieszka E. Latawiec. 2015. “What Are Sustainability Indicators For?” In Sustainability Indicators in Practice, ed. Agnieszka E. Latawiec and Dorice Agol, 12–22. Berlin: De Gruyter.
Gerber, Julien-François. 2011. “Conflicts Over Industrial Tree Plantations in the South: Who, How and Why?” Global Environmental Change 21 (1): 165–176. doi:10.1016/j.gloenvcha.2010.09.005.
Gilmour, Don. 2016. Forty Years of Community-Based Forestry: A Review of Its Extent and Effectiveness. FAO Forestry Paper No. 176. Rome: Food and Agriculture Organization of the United Nations.
Goldman, Michael. 1997. “‘Customs in Common’: The Epistemic World of the Commons Scholars.” Theory and Society 26 (1): 1–37. doi:10.1023/a:1006803908149.
Grainger, Alan. 2012. “Forest Sustainability Indicator Systems as Procedural Policy Tools in Global Environmental Governance.” Global Environmental Change 22 (1): 147–160. doi:10.1016/j.gloenvcha.2011.09.001.
Hajjar, Reem, Johan A. Oldekop, Peter Cronkleton, Emily Etue, Peter Newton, Aaron J. M. Russel, Januarti Sinarra Tjajadi, Wen Zhou, and Arun Agrawal. 2016. “The Data Not Collected on Community Forestry.” Conservation Biology 30 (6): 1357–1362. doi:10.1111/cobi.12732.
Holmes, George, and Connor J. Cavanagh. 2016. “A Review of the Social Impacts of Neoliberal Conservation: Formations, Inequalities, Contestations.” Geoforum 75: 199–209. doi:10.1016/j.geoforum.2016.07.014.
Humphreys, David. 2009. “Discourse as Ideology: Neoliberalism and the Limits of International Forest Policy.” Forest Policy and Economics 11 (5–6): 319–325. doi:10.1016/j.forpol.2008.08.008.
IEG (Independent Evaluation Group). 2013. Managing Forest Resources for Sustainable Development: An Evaluation of World Bank Group Experience. Washington, DC: World Bank.
Igoe, Jim, and Dan Brockington. 2007. “Neoliberal Conservation: A Brief Introduction.” Conservation and Society 5 (4): 432–449.
Jayachandran, Seema, Joost de Laat, Eric F. Lambin, Charlotte Y. Stanton, Robin Audy, and Nancy E. Thomas. 2017. “Cash for Carbon: A Randomized Trial of Payments for Ecosystem Services to Reduce Deforestation.” Science 357 (6348): 267–273.
Jean, Neal, Marshall Burke, Michael Xie, W. Matthew Davis, David B. Lobell, and Stefano Ermon. 2016. “Combining Satellite Imagery and Machine Learning to Predict Poverty.” Science 353 (6301): 790–794. doi:10.1126/science.aaf7894.
Jerven, Morten. 2013. Poor Numbers: How We Are Misled by African Development Statistics and What To Do About It. Ithaca, NY: Cornell University Press.
Jesson, Jill, Lydia Matheson, and Fiona M. Lacey. 2011. Doing Your Literature Review: Traditional and Systematic Techniques. Los Angeles: SAGE.
Joppa, Lucas N., and Alexander Pfaff. 2011. “Global Protected Area Impacts.” Proceedings of the Royal Society of London B: Biological Sciences 278 (1712): 1633–1638.
Kröger, Markus. 2014. “The Political Economy of Global Tree Plantation Expansion: A Review.” Journal of Peasant Studies 41 (2): 235–261. doi:10.1080/03066150.2014.890596.
Lam, Christie, Saumik Paul, and Vengadeshvaran Sarma. 2016. “Reversal of Fortune? The Long-Term Effect of Conservation-Led Displacement in Nepal.” Oxford Development Studies 44 (4): 401–419. doi:10.1080/13600818.2016.1149158.
Leach, Melissa, and Ian Scoones. 2013. “Carbon Forestry in West Africa: The Politics of Models, Measures and Verification Processes.” Global Environmental Change 23 (5): 957–967. doi:10.1016/j.gloenvcha.2013.07.008.
Levin, Kelly, Benjamin Cashore, Steven Bernstein, and Graeme Auld. 2012. “Overcoming the Tragedy of Super Wicked Problems: Constraining Our Future Selves to Ameliorate Global Climate Change.” Policy Sciences 45 (2): 123–152. doi:10.1007/s11077-012-9151-0.
Li, Tania M. 2007. The Will to Improve: Governmentality, Development, and the Practice of Politics. Durham, NC: Duke University Press.
Locatelli, Bruno, Varinia Rojas, and Zenia Salinas. 2008. “Impacts of Payments for Environmental Services on Local Development in Northern Costa Rica: A Fuzzy Multi-criteria Analysis.” Forest Policy and Economics 10 (5): 275–285. doi:10.1016/j.forpol.2007.11.007.
Lu, Yonglong, Nebojsa Nakicenovic, Martin Visbeck, and Anne-Sophie Stevance. 2015. “Policy: Five Priorities for the UN Sustainable Development Goals.” Nature 520: 432–433. doi:10.1038/520432a.
Lund, Jens Friis, Eliezeri Sungusia, Mathew Bukhi Mabele, and Andreas Scheba. 2017. “Promising Change, Delivering Continuity: REDD+ as Conservation Fad.” World Development 89: 124–139. doi:10.1016/j.worlddev.2016.08.005.
Mathevet, Raphael, Nancy Lee Peluso, Alexandre Couespel, and Paul Robbins. 2015. “Using Historical Political Ecology to Understand the Present: Water, Reeds, and Biodiversity in the Camargue Biosphere Reserve, Southern France.” Ecology and Society 20 (4): 17. doi:10.5751/ES-07787-200417.
McKinnon, Madeleine C., Samantha H. Cheng, Samuel Dupre, Janet Edmond, Ruth Garside, Louise Glew, Margaret B. Holland, et al. 2016. “What Are the Effects of Nature Conservation on Human Well-Being? A Systematic Map of Empirical Evidence from Developing Countries.” Environmental Evidence 5 (1): 1–25. doi:10.1186/s13750-016-0058-7.
McNally, Catherine G., Emi Uchida, and Arthur J. Gold. 2011. “The Effect of a Protected Area on the Tradeoffs between Short-Run and Long-Run Benefits from Mangrove Ecosystems.” Proceedings of the National Academy of Sciences 108 (34): 13945–13950. doi:10.1073/pnas.1101825108.
Merry, Sally Engle. 2011. “Measuring the World: Indicators, Human Rights and Global Governance. With CA Comment by John M. Conley.” Current Anthropology 52 (S3): S83–S95. doi:10.1086/657241.
Miller, Daniel C. 2013. “Conservation Legacies: Governing Biodiversity and Livelihoods around the W National Parks of Benin and Niger.” PhD diss., University of Michigan.
Miller, Daniel C., and Catherine Benson Wahlén. 2015. Understanding Long-Term Impacts in the Forest Sector: Predictive Proxy Indicators. Washington, DC: Program on Forests (PROFOR).
Miranda, Juan José, Leonardo Corral, Allen Blackman, Gregory Asner, and Eirivelthon Lima. 2016. “Effects of Protected Areas on Forest Cover Change and Local Communities: Evidence from the Peruvian Amazon.” World Development 78: 288–307. doi:10.1016/j.worlddev.2015.10.026.
Miteva, Daniela A., Subhrendu K. Pattanayak, and Paul J. Ferraro. 2012. “Evaluation of Biodiversity Policy Instruments: What Works and What Doesn’t?” Oxford Review of Economic Policy 28 (1): 69–92. doi:10.1093/oxrep/grs009.
Myers, Bronwyn, Rohan Fisher, Sam Pickering, and Stephen Garnett. 2014. “Post-project Evaluation of the Sustainability of Development Project Outcomes: A Case Study in Eastern Indonesia.” Development in Practice 24 (3): 379–389. doi:10.1080/09614524.2014.899320.
Naughton-Treves, Lisa, Margaret Buck Holland, and Katrina Brandon. 2005. “The Role of Protected Areas in Conserving Biodiversity and Sustaining Local Livelihoods.” Annual Review of Environment and Resources 30 (1): 219–252. doi:10.1146/annurev.energy.30.050504.164507.
Nayak, Prateep K., and Fikret Berkes. 2008. “Politics of Co-optation: Community Forest Management Versus Joint Forest Management in Orissa, India.” Environmental Management 41 (5): 707–718. doi:10.1007/s00267-008-9088-4.
Nelson, Andrew, and Kenneth M. Chomitz. 2011. “Effectiveness of Strict vs. Multiple Use Protected Areas in Reducing Tropical Forest Fires: A Global Analysis Using Matching Methods.” PLoS ONE 6 (8): e22722.
Neumann, Roderick P. 1998. Imposing Wilderness: Struggles over Livelihood and Nature Preservation in Africa. Berkeley: University of California Press.
Newton, Peter, Daniel C. Miller, Mugabi Augustine Ateenyi Byenkya, and Arun Agrawal. 2016. “Who Are Forest-Dependent People? A Taxonomy to Aid Livelihood and Land Use Decision-Making in Forested Regions.” Land Use Policy 57: 388–395. doi:10.1016/j.landusepol.2016.05.032.
Nolte, Christoph, and Arun Agrawal. 2013. “Linking Management Effectiveness Indicators to Observed Effects of Protected Areas on Fire Occurrence in the Amazon Rainforest.” Conservation Biology 27 (1): 155–165. doi:10.1111/j.1523-1739.2012.01930.x.
Offen, Karl H. 2004. “Historical Political Ecology: An Introduction.” Historical Geography 32: 19–42.
Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge: Cambridge University Press.
Ostrom, Elinor. 2009. “A General Framework for Analyzing Sustainability of Social-Ecological Systems.” Science 325: 419–422. doi:10.1126/science.1172133.
Pagdee, Adcharaporn, Yeon-su Kim, and P. J. Daugherty. 2006. “What Makes Community Forest Management Successful: A Meta-Study From Community Forests Throughout the World.” Society and Natural Resources 19 (1): 33–52. doi:10.1080/08941920500323260.
Pawson, Ray. 2006. Evidence-Based Policy: A Realistic Perspective. London: SAGE.
Peluso, Nancy Lee. 1992. Rich Forests, Poor People: Resource Control and Resistance in Java. Berkeley: University of California Press.
Peluso, Nancy Lee. 1993. “Coercing Conservation? The Politics of State Resource Control.” Global Environmental Change 3 (2): 199–217.
Petrokofsky, Gillian, Peter Holmgren, and Nick D. Brown. 2011. “Reliable Forest Carbon Monitoring: Systematic Reviews as a Tool for Validating the Knowledge Base.” International Forestry Review 13 (1): 56–66.
Pfaff, Alexander, Juan Robalino, Eirivelthon Lima, Catalina Sandoval, and Luis Diego Herrera. 2014. “Governance, Location and Avoided Deforestation from Protected Areas: Greater Restrictions Can Have Lower Impact, Due to Differences in Location.” World Development 55: 7–20. doi:10.1016/j.worlddev.2013.01.011.
Pullin, Andrew S., Mukdarut Bangpan, Sarah Dalrymple, Kelly Dickson, Neal R. Haddaway, John R. Healey, Hanan Hauari, et al. 2013. “Human Well-Being Impacts of Terrestrial Protected Areas.” Environmental Evidence 2 (1): 19. doi:10.1186/2047-2382-2-19.
Pullin, Andrew S., and Teri M. Knight. 2001. “Effectiveness in Conservation Practice: Pointers from Medicine and Public Health.” Conservation Biology 15 (1): 50–54. doi:10.1111/j.1523-1739.2001.99499.x.
Pullin, Andrew S., and Teri M. Knight. 2003. “Support for Decision Making in Conservation Practice: An Evidence-Based Approach.” Journal for Nature Conservation 11 (2): 83–90. doi:10.1078/1617-1381-00040.
Puri, Jyotsna, and Bharat Dhody. 2016. “Missing the Forests for the Trees? Assessing the Use of Impact Evaluations in Forestry Programmes.” In Sustainable Development and Disaster Risk Reduction, ed. Juha I. Uitto and Rajib Shaw, 227–245. Tokyo: Springer Japan.
Rangan, Haripriya. 1997. “Property vs. Control: The State and Forest Management in the Indian Himalaya.” Development and Change 28 (1): 71–94. doi:10.1111/1467-7660.00035.
Rasmussen, Laura Vang, Rosina Bierbaum, Johan A. Oldekop, and Arun Agrawal. 2017. “Bridging the Practitioner-Researcher Divide: Indicators to Track Environmental, Economic, and Sociocultural Sustainability of Agricultural Commodity Production.” Global Environmental Change 42: 33–46. doi:10.1016/j.gloenvcha.2016.12.001.
Rigg, Jonathan, and Peter Vandergeest, eds. 2012. Revisiting Rural Places: Pathways to Poverty and Prosperity in Southeast Asia. Honolulu: University of Hawaii Press.
Sackett, David L., William M. C. Rosenberg, J. A. M. Gray, R. Brian Haynes, and W. Scott Richardson. 1996. “Evidence-Based Medicine: What It Is and What It Isn’t. It’s About Integrating Individual Clinical Expertise and the Best External Evidence.” British Medical Journal 312 (7023): 71–72.
Sandbrook, Chris, William M. Adams, Bram Büscher, and Bhaskar Vira. 2013. “Social Research and Biodiversity Conservation.” Conservation Biology 27 (6): 1487–1490. doi:10.1111/cobi.12141.
Sanderson, Ian. 2002. “Evaluation, Policy Learning and Evidence-Based Policy Making.” Public Administration 80 (1): 1–22. doi:10.1111/1467-9299.00292.
Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven, CT: Yale University Press.
Scullion, Jason J., Kristiina A. Vogt, Alison Sienkiewicz, Stephen J. Gmur, and Cristina Trujillo. 2014. “Assessing the Influence of Land-Cover Change and Conflicting Land-Use Authorizations on Ecosystem Conversion on the Forest Frontier of Madre de Dios.” Biological Conservation 171: 247–258. doi:10.1016/j.biocon.2014.01.036.
Shah, Payal, and Kathy Baylis. 2015. “Evaluating Heterogeneous Conservation Effects of Forest Protection in Indonesia.” PLoS ONE 10 (6): e0124872. doi:10.1371/journal.pone.0124872.
Sims, Katharine R. E. 2010. “Conservation and Development: Evidence from Thai Protected Areas.” Journal of Environmental Economics and Management 60 (2): 94–114. doi:10.1016/j.jeem.2010.05.003.
Somanathan, E., R. Prabhakar, and Bhupendra Singh Mehta. 2009. “Decentralization for Cost-Effective Conservation.” Proceedings of the National Academy of Sciences 106 (11): 4143–4147. doi:10.1073/pnas.0810049106.
Stem, Caroline, Richard Margoluis, Nick Salafsky, and Marcia Brown. 2005. “Monitoring and Evaluation in Conservation: A Review of Trends and Approaches.” Conservation Biology 19 (2): 295–309. doi:10.1111/j.1523-1739.2005.00594.x.
Strathern, Marilyn. 2000. Audit Cultures: Anthropological Studies in Accountability, Ethics, and the Academy. London: Routledge.
Sutherland, William J., Andrew S. Pullin, Paul M. Dolman, and Teri M. Knight. 2004. “The Need for Evidence-Based Conservation.” Trends in Ecology and Evolution 19 (6): 305–308. doi:10.1016/j.tree.2004.03.018.
UN (United Nations). 2008. “Millennium Development Goals Indicators.” http://mdgs.un.org/unsd/mdg (accessed 30 August 2016).
UN (United Nations). 2015. “Sustainable Development Goals.” https://sustainabledevelopment.un.org/sdgs (accessed 30 August 2016).
UN (United Nations). 2016. “IAEG-SDGs: Inter-agency Expert Group on SDG Indicators.” http://unstats.un.org/sdgs/iaeg-sdgs (accessed 30 August 2016).
Vincent, Jeffrey R. 2016. “Impact Evaluation of Forest Conservation Programs: Benefit-Cost Analysis, Without the Economics.” Environmental and Resource Economics 63 (2): 395–408. doi:10.1007/s10640-015-9896-y.
Wahlén, Catherine B. 2014. “Constructing Conservation Impact: Understanding Monitoring and Evaluation in Conservation NGOs.” Conservation and Society 12 (1): 77–88. doi:10.4103/0972-4923.132133.
West, Paige, James Igoe, and Dan Brockington. 2006. “Parks and Peoples: The Social Impact of Protected Areas.” Annual Review of Anthropology 35 (1): 251–277. doi:10.1146/annurev.anthro.35.081705.123308.
Woolcock, Michael. 2013. “Using Case Studies to Explore the External Validity of ‘Complex’ Development Interventions.” Evaluation 19 (3): 229–248. doi:10.1177/1356389013495210.
World Bank. 2014. Results Framework and M&E Guidance Note. Washington, DC: World Bank.