Required rethink on what is evidence

Evidence-based approaches to sustainability challenges must draw on knowledge from the environment, development and health communities. To be practicable, this requires an approach to evidence that is broader and less hierarchical than the standards often applied within disciplines.

Heather Tallis and I lead science programs for The Nature Conservancy (TNC). Much of the work we and our teams do is gathering and interpreting evidence about interventions TNC is either considering or already implementing. Increasingly, these interventions are aimed at delivering outcomes that extend beyond conservation to development or health. This is not because our organisation is changing its mission; rather, it's the result of a deepening awareness that the many linked challenges facing people and nature need to be matched by cross-disciplinary solutions. And we're clearly not alone in this awareness; a look at the UN Sustainable Development Goals illustrates nicely how intertwined the challenges facing people and nature are. Consider the use of fire to clear tropical peat forests: fire is an important tool for agricultural production, but it leads to significant carbon emissions, the loss of forests and associated biodiversity, and human respiratory illness and mortality linked to smoke. These are complex problems. The opportunity, or need, for shared solutions across the health, development and environment communities is clear.

Assessing the strength of evidence that implementing an intervention will result in a particular outcome is a critical step in our decision making about whether, when and where to pursue an intervention (or which of many, possibly untested, interventions to pursue). But work that transgressed disciplinary boundaries was challenging our understanding of evidence. Each discipline had developed its own approach to understanding evidence, based largely on the sorts of studies typically available in its field. Uncertainty about how to reconcile these different views of evidence permeated the professional communities, taking both implementing groups (like us) and funders out of their comfort zones. This had become a major barrier to our work and, in all likelihood, to the sort of solutions needed to achieve the SDGs. Fortunately, folks at the David and Lucile Packard Foundation shared our belief that this was a barrier we needed to try to overcome, and they provided the resources for us to bring a group together to propose a solution.

We were fortunate to be able to assemble a terrific set of people from disciplines including environmental management, development economics, health, law and philosophy, in a striking setting in Iceland. Heather, Lydia Olander (Duke University) and I thought we were going to develop a sort of synthetic evidence grading scheme that would be both reasonable and practical for the cross-disciplinary interventions we needed to evaluate. But it was one of those workshops that didn't go at all as we expected, and the diversity of views of evidence among the 14 people in the room rapidly disabused us of that notion. My personal understanding of evidence was fundamentally challenged, and ultimately changed, by the discussions of this group.

We agreed that a results chain or causal chain should form the basis of evidence assessment; indeed, some form of these chains is a common representation of causality in all the disciplines we considered. But beyond this, the way each of us approached evidence was substantially different. For instance, I had been operating with the implicit assumption that the results chain would be developed first and that we would then assess the evidence in support of its linkages. Nancy Cartwright pointed out that the results chain is itself a piece of evidence; if this were not the case, a random collection of nodes and linkages would be an equally valid starting point.

Insights were laboured and hard won, but eventually we came to appreciate that what was needed was not another evidence grading scheme but a broader and less hierarchical approach to evidence. The key breakthrough was recognising that, despite the many differences in the ways each of us would assess evidence, there was a set of characteristics we all agreed were indicative of stronger evidence, regardless of evidence type or discipline. These became the cross-disciplinary evidence principles we propose in our paper.

The final manuscript also reflects important input from the review process at Nature Sustainability. The reviewers highlighted that even with our cross-disciplinary view of evidence (a much broader view than most conceptualize), we were still taking a somewhat narrow view of knowing, albeit one that dominates the disciplines we all work in. Acknowledging the ontological and epistemological assumptions in our proposal required a great deal of precision in crafting the manuscript.

Our hope is that the foundational evidence principles presented in the paper will facilitate more effective and confident assessment of evidence on interventions operating across the domains of development, environment and health, and that this will give implementers, funders and policy makers the confidence needed to take on more cross-disciplinary solutions. The feedback on our proposed principles has been positive so far. We will be making immediate use of these principles at The Nature Conservancy, and we see this cross-disciplinary approach to evidence as so critical that we helped found the Bridge Collaborative (together with PATH, IFPRI and Duke University), explicitly to advance a shared understanding of evidence across health, development and environment organisations.
