URE has its roots in knowledge utilization, which Backer (1991) describes as "research, scholarly, and programmatic intervention activities aimed at increasing the use of knowledge to solve human problems" (p. 226) in domains such as education, employment and healthcare. Importantly, the core components of the field—"evidence" and "use"—are broadly construed, incorporating a range of types of evidence, inclusive of research, and admitting varied forms and purposes of 'use', such as the categories of instrumental, conceptual, political/strategic, and symbolic commonly featured in evidence use scholarship (Weiss, 1979). The study of URE, at its core, focuses on understanding the formation of policy and practice and the role(s) evidence has in that process. In effect, this includes inquiry into how decisions are made; how research outputs reach decision-makers and how they respond to them; but also the rest of the research life-cycle. How are evidence bases created? What is funded, and how? Who is involved in setting research priorities, when and why; and in the production, interpretation and mobilization of knowledge?

Historically, the study of URE has come in and out of focus, peaking in the 1970s and early 1980s, a period during which many seminal works were produced and published in journals such as Knowledge: Creation, Diffusion, and Utilization, as well as in flagship journals such as Administrative Science Quarterly. The last 20 years have also seen the re-emergence of evidence use as a focus of inquiry; this time, explicitly recognizing the importance of the connections between how evidence is produced (through funding, research practices, partnerships and infrastructures), and how it is used (communicated, disseminated, received and responded to). In practical terms, this can be seen in the re-establishment of venues for scholarship, such as Evidence and Policy, the U.K.'s What Works Centres, the U.S. Institute for Education Sciences' investment in two knowledge utilization centers, and philanthropic support for the study of evidence use by the William T. Grant Foundation.

Alongside these practical investments, there has been a widening recognition that the core concerns of the URE community are shared by other scholars (Oliver and Boaz, 2019; Tseng, 2012; Tseng and Coburn, 2019). The study of the production of evidence has generally been undertaken under the broad umbrellas of science policy, research assessment or evaluation, and science and technology studies (see, e.g., Waltman et al., 2019; Ioannidis et al., 2015; Munafò et al., 2017; Jasanoff, 2004; Watermeyer, 2014), whereas the study of use has belonged to the more applied social sciences (e.g., health and education; see Boaz et al., 2019a; Bauer et al., 2015; Bransford et al., 2009; Finnigan and Daly, 2014). There are of course exceptions to this rule, such as the study of innovation and technology transfer within the STS community, and the engaged research movement (Nightingale, 1998; Holliman, 2019). Yet the communities of scholars working on these problems have always recognized that there are links between the production and use of knowledge. For example, both camps recognize that research is more likely to be used/implemented/acted on if users are involved in setting research priorities (e.g., Thornicroft et al., 2002; Acworth, 2008; Martin, 2010; Nolan et al., 2007; Penuel et al., 2015; Van der Meulen, 1998). But until recently there have been few attempts to join these different disciplines in a way that satisfyingly exploits what we can learn from these different perspectives. There have been recent attempts to engage with this interdisciplinary community through conferences and institutes aiming to tackle the 'emerging field' of meta-science, research-on-research, and other variations on a similar theme (Oliver and Boaz, 2019).

Of course, the study of how evidence is produced and used is hardly ‘emerging’. Indeed, scholars and practitioners from fields as diverse as computer science to clinical medicine; from conservation to art history have made, and continue to make contributions to this field (see, e.g., Feyerabend, 1961; Mukerji, 2001; Parkhurst, 2017). The tendency to claim this as a new field means that we are both failing to learn from each other, and seeing this learning itself as an unimportant task. There is a danger that we will promise collaborators that there are new, quick fixes to the old, complex problems which are inherent in the relationship between evidence, policy and practice (see, e.g., French, 2018; Haskins, 2018). For funders in particular, the promise of a transformative new approach that will maximize research impact can feel too good an opportunity to miss. As a consequence, opportunities for thinking and learning across disciplines to tackle thorny problems are repeatedly lost. We have to be honest about what this means for the quality of scholarship in this area.

However, the diversity in the field is potentially a huge strength, bringing in theoretical perspectives, methods, and approaches that help us to find new perspectives on the complex problem of evidence use. Yet there are also challenges. We have no way to describe this larger universe of funders, scholars, practitioners and policymakers; we struggle to find each other as a community, as such an inherently multi-disciplinary or inter-disciplinary field of study will rarely meet at conferences or virtually. We use different language to talk about what we do and how, and we promote our work in different spaces. We may replicate each other's work, or solve the same problems over and over again, seldom realizing that we are working in parallel. This work is carried on in different geographies, disciplines, sectors and policy domains. But we have as yet no clear way to cross the boundaries.

Improving the use of evidence is one of the major policy and practice challenges of our age (Commission on Evidence-Based Policymaking, 2017). We have more data available to us, and researchers are better equipped and more outward facing than ever before (Boaz et al., 2019b). Populations hold their governments to account, and researchers are under pressure to demonstrate the impact and public value of what they do. Ethically, morally, and practically, we should, as a community, learn better from each other, and take our lessons back to our home disciplines. To do this, it is imperative that those working on this set of related problems learn from each other more effectively. A first step in this process is to identify the different scholarly communities working on this set of problems, and characterize them and their discourses.

In this paper, we share our exploration of a scholarly community that emerged from the leadership of a foundation, conducted as a means to preliminarily map the URE community, and identify opportunities for expansion and coordination amongst the wider community. Specifically, we sought to better understand the strengths and challenges facing the URE community by answering the following questions: To what extent does URE span traditional boundaries of research, practice, and policy? Of different practice/policy fields? Of different disciplines?

We acknowledge this work represents only a partial view of the larger URE space, as will be evident in our data below, but also believe it is illustrative of the challenges facing the larger scholarly community and can serve to foster broader discussion and debate about the future of URE work.


We approached the challenge of “mapping” the scholarly community through an iterative surveying approach between 2016–2018. The survey was designed for the express purposes of mapping the landscape of URE work by asking participants to:

  • characterize their own discipline, role, and policy/practice sector,

  • identify key scholars whose work has influenced their own perspectives and work,

  • identify key references which have influenced their own URE work.

This yielded descriptive information about work happening in this space, as well as network data in which individuals and disciplines are linked via referential relationships.


Our sample consists of a community of scholars and practitioners affiliated with URE efforts, particularly those tied to the work of the William T. Grant (US) and Nuffield foundations (UK). As the authors of this report were linked through the William T. Grant Foundation's Using Research Evidence initiative, we began by approaching those invited to their annual gathering in 2016 (n = 102). We recognize that the invited nature of the event limits the initial sample in many ways. It over-represents the United States, as well as scholars funded by the foundation, which has a substantive focus on child education and welfare. Nonetheless, the event has grown from grantees to a broader set of U.S. scholars, as well as international scholars, policy leaders, and practitioners across policy areas. Moreover, as far as we are aware, it is the only academic conference to focus specifically on the study of the use of research evidence, and is therefore a reasonable seed sample to begin with.

In order to grow from this initial set of participants, we used a snowball approach, asking respondents to identify up to five individuals they would consult, either in person or through their work, about the use of research evidence. We then invited those individuals to participate, achieving a total of 80 respondents of the 219 ultimately invited (39 from the 102 in the original sample, and 41 from the additional 117 identified through the snowball method). This approach helped us to better represent the URE landscape, as well as understand the potential scope of networks within a larger community; although, as indicated above, we as yet have only a partial understanding of who is working on this, and how.

Our sampling concluded in 2018 with the inaugural Transforming Evidence meeting, an international convening of scholars, practitioners and funders hosted by the Nuffield Foundation in London. This meeting had an explicit remit to cross disciplinary and sectoral boundaries. Participants were invited as leading scholars, funders, or practitioners with profile within, and knowledge of their communities. The survey was administered for the final time to this set (n = 54), and the data combined for a total of 134 participants.


We rely on three analytical approaches: descriptive statistics, social network analysis and bibliometric analysis. First, we employ a basic descriptive approach to consider the composition of the URE community and the varied ways in which scholars and practitioners identify with URE work. More specifically, we describe the community in terms of discipline and policy area, present ways in which work is funded, and examine the keywords scholars use to describe their work and the fields to which they contribute.

Second, we utilize social network analysis to preliminarily explore the interpersonal links within this community. Participants were asked to identify up to five names of individuals who were either leading scholars who influenced their own work and perspectives on evidence production and use, or were advocates for better use of evidence, or better study of evidence. These individuals were characterized either through their survey response or by the researchers based on their publicly available biographies, and then added to the network. If multiple respondents nominated the same individual, they all have ties to that person. The characteristics of respondents and nominees were collated and used to analyze the network in terms of disciplines.
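The assembly of the nomination network described above can be sketched as follows. This is a hedged illustration only: the respondent names, nominee names, and disciplines are invented placeholders, not actual survey data, and the paper's own analysis was conducted in UCINet rather than Python.

```python
# Hypothetical sketch of assembling the nomination network as a directed
# edge list; all names and disciplines below are invented placeholders.
nominations = {
    "respondent_a": ["scholar_x", "scholar_y"],
    "respondent_b": ["scholar_x", "scholar_z"],
}
disciplines = {
    "respondent_a": "education",
    "respondent_b": "health",
    "scholar_x": "sociology",
    "scholar_y": "education",
    "scholar_z": "health",
}

# Directed ties run from nominator to nominee; a scholar nominated by
# several respondents keeps a separate incoming tie from each of them.
edges = [
    (respondent, nominee)
    for respondent, nominees in nominations.items()
    for nominee in nominees
]

# In-degree then counts how many respondents nominated each individual.
in_degree = {}
for _, nominee in edges:
    in_degree[nominee] = in_degree.get(nominee, 0) + 1
```

Keeping one tie per nominating respondent, rather than collapsing duplicates, preserves the information that some individuals are focal points nominated by many parts of the community.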

To explore the resulting network, we use UCINet to generate basic descriptors of network cohesiveness: density (the proportion of ties reported out of all possible ties among respondents) and fragmentation (the proportion of pairs of respondents that are "unreachable" through existing ties within the network). However, our approach to data collection—both limiting respondents to five nominations, as well as the high level of missingness resulting from response rates—creates limitations for the use of these metrics, which, particularly the whole-network measures, are sensitive to incomplete data.
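The two whole-network measures named above can be computed directly from an edge list. The sketch below, on a small invented directed graph, mirrors the definitions in the text (UCINet was the actual tool used; this is only an illustration of the measures themselves).

```python
# Toy directed nomination network: three invented ties among five nodes.
edges = [(1, 2), (2, 3), (4, 5)]
nodes = {n for edge in edges for n in edge}
n = len(nodes)

# Density: observed directed ties over all possible ordered pairs.
density = len(edges) / (n * (n - 1))

# Fragmentation: proportion of ordered pairs with no connecting path.
adj = {node: set() for node in nodes}
for src, dst in edges:
    adj[src].add(dst)

def reachable_from(start):
    """All nodes reachable from `start` via directed ties (excluding itself)."""
    seen, stack = set(), [start]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

reachable_pairs = sum(len(reachable_from(node)) for node in nodes)
fragmentation = 1 - reachable_pairs / (n * (n - 1))
```

In this toy graph, density is 3/20 = 0.15 and fragmentation is 0.8, since only 4 of the 20 ordered pairs are connected by a path. Both quantities fall as more ties are observed, which is why missing responses bias them as the text notes.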

Because of these limitations, we additionally consider homophily, an indicator of the extent to which an individual tends to associate with others like them. In our case, we explore homophily of disciplines, examining the extent to which nominated individuals are within one's own disciplinary space or not. These measures are not sensitive to missingness in the data as they are calculated from individuals' own responses. We generated homophily statistics by creating ego networks in UCINet for each respondent, calculating the percentage of same-discipline scholars in their network.

Finally, we asked all respondents to nominate up to five key references of scholarly works which had influenced their own perspective about and work on how evidence was produced and used. These were compiled and descriptively analyzed to identify patterns and trends in influential works in the field. These data complement the social network analysis and provide further insight into the extent to which the community is drawing on a shared knowledge base or a more fragmented one.
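The descriptive tally of nominated references amounts to a frequency count across respondents' lists. A minimal sketch, using invented citation strings rather than actual survey responses:

```python
from collections import Counter

# Each inner list is one (invented) respondent's set of nominated references.
nominated_refs = [
    ["Weiss 1979", "Backer 1991"],
    ["Weiss 1979", "Tseng 2012"],
    ["Weiss 1979"],
]

# Count how often each work was nominated across all respondents.
counts = Counter(ref for refs in nominated_refs for ref in refs)
most_influential = counts.most_common(1)  # the most frequently nominated work
```

Works nominated by many respondents suggest a shared knowledge base; a long tail of singleton nominations would instead indicate the fragmentation discussed earlier.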
