Measure for Measure
Those who fund UK research, including the public, should expect to know about the outputs of that work. However, as Allan Sudlow discovered, this is a complex and expensive activity that needs better co-ordination.
The UK does not currently have a national reporting infrastructure that brings together all the information on publicly and charity-funded research. At a high level, such a system would allow those with access to evaluate inputs (e.g. money, people, time) against outputs (e.g. publications, patents, data). Because no such unified system exists, it is impossible to look across different sources of funding in any detailed way to assess the impact of research at a national or international level. And this ignores the even more difficult task of evaluating the longer-term benefits of that research, for example identifying how investing money, time and effort in biomedical research has led to improvements in human health.
That's not to say people aren't trying. In fact, a large number of people employed by organisations that receive public funding for research, e.g. universities, are working to bring together all the information on their institute's research spend and outputs for reporting and evaluation. Similarly, the UK Government, the UK Research Councils, research universities, institutes and a huge number of different charities, foundations and trusts have invested in IT systems and people to do just the same. A big driver for much of this activity across UK universities is the forthcoming Research Excellence Framework (REF), which in 2014 will evaluate the research outputs and impacts arising from all government-funded higher education institutes across the UK.
Until fairly recently, however, this investment in IT systems and people has not been co-ordinated, so research organisations across the UK are at different levels of maturity in managing research information. Some larger organisations have invested in commercial systems such as ResearchFish. Others, particularly smaller organisations with limited resources, have developed in-house systems to gather information, and many still rely on storing data in spreadsheets and preparing information by hand. This has inevitably resulted in duplication and increased costs due to inefficiency across the sector as a whole.
What's the measure? Copyright Photos.com.
So why isn't it all coalescing into a single dedicated system for research evaluation? Well, aside from the many different motivations for developing research information systems, there are the complexities of all the different stakeholder views. For example, beyond a simple agreement that "we need to gather information on X", there then needs to be agreement on what exactly can and should be measured, how often, for how long, in what format and structure, and so on.
Having said all that, there is a range of projects and developments attempting to bring some coherence to the world of research reporting. Some of this is happening by default, as organisations begin to use the same IT systems, and some of it is being led top-down by UK Government projects such as Gateway to Research, which attempt to provide some level of visibility of, and access to, research information for people outside the academic research community.
In a bottom-up approach, I am involved in a JISC-funded feasibility study called the UK Research Information Shared Service (UKRISS). This project has examined the motivations and needs of those involved in research reporting, alongside an analysis of the current landscape of research information systems and standards. Our aim is to define an approach, based on a common research information format called CERIF, that allows better sharing and benchmarking of research information across organisations that are already using different systems. A small attempt to tackle what remains a big challenge.
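To give a flavour of what a common format buys you, the fragment below is a simplified, illustrative sketch of the kind of record a CERIF-based system might exchange: a project and one of its outputs, each expressed as structured, machine-readable fields rather than rows in a local spreadsheet. The element names follow the CERIF XML pattern, but the identifiers and values are invented for illustration and this is not a complete, validating CERIF document.

```xml
<CERIF xmlns="urn:xmlns:org:eurocris:cerif-1.6-2">
  <!-- A research project (an "input" side record), with an identifier
       that lets different systems refer to the same project -->
  <cfProj>
    <cfProjId>proj-0001</cfProjId>
    <cfTitle cfLangCode="en" cfTrans="o">Example biomedical study</cfTitle>
    <cfStartDate>2012-01-01</cfStartDate>
    <cfEndDate>2014-12-31</cfEndDate>
  </cfProj>
  <!-- A publication (an "output" side record), which CERIF allows
       to be linked back to the project that produced it -->
  <cfResPubl>
    <cfResPublId>publ-0001</cfResPublId>
    <cfTitle cfLangCode="en" cfTrans="o">Findings from the example study</cfTitle>
  </cfResPubl>
</CERIF>
```

Because every participating system emits the same structure, inputs and outputs can in principle be aggregated and benchmarked across institutions without hand-built mappings between one organisation's spreadsheets and another's database.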