An institutional repository is a tool for research managers to monitor, manage and assess research activities within an institution. If the repository is collecting all research outputs, institutional managers can develop procedures for using the repository to answer questions such as:
* What is being published from this institution?
* Who is producing what?
* Where is it being published / performed / installed?
* How much impact is it having (as measured by citations and other indicators)?
* Where are the upward trends?
* Where are the downward trends?
* How much collaborative work is being done?
* With whom?
* In which other institutions?
* What results are we getting for the money we put into our physics department?
* Are our strongest research departments attracting the right numbers of students?
As all universities establish their own digital repositories, institutional managers will be able to use the growing worldwide Open Access corpus to help answer other types of question, too, such as:
* How does our impact compare to theirs?
* Is our chemistry department producing better research than theirs?
* How does our return on investment look in comparison to theirs?
* Are our collaborative research programmes doing well?
* Is our research competitive in comparison to our closest competitor institutions?
Measuring repository usage
Software is available that measures usage of a repository - visits, views and downloads of each item. Usage figures are collected in a database and users can request usage statistics, in numerical or graphical form, for an individual article or for, say, all articles published by a research group. To view a 'live' example of this kind of graphical display, please click here. Authors are enthusiastic about being able to see the level of usage of their articles and where that usage comes from, and this encourages them to deposit their work. Research managers can use such data to help inform their planning and investment activities.
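As a simple illustration of what such software does, the sketch below aggregates raw usage events into per-item monthly download counts - the kind of series behind the graphical displays described above. The schema, table and field names are invented for this example; real statistics packages for repository platforms use their own schemas and apply filtering (for robots, double-clicks and so on) that is omitted here.

```python
# A minimal sketch of repository usage aggregation, assuming a hypothetical
# 'usage_events' table. Real packages use their own schemas and filtering.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE usage_events (
        item_id    INTEGER,   -- repository item identifier
        event_type TEXT,      -- 'view' (abstract page) or 'download' (full text)
        event_date TEXT       -- ISO date of the request
    )
""")
conn.executemany(
    "INSERT INTO usage_events VALUES (?, ?, ?)",
    [(101, "download", "2009-01-05"),
     (101, "view",     "2009-01-06"),
     (101, "download", "2009-02-11"),
     (102, "download", "2009-02-14")],
)

# Monthly downloads per item, ready to plot or tabulate for an author
# or a research group.
rows = conn.execute("""
    SELECT item_id, substr(event_date, 1, 7) AS month, COUNT(*) AS downloads
    FROM usage_events
    WHERE event_type = 'download'
    GROUP BY item_id, month
    ORDER BY item_id, month
""").fetchall()

for item_id, month, downloads in rows:
    print(f"item {item_id}: {downloads} download(s) in {month}")
```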
Measuring research impact
It is possible to measure not just usage but also impact, in the form of citations to articles in a repository. Software like Citebase does this. At the moment Citebase works only on arXiv, the high-energy physics Open Access database, because there is enough material there (over half a million articles) to permit meaningful measurement of citations between articles. As the Open Access corpus grows worldwide in institutional repositories, Citebase and tools like it will be able to work effectively on the whole scholarly literature.
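The underlying idea can be shown with a toy example: once a corpus holds both the articles and their reference lists, citations between items in that corpus can be counted directly. The identifiers and reference data below are invented, and Citebase's actual reference parsing and matching is far more sophisticated than this sketch.

```python
# A toy sketch of corpus-internal citation counting, the idea behind tools
# like Citebase. All identifiers and reference lists here are invented.
from collections import Counter

# article id -> ids of corpus articles its reference list cites
references = {
    "arXiv:0901.0001": ["arXiv:0812.0042", "arXiv:0811.0007"],
    "arXiv:0901.0002": ["arXiv:0812.0042"],
    "arXiv:0902.0003": ["arXiv:0901.0001", "arXiv:0812.0042"],
}

# Count inbound citations for every cited item in the corpus.
inbound = Counter()
for citing, cited_ids in references.items():
    for cited in cited_ids:
        inbound[cited] += 1

for article, count in inbound.most_common():
    print(f"{article}: cited {count} time(s) within the corpus")
```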
Research assessment
National-level research assessment exercises are now well established in the UK and Australia, and other countries are looking at introducing similar procedures. In Australia, university repositories have been used as the required locus for material to be included in the assessment exercise. In the UK this has not been a formal requirement up until now, but future exercises will be 'metrics-based' and the use of institutional repositories to collect together the assessable material is very likely. Some universities have used the 2008 UK national Research Assessment Exercise to prepare for this, putting in place procedures for gathering the research output of the institution into the repository in a way that suits the requirements of the Assessment Exercise and smooths the workflow within the institution. An account of how this process was tackled at the University of Southampton is here.
Research metrics
As national research assessment exercises take root (the prime examples currently are in the UK and Australia), there is an increasing focus on the development of metrics that can measure aspects of research activity. The only metric that has been in widespread use hitherto is the Journal Impact Factor (JIF), developed by the Institute for Scientific Information, ISI (now part of the Thomson Reuters group). The metric was originally developed as a measure of the impact of individual journals and was intended as a tool for publishers. It is calculated annually for all the journals covered by Thomson’s Web of Science (WoS) database, around 10,000 titles. The JIF has prevailed for over two decades because no other provider had the breadth and depth of content needed to create alternative, meaningful metrics from the research literature. This is now changing, with other publishers developing rival databases. The most important development, though, is the growing Open Access literature, which will provide bibliometricians around the world with the raw material to develop many new metrics for measuring and assessing research.
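The JIF itself has a simple published definition: for a given year, the citations received that year by the items a journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch, with invented figures:

```python
# The standard Journal Impact Factor calculation. The figures in the
# example call are invented for illustration.
def journal_impact_factor(citations_in_year: int, citable_items: int) -> float:
    """citations_in_year: citations received this year to the previous two
    years' items; citable_items: number of citable items in those two years."""
    return citations_in_year / citable_items

# e.g. 450 citations in 2008 to the 300 articles a journal published in 2006-2007
print(round(journal_impact_factor(450, 300), 3))  # -> 1.5
```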
Citation analysis is just one way to analyse the research literature. Some other candidate metrics are:
* Citations
* “CiteRank” (like Google)
* Co-citations
* Downloads
* Citation/download correlations
* Hub/authority indicators
* Chrono-indicators: latency, longevity
* Endogamy/exogamy patterns
* Book citation indices
* Links (in and out)
* Commentaries
* h-index (and its variants; a short computational sketch follows this list)
* Total co-authors
* Articles published
* Years of publication
* Semiometrics (latent semantic indexing, text-patterns etc.)
* Citation analysis
* Grants
* Number of research students
* Prizes, medals
(list courtesy of Stevan Harnad)
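To take one item from the list above, the h-index has a compact definition: the largest number h such that an author has h papers each cited at least h times. A minimal sketch, with invented citation counts:

```python
# Compute the h-index from a list of per-paper citation counts.
# The example counts below are invented for illustration.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:   # the paper at this rank has enough citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations each
```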
For an overview of the current situation with respect to developing research metrics, click here.
For an overview of how research performance may be measured using an Open Access literature, click here.
Citation analysis for books and monographs
Institutional Open Access policies are designed to collect mainly journal articles, the literature which academic authors write with no expectation of direct financial reward. As is clear from the list above, citation analysis is just one tool for assessing research effort in a university, but it is an extremely useful one. It is particularly appropriate for science and technology research, where researchers publish many papers and cite the work of others heavily, but less so for the social sciences and especially the humanities, where journal articles are not the predominant form of output. In these disciplines monographs tend to be more important, and citing behaviour is concomitantly very different. A repository in a typical university will therefore contain large numbers of journal articles but few full-text books. This means that unless the right policy has been put in place - one that requires the deposit of at least the metadata for books (author, title, institutional affiliation, abstract and references or bibliography) - citations to and from books and monographs will be lost.
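What such a metadata-only record might contain can be sketched as a simple structure. The field names below are illustrative rather than any particular repository's schema; the essential point is that the bibliography travels with the record even when the full text does not, so citations from the book remain traceable.

```python
# A sketch of a metadata-only deposit for a monograph. All field names and
# values are hypothetical, not any real repository's schema.
book_record = {
    "type": "monograph",
    "title": "An Example Monograph on Early Modern Trade",   # hypothetical
    "authors": [{"name": "A. N. Author", "affiliation": "Example University"}],
    "abstract": "A short summary of the book's argument.",
    "date": "2008",
    "full_text_deposited": False,
    # The references are what make citation analysis possible for books:
    "bibliography": [
        "B. Writer (2001) Cited Work One. Example Press.",
        "C. Scholar (2005) Cited Work Two. Another Press.",
    ],
}
print(f"{book_record['title']}: {len(book_record['bibliography'])} references recorded")
```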
Developing the technical infrastructure for research management
Many institutions have some form of institutional research management system (IRMS, sometimes called a CRIS – Current Research Information System) which draws together key information from all the main IT systems. For example, it will take data on research income from the finance office, information on staffing from the human resources database and details of postgraduate numbers from the student records system. The IRMS can be linked to the repository so that it can access all the bibliographic data and research outputs. Using the central repository in this way can lead to resource efficiencies across the institution: without this arrangement, information about research outputs may need to be gathered from several individual departments or research groups.
The institution can also exploit the benefit of having bibliographic experts, often based in the library, checking the data that go into the repository. The quality assurance procedures of the repository workflow provide the consistency and accuracy that are so important for research management and assessment. There will be decisions to make as to whether some functionality would best sit in the IRMS or in the repository – for example, the ability to run reports that assign outputs to assessment units. But wherever these functions sit, it is crucial that the repository is linked to the institutional records so that each output can be mapped to a unique identifier for each author or co-author.
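The mapping described in the last sentence can be sketched very simply: each author name on a deposited output is resolved against a unique institutional staff identifier, with unresolved names flagged for the bibliographic checking described above. The staff table and matching rule here are deliberately simplistic placeholders; real name disambiguation is considerably harder.

```python
# A sketch of linking repository outputs to unique author identifiers so an
# IRMS can aggregate outputs per researcher. Names, identifiers and the
# exact-match rule are placeholders for a much more involved real process.
staff_ids = {
    "smith, j a": "STAFF-0042",
    "jones, p":   "STAFF-0107",
}

def resolve_author(display_name):
    """Return the institutional ID for a name, or None if it needs manual
    disambiguation (the common case with real-world name variants)."""
    return staff_ids.get(display_name.strip().lower())

output_authors = ["Smith, J A", "Jones, P", "Unknown, X"]
for name in output_authors:
    ident = resolve_author(name)
    print(f"{name} -> {ident or 'UNRESOLVED: route to library staff for checking'}")
```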
Additional resources
A Briefing Paper on using repositories for research management and assessment is available here.
In, Digital Repositories Federation International Conference: Open Access and Institutional Repository in Asia-Pacific, Osaka, Japan, 30-31 Jan 2008. Japan, DRIFIC 2008 Steering Committee, 41-42.
10th International Symposium on Electronic Theses and Dissertations, Uppsala, Sweden, 13-16 June 2007, 6pp.
Hey, Jessie M N, White, Wendy, Simpson, Pauline, Brown, Mark and Lucas, Natasha (2006) Fast flows the stream: tackling the workflow challenge with the University of Southampton Research Repository. In, Open Scholarship 2006: New Challenges for Open Access Repositories, Glasgow, UK, 18-20 October 2006.
Interim report (May 2008) of the ARROW-HERDC Working Group (details requirements and candidate models for the way repositories fit into research management and the national research reporting system in Australia)
Draft ERA submission guidelines. Australian Research Council, January 2009.