Performance metrics
Research is already assessed on a number of criteria, and with the Web now providing the opportunity to develop new tools and techniques for measuring 'things to do with research', the list of possible assessment criteria is growing. Of course, assessment of an individual's performance for, say, tenure will take into account things such as grants awarded, prizes and medals, patents, student mentoring, teaching duties, offices held, and other measures of contribution towards institutional life. Qualitative measures may include collaborations, some form of peer review, responsibilities and nowadays some of the so-called Web 2.0-enabled activities (social network indicators).
Bibliometrics
As well as these, attention will be paid to bibliometric measures that reflect the individual's publication record - the trail of evidence of his or her research output.
The Journal Impact Factor (JIF)
Until recently, only one such measure was in ubiquitous use: the Journal Impact Factor. The name alone signals how absurd it is to apply this metric to the performance of an individual: it measures the impact of journals, not people. Yet its significance in shaping publishing practices, directions of research, funding and academic careers cannot be overestimated. Its use has been widespread and it has been employed in the most misguided of ways, even at national research policy level.
The JIF is a metric developed by the Institute for Scientific Information (ISI, now part of Thomson Reuters). For a given year, it is calculated by taking the citations received in that year by the articles a journal published in the previous two years and dividing that total by the number of citable articles the journal published in those two years. It is published for around 9000 journals every year in the Journal Citation Reports, a listing eagerly awaited by publishers, journal editors and editorial boards because a culture of competing on JIFs has grown up in this community. To have a good JIF is considered a measure of success, despite all the flaws of the system and the opportunities for manipulation that exist.
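As a rough illustration, the two-year calculation can be sketched in a few lines of Python. This is a minimal sketch with made-up numbers; the real figures come from ISI's curated citation database and its rules about which items count as 'citable'.

```python
def journal_impact_factor(citations_by_year, articles_by_year, year):
    """Two-year Journal Impact Factor for a given year.

    citations_by_year[y] : citations received in `year` to items the
                           journal published in year y
    articles_by_year[y]  : number of citable articles published in year y
    """
    cites = citations_by_year[year - 1] + citations_by_year[year - 2]
    items = articles_by_year[year - 1] + articles_by_year[year - 2]
    return cites / items

# Illustrative numbers only: a journal whose 2009 and 2010 articles
# received 150 + 90 citations during 2011, having published 60 + 40
# citable articles in those two years.
jif_2011 = journal_impact_factor(
    citations_by_year={2009: 150, 2010: 90},
    articles_by_year={2009: 60, 2010: 40},
    year=2011,
)
print(round(jif_2011, 2))  # 2.4
```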
And to have a good JIF - and there is no such thing as a straightforwardly good JIF, only a relatively good one, since the measure is relative - may be a fine thing for a journal and its publishers and editors. It is not a good thing for authors, since authors cannot have a JIF: it is a journal-based metric. Yet authors are measured, commonly and seriously, on the JIF of the journals in which they publish their work. The absurdity of this can be likened to awarding a candidate a university place on the basis of how many other students from his high school have been awarded places. It ignores the performance of the individual and rewards him or her on the basis of a collective measure.
New bibliometrics
The JIF was a metric developed in the era when journals existed only in print form and there was only one database large and comprehensive enough to use for such a calculation - the one assembled by ISI. ISI still produces the Journal Citation Reports each year, but other, new bibliometrics are also emerging now that substantial bodies of literature are held in collections elsewhere. The growing Open Access literature provides huge opportunities in this respect. If all research outputs are open to analysis, useful new measures can be developed encompassing not only research papers but also datasets and other types of output from research activity. This is an exciting area for future development. But even with a focus just on research articles, there are many things that can be done to assess impact now that the Open Access literature is growing.
There are two main bases for developing measures for the research literature:
- usage-based metrics (data generated through measuring user activity)
- citation-based metrics (data generated by measuring author activity: the JIF is one metric in this category)
Usage metrics
Some examples of usage metrics are:
- COUNTER statistics: COUNTER is a publishing industry standard developed to ensure that the usage statistics provided by publishers to libraries, for each journal to which the library subscribes, are consistent and in a form that allows comparisons and analyses. COUNTER statistics measure at journal level only at present, though article-level usage data may be available in future
- The usage-based measures developed by the MESUR Project carried out by the LANL laboratory. MESUR has defined and validated a range of usage metrics and produced guidelines and recommendations for their application
- Repository usage measures. These systems record usage of materials in Open Access repositories so that authors can see how much (and in some cases where) their articles are being downloaded. Examples include LogEc (which measures usage of the RePEc economics Open Access collection), AWStats and Google Analytics (which are general website-usage analysers) and Interoperable Repository Statistics (an open source program from the EPrints team at Southampton University specifically designed for measuring usage of Open Access repositories); a minimal sketch of this kind of download counting follows this list
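To give a flavour of what such repository statistics packages do, the sketch below tallies full-text downloads per item from a web-server access log. The log format and the /download/<item-id> URL pattern are hypothetical, and real tools typically also filter out robots and repeated clicks.

```python
import re
from collections import Counter

# Hypothetical URL pattern: successful (HTTP 200) GETs of /download/<item-id>/...
DOWNLOAD_PATTERN = re.compile(r'"GET /download/(\d+)[^"]*" 200 ')

def downloads_per_item(log_lines):
    """Count apparent full-text downloads per repository item."""
    counts = Counter()
    for line in log_lines:
        match = DOWNLOAD_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample_log = [
    '10.0.0.1 - - [01/May/2011:10:00:01] "GET /download/1234/paper.pdf HTTP/1.1" 200 50123',
    '10.0.0.2 - - [01/May/2011:10:03:12] "GET /download/1234/paper.pdf HTTP/1.1" 200 50123',
    '10.0.0.3 - - [01/May/2011:10:05:40] "GET /download/9876/thesis.pdf HTTP/1.1" 200 80001',
]
print(downloads_per_item(sample_log))  # Counter({'1234': 2, '9876': 1})
```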
Citation metrics
Some examples of citation-analysis systems are:
- h-index: developed by Hirsch, this is a simple measure that relates an individual's published outputs to the number of citations they gather and computes them into a single metric: an author's h-index is the largest number h such that h of his or her papers have each received at least h citations (see the sketch after this list)
- g-index: developed by Egghe, a variant of the h-index that gives more weight to an author's most highly cited papers. Other modifications of the h-index also exist
- Eigenfactor, now used by Thomson Reuters as part of their author service offering
- Y-factor (developed by the LANL laboratory)
- Harzing's Publish-or-Perish service for measuring an individual's citation impact
- CiteSeer, a program analysing citations to the Open Access computer science literature
- CitEc, a program analysing citations to the Open Access economics literature
- Citebase, a program that works on any body of Open Access literature but currently operating across the physics and cognitive science Open Access literature
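To make the h-index and g-index concrete, the sketch below computes both from a list of per-paper citation counts. It is an illustrative, minimal implementation, not the code of any of the services listed above.

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citation_counts):
    """Largest g such that the g most-cited papers together have >= g*g citations."""
    counts = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

papers = [25, 8, 5, 3, 3, 1, 0]  # citation counts for one author's papers
print(h_index(papers))           # 3 (there are not 4 papers with 4+ citations)
print(g_index(papers))           # 6 (top 6 papers have 45 citations, 45 >= 36)
```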
The Open Access literature also provides opportunities for the development of a much richer array of bibliometrics. Indices of citation latency (how long articles continue to attract citations), immediacy (how soon citations occur), citation decay (the pattern of citations to an article over time), cited-by and co-citation measures and so forth will be tools that enable bibliometricians to explore the literature in new ways and gain greater understanding of how research is communicated, especially when coupled with semantic analysis technologies. This understanding will help to improve research communication in the future.
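As an illustration of how timing-based indicators of this kind might be computed from citation records, here is a minimal sketch; the definitions of 'immediacy' and 'longevity' used below are illustrative only, not standardised measures.

```python
def citation_timing(pub_year, citation_years):
    """Simple timing indicators from an article's publication year and
    the years in which its citations occurred."""
    if not citation_years:
        return {"immediacy": None, "longevity": None}
    return {
        # years from publication to the first citation
        "immediacy": min(citation_years) - pub_year,
        # span of years over which the article has continued to attract citations
        "longevity": max(citation_years) - min(citation_years),
    }

print(citation_timing(2005, [2006, 2007, 2011]))
# {'immediacy': 1, 'longevity': 5}
```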
Research indicators in development
A number of large-scale projects are underway to study the potential for the development of new indicators in different areas of research. Some examples include:
- The Humanities Indicators Project run by the National Humanities Alliance in the US
- The European Educational Research Quality Indicators Project, funded by the European Union
- The European Reference Index for the Humanities, funded by the European Science Foundation
See also:
Citation impact