Metrics and their alter-egos, at #OAI8

Impact Factor must make room for other measures, but which?

From 19 to 21 June, the CERN Workshop on Innovations in Scholarly Communications (OAI8) is being held in Geneva. The first day’s session on research metrics provided a useful overview of new types of measures and the sources of data that ought to be considered for the evaluation of research today.

This article also exists in French ("Les indicateurs bibliométriques et leurs alter-égos, retour sur l’atelier #OAI8 du CERN"), translated by Timothée Froelich.

(Credit: Flickr/sickmouthy)

Right now in Geneva, the 2013 edition of the CERN Workshop on Innovations in Scholarly Communications (OAI8) is taking place. With sessions on gold open access, data and document semantics, and the arts, humanities and social sciences, a variety of subjects will be covered. The event’s opening day yesterday featured an interesting plenary session on an always-hot topic: metrics. Speakers Johan Bollen, Euan Adie and Jelte Wicherts discussed up-and-coming ways of evaluating researchers and their products, along with the promise and pitfalls of each. Thanks to attendees live-tweeting under the #OAI8 hashtag, the rest of us were able to follow along. Below is an overview of the thoughts on metrics to come out of the session.


Johan Bollen (@jlbollen), associate professor at the Indiana University School of Informatics and Computing, opened the session by discussing the notion of researcher “impact”. It is often said, and Bollen agreed, that scholars usually already have a good idea of which researchers, journals, etc. are important in their field. “Impact exists in the scholarly community,” Ellen Collins of the Research Information Network tweeted, “and it is NOT the same thing that is measured by the impact factor.” How, then, to measure the real, yet nebulous, influence of scientists, their articles, and the journals they publish in? Through different avenues, Collins reported: ask scientists directly via surveys and by way of awards; measure correlates of impact (funding, publications, citations); and consider data that reveals their behavior online.
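
To make “correlates of impact” concrete, here is a small sketch, with invented citation counts, of one familiar citation-based indicator, the h-index (not itself the subject of the talk): a researcher has index h if h of their papers have each been cited at least h times.

```python
# One familiar citation-based correlate of impact: the h-index.
# The citation counts below are invented, purely for illustration.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th most cited paper has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with at least 3 citations)
```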

Such online data comes in many forms, and probably represents just as many communities in the cycle of research production and consumption. This data can be transformed into a range of measures, Ellen Collins shared. Those that reflect the real, online activities of the scientific community (readership, downloads, Twitter mentions, etc.) “may become the most accurate indicators of impact,” Peter Morgan tweeted. “Citations may be just a public display of (usually) approval; behavioural data imply what a researcher actually does.” Interestingly, Johan Bollen showed a graph comparing journals’ impact factors with their PageRank values: the correlation is weak.
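
For the curious, here is a rough sketch of how such a comparison can be run. It is not Bollen’s actual analysis: the citation graph and impact factor values below are invented, and the networkx and scipy libraries are assumed to be available.

```python
# A toy, illustrative comparison (not Bollen's actual analysis): compute
# PageRank over a small, invented journal citation network and measure its
# rank correlation with hypothetical impact factor values.
import networkx as nx
from scipy.stats import spearmanr

# Directed citation graph: an edge A -> B means "journal A cites journal B".
G = nx.DiGraph([
    ("J1", "J2"), ("J1", "J3"), ("J2", "J3"),
    ("J3", "J1"), ("J4", "J3"), ("J4", "J2"), ("J2", "J4"),
])

pagerank = nx.pagerank(G, alpha=0.85)  # prestige inferred from network structure
impact_factor = {"J1": 2.1, "J2": 9.4, "J3": 3.7, "J4": 1.2}  # invented IF values

journals = sorted(G.nodes())
rho, _ = spearmanr([pagerank[j] for j in journals],
                   [impact_factor[j] for j in journals])
print(f"Spearman rank correlation, PageRank vs. impact factor: {rho:.2f}")
```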


One correlation that does seem to stand up to the test is the relationship between Twitter mentions and article downloads, a fact that was retweeted vigorously by participants. Large-scale usage makes a very interesting measure of research impact, and different projects, like MESUR and COUNTER, are attempting to define it better and to standardize the way it is calculated.
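
Part of that standardization involves rules such as COUNTER’s “double-click” filtering, under which rapid repeat downloads are counted only once. The sketch below loosely illustrates the idea; the 30-second window, the log format and the names are assumptions for illustration, not the actual Code of Practice.

```python
# A loose illustration of double-click filtering in usage counting (inspired
# by, but not implementing, the COUNTER Code of Practice): repeat downloads
# of the same article by the same user within a short window count once.
from collections import defaultdict

DOUBLE_CLICK_WINDOW = 30  # seconds; assumed value, for illustration only

# (user_id, article_id, unix_timestamp) download events, sorted by time
events = [
    ("u1", "a1", 100), ("u1", "a1", 110),  # second click within 30 s: filtered
    ("u1", "a1", 500),                     # much later: counted again
    ("u2", "a1", 120), ("u2", "a2", 130),
]

counts = defaultdict(int)
last_click = {}  # (user, article) -> timestamp of their most recent click
for user, article, ts in events:
    prev = last_click.get((user, article))
    if prev is None or ts - prev > DOUBLE_CLICK_WINDOW:
        counts[article] += 1  # a countable download
    last_click[(user, article)] = ts

print(dict(counts))  # {'a1': 3, 'a2': 1}
```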

But faced with a multitude of such altmetrics, it’s important to be wary. They may measure different forms of impact and some will be more accurate than others. Furthermore, the second speaker of the session, Euan Adie, founder of altmetric.com, stressed that attention is different from impact, making the important distinction between data collection – like counting Twitter mentions – and metrics. “There are no good altmetrics metrics yet,” posted Herbert Van de Sompel of the Research Library at the Los Alamos National Laboratory.
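
Adie’s distinction is easy to see in code: collecting attention data is the straightforward part; deciding what metric, if any, to build from it is the hard part. The sketch below only fetches raw counts for a single DOI from Altmetric’s public v1 API; the endpoint and field names follow Altmetric’s public documentation but should be treated as assumptions, and the DOI is a hypothetical placeholder.

```python
# Data collection is not a metric: this only *fetches* raw attention counts
# for one DOI from Altmetric's public v1 API. Endpoint and field names are
# taken from Altmetric's public documentation and may have changed.
import requests

def fetch_attention(doi):
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return None  # Altmetric tracks no attention data for this DOI
    resp.raise_for_status()
    data = resp.json()
    return {
        "tweets": data.get("cited_by_tweeters_count", 0),
        "news": data.get("cited_by_msm_count", 0),
        "blogs": data.get("cited_by_feeds_count", 0),
    }

print(fetch_attention("10.1234/example-doi"))  # hypothetical DOI, for illustration
```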

In the end, “altmetrics can’t be a replacement for citation counts,” Ellen Collins tweeted, but they are useful for measuring quality and making people talk about your content. Kara Jones, a research librarian at the University of Bath, shared the thought that “so many metrics end up influencing what they measure – does this mean that metrics will constantly evolve?” No doubt they will, along with the evolving behavior of more and more researchers who are taking their work online.


Find out more:

"How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations", Johan Bollen, et al., PLOS ONE
http://www.mysciencework.com/en/publications/show/342626/span-class-highlight-sentence-em-how-the-scientific-em-em-community-em-em-reacts-to-newly-em-em-submitted-em-em-preprints-articl

"Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact", Gunther Eysenback, Journal of Medical Internet Research

COUNTER - Counting Online Usage of Networked Electronic Resources
http://projectcounter.org/

MESUR - Studying science from large-scale usage data
http://mesur.informatics.indiana.edu/?page_id=2

"The Map Equation" looking at networks in science
http://arxiv.org/abs/0906.1405