What lies behind the averages and significance of citation indicators in different disciplines?

B.S. Lancho-Barrantes, V.P. Guerrero-Bote, F. Moya-Anegón

Research output: Contribution to journal › Article › peer-review

Abstract

The limitations of citation-based indicators include a lack of coverage, no normalization with respect to the length of reference lists (with a potential bias in favour of reviews), and different citation habits. As a consequence, the distributions of the indicators are not comparable across different disciplines. Here we show that the most popular journal citation indicators used in quality assessment — the journal impact factors of Thomson Scientific and the scientific journal rankings of Scopus — are strongly correlated with the proportion of within-database references, and even more so with the number of within-database recent references per paper. No significant correlations were found with other bibliometric magnitudes. We anticipate that these results will be a starting point for more sophisticated indicator models that take this dependence into account, and for the design of strategies aimed at extending such bibliometric databases as Thomson Scientific’s Science Citation Index or Elsevier’s Scopus to improve their capacity to evaluate all sciences.
Original language: English
Pages (from-to): 371-382
Number of pages: 12
Journal: Journal of Information Science
Volume: 36
Issue number: 3
DOIs
Publication status: Published - 13 Apr 2010
