I have been trying to collate all the different ways citations can be used to measure the performance of journals, authors & institutions with a short description to clarify in my mind how they work and what the differences between them are. I don’t claim that this is comprehensive, but thought I would share in case anyone else finds it useful. Please comment and let me know about any I’ve missed.

Overview

Journal articles cite previous work that their piece builds on, and are then cited themselves. This means that:

  1. Citations can be used as a measure of quality or the impact of that article
  2. These can be aggregated to look at journals, authors and institutions

From an article you can measure:

  • number of citations,
  • citations per year,
  • percentage of documents cited/uncited,
  • citations compared to other articles published in the same journal and/or subject area.

Any citation measure needs to be looked at in context:

  • different subject areas have very different publishing norms, which affect their figures through the number of publications, multi-author papers, numbers of citations and the longevity of citations;
  • researchers at different stages of their careers will have very different profiles.

The REF will use citation data in some panels, but “recognise[s] the limited value of citation data for recently published outputs, the variable citation patterns for different fields of research, the possibility of ‘negative citations’, and the limitations of such data for outputs in languages other than English.” (from guidance on submission http://www.ref.ac.uk/pubs/2011-02/)

Also, social media activity is increasingly being looked at as a measure of impact, so tools and services like Mendeley, CiteULike, Connotea, Twitter, links in Wikipedia, and Facebook and Google shares are all being monitored and collated by different altmetric tools. This measures impact on the wider public, rather than within the academic community.

Journals

Analysing citations to documents published in a specific journal allows you to compare the impact of research published in different journals. This needs to be measured over different timescales for different subject areas, as the speed with which new knowledge is assimilated into the general corpus differs between fields.

Impact Factor (available in Journal Citation Reports)

An impact factor is a measure of the impact a particular journal has had in a specified year. It is calculated by:

The impact factor for year X is the number of citations received in year X by articles published in that journal during the two previous years (X−1 and X−2), divided by the number of “citable” items published in that journal during those two years.
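As a sketch of the calculation (the journal and its numbers are invented for illustration):

```python
def impact_factor(citations_received, citable_items):
    """Two-year impact factor for year X: citations received in year X
    by items the journal published in years X-1 and X-2, divided by
    the number of citable items it published in those two years."""
    return citations_received / citable_items

# A journal whose 200 citable items from the previous two years
# collected 600 citations this year has an impact factor of 3.0:
print(impact_factor(600, 200))  # 3.0
```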

SCImago Journal Rank (available in Scopus)

Based on Google’s PageRank, it expresses the average number of weighted citations received in the selected year by documents published in the selected journal in the three previous years, i.e. weighted citations received in year X by documents published in the journal in years X−1, X−2 and X−3. Citations are weighted according to the perceived quality of the citing journal.
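The prestige weighting works like PageRank: a journal’s weight depends on the weights of the journals citing it, computed iteratively. A toy power-iteration sketch (the real SJR algorithm also normalises by journal size; the three-journal network and all numbers here are invented):

```python
def prestige(M, damping=0.85, iters=100):
    """PageRank-style power iteration.

    M[i][j] = share of journal j's outgoing citations that go to
    journal i; each column of M sums to 1.
    """
    n = len(M)
    rank = [1.0 / n] * n
    for _ in range(iters):
        rank = [(1 - damping) / n
                + damping * sum(M[i][j] * rank[j] for j in range(n))
                for i in range(n)]
    return rank

# Toy three-journal citation network (column-stochastic):
M = [[0.0, 0.5, 0.3],
     [0.6, 0.0, 0.7],
     [0.4, 0.5, 0.0]]
weights = prestige(M)  # journal 1, cited most heavily, ranks highest
```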

SNIP (Source Normalised Impact per Paper) (available in Scopus)

The SNIP indicator uses a source-normalised approach to correct for differences in citation practices between scientific fields. It measures contextual citation impact by weighting citations according to the total number of citations in a subject field.
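A much-simplified sketch of the idea (the real SNIP uses a “relative database citation potential”; the function and numbers here are illustrative only):

```python
def snip(citations_per_paper, field_citation_potential):
    """Simplified SNIP: a journal's raw citations per paper divided by
    its field's citation potential (roughly, how heavily papers in that
    field tend to cite), so heavily-citing fields are scaled down."""
    return citations_per_paper / field_citation_potential

# Two journals with the same raw impact, in fields that cite at
# different rates, get different SNIP values:
print(snip(4, 2.0))  # 2.0  (field with low citation potential)
print(snip(4, 4.0))  # 1.0  (field with high citation potential)
```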

Eigenfactor (Eigenfactor.org)

A journal’s Eigenfactor score is a measure of the journal’s total importance to the scientific community. It uses the structure of the entire citation network (instead of purely local citation information) to evaluate the importance of each journal.

With all else equal, a journal’s Eigenfactor score doubles when the journal doubles in size. Thus a very large journal such as the Journal of Biological Chemistry, which publishes more than 6,000 articles annually, will have an extremely high Eigenfactor score simply because of its size.

Eigenfactor scores are scaled so that the sum of the Eigenfactor scores of all journals listed in Thomson’s Journal Citation Reports (JCR) is 100. In 2006, the journal Nature had the highest Eigenfactor score, 1.992. The top thousand journals, as ranked by Eigenfactor score, all have scores above 0.01.

Article Influence Score (Eigenfactor.org)

A journal’s Article Influence score is a measure of the average influence of each of its articles over the first five years after publication. As such, it is comparable to Thomson Scientific’s Impact Factor. Article Influence scores are normalized so that the mean article in the entire Thomson Journal Citation Reports (JCR) database has an article influence of 1.00.

In 2006, the top journal by Article Influence score was the Annual Review of Immunology, with an Article Influence of 27.454. This means the average article in that journal has roughly twenty-seven times the influence of the mean article in the JCR.

Authors

Looking at all papers published by an individual can give you an overview of their research as a whole.

h-index

An individual has an index h when h of their published papers have at least h citations each, and the rest of their published papers have at most h citations each.

This combines an assessment of both quantity (number of papers) and quality (impact, or citations to those papers).

The h-index can be normalised by the number of authors on a paper & by time since publication to allow better comparison between authors at different points of their careers and across different subject areas.
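As a sketch of the definition (function and variable names are my own), the h-index can be computed by sorting a researcher’s citation counts in descending order:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each.

    citations: list of citation counts, one per paper (any order).
    """
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Five papers with these citation counts give h = 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Dividing the result by the number of years since the researcher’s first publication gives the m-index described below.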

g-index

Given a set of articles ranked in decreasing order of the number of citations they received, the g-index is the (unique) largest number g such that the top g articles together received at least g² citations. This gives more weight to highly cited articles and takes into account increasing citations to those highly cited articles.
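A sketch of the calculation, using the same invented citation counts as the h-index example above:

```python
def g_index(citations):
    """Largest g such that the top g papers together received at
    least g^2 citations (citations: one count per paper, any order)."""
    ranked = sorted(citations, reverse=True)
    total = 0
    g = 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Five papers totalling 30 citations: 30 >= 5^2, so g = 5
# (the same papers have h = 4, showing how g rewards the highly
# cited papers at the top of the ranking):
print(g_index([10, 8, 5, 4, 3]))  # 5
```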

m-index

The h-index divided by the number of years since the researcher’s first publication; useful for comparing researchers at different points in their careers.

Ptop10%

The number of publications per year that fall within the top 10% most cited publications in their field. This is much easier to compare across subject areas as it is relative within the subject.

i10-index

The number of papers that have at least 10 citations.

Journal Actual/ Journal Expected

Ratio of a paper’s actual citation count to the expected count for papers published in the same journal, year and document type, averaged over all articles by that author.

Category Actual/ Category Expected

Ratio of a paper’s actual citation count to the expected count for papers from the same category, year and document type, averaged over all articles by that author.

Percentile in Field

Number of publications with citations within different percentiles, relative to records of the same document type, from the same category, published in the same year. The most cited paper is awarded the lowest percentile (0%) and uncited papers the highest (100%). An average author would have 1% of articles in the top 1%, 5% in the top 5%, etc., and their average percentile would be 50%.
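One possible convention for the percentile calculation (the function name is mine, and real databases handle ties in product-specific ways):

```python
def citation_percentile(paper_citations, cohort_citations):
    """Percentile of a paper within its cohort (same year, document
    type and category): 0% = most cited, 100% = uncited.
    Computed here as the share of cohort papers cited more often."""
    more_cited = sum(1 for c in cohort_citations if c > paper_citations)
    return 100.0 * more_cited / len(cohort_citations)

# A paper with 20 citations in a five-paper cohort sits above all
# but one of its peers:
print(citation_percentile(20, [50, 20, 10, 5, 0]))  # 20.0
```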

Institutions

  • InCites Institutional Comparison measures:
    • Web of Science documents (& % with international collaboration)
    • Impact Relative to Subject Area (average cites of an institution in a subject area compared to the expected impact in the subject area)
    • Impact Relative to Institution (average cites of papers in a field compared to the average cites overall for the institution)
    • % Documents in Subject Area (market share)
    • % Documents in Institution
    • % Documents Cited Relative to Subject Area
    • % Documents Cited Relative to Institution
    • Aggregate Performance Indicator: this metric normalises for period, document type and subject area and is a useful indicator to compare institutions of different age, size and subject focus.
  • SCImago Institutions Rankings looks at:
    • IC (International Collaboration): the ratio of the institution’s output produced in collaboration with foreign institutions, computed by analysing outputs whose affiliations include more than one country address.
    • NI (Normalized Impact): the relationship (in %) between an institution’s average scientific impact and the world average, which is set to a score of 1; i.e. an NI score of 0.8 means the institution is cited 20% below the world average, and 1.3 means 30% above it.
    • Q1 (High Quality Publications): the ratio of the institution’s publications appearing in the most influential scholarly journals in the world, i.e. those ranked in the first quartile (25%) of their categories by the SCImago Journal Rank (SJR) indicator.
    • Spec (Specialization Index): indicates the extent of thematic concentration or dispersion of an institution’s scientific output. Values range from 0 to 1, indicating generalist to specialised institutions respectively. It is computed using the Gini index from economics.
    • Exc (Excellence Rate): the proportion (in %) of an institution’s scientific output that falls within the 10% most cited papers in their respective scientific fields; a measure of the high-quality output of research institutions.
    • Leadership (Scientific Leadership): the institution’s output as main contributor, i.e. the number of papers in which the corresponding author belongs to the institution.
  • Snowball metrics on Output:
    • Scholarly output – Number of documents & number of documents per FTE
    • Citation Count – Number of citations, citations per FTE & citations per document
    • H-index per discipline
    • Field-weighted citation impact – ratio of total citations actually received to the total citations that would be expected based on the average of the subject field.
    • Outputs in top percentiles – number of documents, % of documents and number of documents per FTE in the top 1%, 5%, 10%, 25% of outputs over a rolling 3 year block.
    • Collaboration – Volume and proportion of nationally and internationally co-authored scholarly outputs – number of outputs, % of outputs in that category & number of documents per FTE.
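The field-weighted citation impact in the list above is a simple ratio; as a sketch (names and numbers are illustrative):

```python
def fwci(actual_citations, expected_citations):
    """Field-weighted citation impact: actual citations received
    divided by the citations expected for papers of the same field,
    year and document type. 1.0 = world average for the field."""
    return actual_citations / expected_citations

# A paper with 12 citations where comparable papers average 8 is
# cited 50% above the field average:
print(fwci(12, 8))  # 1.5
```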

Altmetrics

Altmetrics is the field of using social media to assess an article’s impact on the wider public, rather than within the academic community as measured by more traditional citation metrics. These tools can also give researchers feedback more quickly than citations in academic articles, because of the differing turnaround times of the various mediums.

There are a number of different tools being developed in this area that draw on information from places like CiteULike, Mendeley, Facebook, Twitter, Delicious, F1000, SlideShare, Wikipedia, PubMed, CrossRef, etc.

Some tools are:  Altmetric.com; ImpactStory; ReaderMeter (broken?); ScienceCard; PaperCritic (broken?); Plum Analytics
