
Journal-level metrics

• Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use.
• For this reason, Elsevier promotes the responsible use of research metrics, encapsulated in two "golden rules".
• Those rules are: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input.
• The second rule acknowledges that performance cannot be expressed by any single metric, and that all metrics have specific strengths and weaknesses.
• Multiple complementary metrics can help to provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment.
CiteScore metrics

• CiteScore metrics are a suite of indicators calculated from data in Scopus, the world's leading abstract and citation database of peer-reviewed literature.
• CiteScore is based on the number of citations to documents (articles, reviews, conference papers, book chapters, and data papers) published in a journal over four years, divided by the number of the same document types indexed in Scopus and published in those same four years (see the sketch at the end of this section).
• CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in May of the following year, permitting a real-time view of how the metric builds as citations accrue.
• Once fixed, the other CiteScore metrics are also computed; they contextualise this score with rankings and other indicators to allow comparison.
• CiteScore metrics are:
• Current: a monthly CiteScore Tracker keeps you up to date on progression towards the next annual value, making the next CiteScore more predictable.
• Comprehensive: based on Scopus, the leading scientific citation database.
• Clear: values are transparent and reproducible down to individual articles in Scopus.
• The scores and underlying data for nearly 26,000 active journals, book series and conference proceedings are freely available at www.scopus.com/sources, via a widget (available on each source page on Scopus.com), or via the Scopus API.
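
As a quick illustration, the CiteScore arithmetic can be sketched in a few lines of Python; the citation and document counts below are invented, with real values coming from Scopus.

```python
# A minimal sketch of the CiteScore arithmetic (not Elsevier's implementation);
# the counts below are hypothetical, and real values come from Scopus.

def citescore(citations_4yr: int, documents_4yr: int) -> float:
    """Citations received over a four-year window, divided by the number of
    eligible documents published in that same window."""
    return citations_4yr / documents_4yr

# Hypothetical journal: 12,000 citations to 3,000 documents published 2020-2023
print(citescore(12_000, 3_000))  # 4.0
```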
SCImago Journal Rank (SJR)

• The SCImago Journal Rank (SJR) indicator is a measure of the scientific influence of
scholarly journals that accounts for both the number of citations received by a journal
and the importance or prestige of the journals where the citations come from.
• A journal's SJR indicator is a numeric value representing the average number of
weighted citations received during a selected year per document published in that
journal during the previous three years, as indexed by Scopus.
• Higher SJR indicator values are meant to indicate greater journal prestige. SJR was developed by the SCImago Lab, which originated from a research group at the University of Granada.
• The SJR indicator is a variant of the eigenvector centrality measure used in network
theory. Such measures establish the importance of a node in a network based on the
principle that connections to high-scoring nodes contribute more to the score of the
node.

• The SJR indicator has been developed for use in extremely large and heterogeneous journal citation networks.
• It is a size-independent indicator; its values order journals by their "average prestige per article" and can be used for journal comparisons in science evaluation processes.
• The SJR indicator is a free journal metric inspired by, and using an algorithm similar to, PageRank (see the toy sketch at the end of this section).
• Like CiteScore, SJR accounts for journal size by averaging
across recent publications and is calculated annually.
• SJR is also powered by Scopus data and is freely available
alongside CiteScore at www.scopus.com/sources.
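
As a toy illustration of the prestige principle, the sketch below runs a simple PageRank-style power iteration over a hypothetical three-journal citation network. The actual SJR algorithm (three-year window, damping constants, self-citation caps, size normalisation) is considerably more involved.

```python
# Toy illustration of the idea behind SJR: citations transfer prestige from
# citing journal to cited journal, computed PageRank-style by power iteration.
# All citation counts here are invented.

# cites[i][j] = citations from journal i to journal j (hypothetical)
cites = [
    [0, 10, 2],
    [5, 0, 8],
    [1, 4, 0],
]
n = len(cites)
prestige = [1.0 / n] * n  # start every journal with equal prestige

for _ in range(50):  # iterate until the prestige vector stabilises
    new = [0.0] * n
    for i in range(n):
        out = sum(cites[i])  # total references given by journal i
        for j in range(n):
            # journal i passes prestige to the journals it cites,
            # in proportion to how often it cites them
            new[j] += prestige[i] * cites[i][j] / out
    prestige = new

print(prestige)  # higher value = more prestige-weighted citations received
```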
SNIP: Source-normalized Impact per Paper

• Source-normalized Impact per Paper (SNIP) is a field-normalized assessment of journal impact.
• A SNIP score is the ratio of a source's average citation count to its 'citation potential'.
• Citation potential is measured as the number of citations that a journal would be expected to receive for its subject field.
• Essentially, the longer the reference list of a citing publication, the lower the value of a citation originating from that publication.
• SNIP therefore allows direct comparison between fields of research with different publication and citation practices.
• The Scopus database is the source of the data used to calculate SNIP scores. The raw component of SNIP is calculated as the number of citations given in the present year to publications in the past three years, divided by the total number of publications in the past three years; this ratio is then normalised by the field's citation potential (see the sketch at the end of this section).
• A journal with a SNIP of 1.0 has the median (not mean) number of citations for journals in that field.
• SNIP only considers peer-reviewed articles, conference papers and reviews.
• SNIP scores are available from two databases: CWTS Journal Indicators and Scopus.
• SNIP is calculated annually from Scopus data and is freely available alongside CiteScore and SJR at www.scopus.com/sources.
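
The ratio at the heart of SNIP can be sketched as follows, with invented numbers. The real CWTS algorithm derives citation potential from the reference lists of citing papers; this sketch takes both inputs as given.

```python
# A rough sketch of the SNIP ratio, with hypothetical inputs; the actual CWTS
# algorithm computes citation potential rather than taking it as a parameter.

def snip(raw_impact_per_paper: float, citation_potential: float) -> float:
    """Field-normalised impact: a journal's raw impact per paper divided by
    the citation level expected in its subject field."""
    return raw_impact_per_paper / citation_potential

# Hypothetical: raw impact per paper of 2.4 in a field with citation potential 1.6
print(snip(2.4, 1.6))  # 1.5, i.e. cited 50% above the field's expected level
```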
IPP - Impact Per Publication
• IPP (Impact per Publication), also known as RIP (raw impact per publication), is used to calculate SNIP. IPP is the number of current-year citations to papers from the previous three years, divided by the total number of papers published in those three years (see the sketch below).
• Eigenfactor: the number of times, in the past five years, that articles from a journal have been cited by journals indexed in the Journal Citation Reports (JCR). The Eigenfactor score considers which journals have contributed these citations and removes journal self-citations.
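
The IPP calculation itself is a simple division; a sketch with hypothetical counts:

```python
# Sketch of IPP/RIP with invented numbers: current-year citations to the
# previous three years of papers, divided by the paper count.

def ipp(citations_this_year: int, papers_prev_3yr: int) -> float:
    return citations_this_year / papers_prev_3yr

print(ipp(900, 300))  # 3.0 citations per paper
```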
H-index
• The h-index, proposed by J.E. Hirsch in a 2005 article, is the most widely used research metric.
• It measures the productivity and impact of an author's scholarly output. Tools for calculating your h-index include Web of Science and Google Scholar.
• An author has h-index = h if they have at least h papers that have each been cited at least h times (see the sketch below).
• For example, a researcher with an h-index of 22 has at least 22 papers that have each been cited at least 22 times.
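
The definition translates directly into code; the citation counts in this sketch are invented:

```python
# Computing an h-index from a list of per-paper citation counts.

def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

print(h_index([25, 22, 22, 10, 4, 1]))  # 4: four papers cited at least 4 times
```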
• Advantages of the h-index
• The h-index looks at the cumulative impact of an author's
scholarly output and performance.
• It measures quantity with quality by comparing publications to
citations.
• Several resources, such as Web of Science and Google
Scholar, automatically calculate the h-index as a part of citation
reports for authors.
• Disadvantages of the h-index
• The h-index does not account for the number of authors per article; articles with many authors count the same as articles with just one or a few. Nor does it account for the author's placement in the author list.
• The metric is biased against early-career researchers, who have fewer publications.
• Self-citations can skew results.
• Find an individual author's h-index using the Citation Analysis Report in Web of Science
• Find an individual author's h-index using the Author Profile in Scopus
• Find an individual h-index using Publish or Perish
G-index
• The g-index was proposed by Leo Egghe in his 2006 paper "Theory and practice of the g-index" as an improvement to the h-index. The g-index gives more weight to highly cited articles.
• To calculate the g-index (see the sketch below):
• "[Given a set of articles] ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g² citations."
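
Using the same kind of invented citation list, a sketch of the g-index calculation:

```python
# Computing a g-index: the cumulative citations of the top g papers are
# compared against g squared. Citation counts are invented.

def g_index(citations: list[int]) -> int:
    """Largest g such that the top g papers together have >= g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # running total of citations to the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

print(g_index([25, 22, 22, 10, 4, 1]))  # 5: top 5 papers have 83 >= 25 citations
```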
• Advantages of the g-index
• It accounts for the performance of an author's top articles.
• It helps to make the difference between authors' respective impacts more apparent.
• The inflated values of the g-index give credit to lowly cited or non-cited papers while still rewarding highly cited papers.
• Disadvantages of the g-index
• The g-index was only introduced in 2006, and debate continues over whether it is superior to the h-index.
• It is not as widely accepted as the h-index.
i10-index
• The i10-index, a metric used by Google Scholar, is the number of publications with at least 10 citations among all of the publications listed in your profile.
• This is a very simple metric to calculate (see the sketch below), but it is only available in Google Scholar.
• A Google Scholar profile page displays both the h-index and the i10-index.
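
Finally, the i10-index is a simple threshold count; a sketch over invented citation numbers:

```python
# Counting papers with at least ten citations, per the i10-index definition.

def i10_index(citations: list[int]) -> int:
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

print(i10_index([25, 22, 22, 10, 4, 1]))  # 4 papers with >= 10 citations
```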
