H-index

The h-index is an author-level metric that measures both the productivity and citation impact of a researcher's publications; it was initially used for an individual scientist or scholar. The h-index correlates with obvious success indicators such as winning the Nobel Prize, being accepted for research fellowships and holding positions at top universities.[1] The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index has more recently been applied to the productivity and impact of a scholarly journal[2] as well as a group of scientists, such as a department, university, or country.[3] The index was suggested in 2005 by Jorge E. Hirsch, a physicist at UC San Diego, as a tool for determining theoretical physicists' relative quality[4] and is sometimes called the Hirsch index or Hirsch number.

Definition and purpose

[Figure: h-index from a plot of numbers of citations for an author's numbered papers, arranged in decreasing order]

The h-index is defined as the maximum value of h such that the given author/journal has published at least h papers that have each been cited at least h times.[5] The index is designed to improve upon simpler measures such as the total number of citations or publications. The index works best when comparing scholars working in the same field, since citation conventions differ widely among different fields.[6]

Calculation

The h-index is the largest number h such that h articles have at least h citations each. For example, if an author has five publications, with 9, 7, 6, 2, and 1 citations (ordered from greatest to least), then the author's h-index is 3, because the author has three publications with 3 or more citations. However, the author does not have four publications with 4 or more citations.

Clearly, an author's h-index can only be as great as their number of publications. For example, an author with only one publication can have a maximum h-index of 1 (if their publication has 1 or more citations). On the other hand, an author with many publications, each with only 1 citation, would also have an h-index of 1.

Formally, if f is the function that maps each publication to its number of citations, we compute the h-index as follows: first, we order the values of f from largest to smallest; then, we look for the last position in which f is greater than or equal to the position (we call this position h). For example, if we have a researcher with 5 publications A, B, C, D, and E with 10, 8, 5, 4, and 3 citations, respectively, the h-index is equal to 4 because the 4th publication has 4 citations and the 5th has only 3. In contrast, if the same publications have 25, 8, 5, 3, and 3 citations, then the index is 3 (i.e. the 3rd position) because the fourth paper has only 3 citations.

f(A)=10, f(B)=8, f(C)=5, f(D)=4, f(E)=3 → h-index=4
f(A)=25, f(B)=8, f(C)=5, f(D)=3, f(E)=3 → h-index=3

If the values of f are ordered from largest to smallest, the h-index can be computed as:

h-index(f) = max { i ∈ ℕ : f(i) ≥ i }
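
This definition translates directly into a short program. As a minimal sketch (not part of the original article), the following Python function sorts the citation counts in decreasing order and returns the last position whose citation count is at least that position, reproducing the two worked examples above:

    def h_index(citations):
        # Order the values of f from largest to smallest
        f = sorted(citations, reverse=True)
        # h is the last 1-based position i with f(i) >= i
        h = 0
        for i, c in enumerate(f, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4
    print(h_index([25, 8, 5, 3, 3]))  # 3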

The Hirsch index is analogous to the Eddington number, an earlier metric used for evaluating cyclists. The h-index serves as an alternative to more traditional journal impact factor metrics in the evaluation of the impact of the work of a particular researcher. Because only the most highly cited articles contribute to the h-index, its determination is a simpler process. Hirsch has demonstrated that h has high predictive value for whether a scientist has won honors like National Academy membership or the Nobel Prize. The h-index grows as citations accumulate and thus it depends on the "academic age" of a researcher.

Input data

The h-index can be determined manually by using citation databases or by using automatic tools. Subscription-based databases such as Scopus and the Web of Science provide automated calculators. Since July 2011, Google has provided an automatically calculated h-index and i10-index within its Google Scholar profiles.[7] In addition, specific databases, such as the INSPIRE-HEP database, can automatically calculate the h-index for researchers working in high energy physics.

Each database is likely to produce a different h for the same scholar, because of different coverage.[8] A detailed study showed that the Web of Science has strong coverage of journal publications, but poor coverage of high-impact conferences. Scopus has better coverage of conferences, but poor coverage of publications prior to 1996; Google Scholar has the best coverage of conferences and most journals (though not all), but like Scopus has limited coverage of pre-1990 publications.[9][10] The exclusion of conference proceedings papers is a particular problem for scholars in computer science, where conference proceedings are considered an important part of the literature.[11] Google Scholar has been criticized for producing "phantom citations," including gray literature in its citation counts, and failing to follow the rules of Boolean logic when combining search terms.[12] For example, the Meho and Yang study found that Google Scholar identified 53% more citations than Web of Science and Scopus combined, but noted that because most of the additional citations reported by Google Scholar were from low-impact journals or conference proceedings, they did not significantly alter the relative ranking of the individuals. It has been suggested that, to deal with the sometimes wide variation in h for a single academic across citation databases, one should assume that false negatives in the databases are more problematic than false positives, and take the maximum h measured for an academic.[13]
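
As an illustration of this "take the maximum" heuristic, the following sketch assumes the per-database h values have already been looked up (the numbers here are invented for the example):

    # Hypothetical h values for one scholar, as reported by different databases
    h_by_database = {"Web of Science": 14, "Scopus": 16, "Google Scholar": 21}

    # Treat missed citations (false negatives) as the bigger problem,
    # so report the maximum h observed across databases
    h_combined = max(h_by_database.values())
    print(h_combined)  # 21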

Examples

Little systematic investigation has been done on how the h-index behaves over different institutions, nations, times and academic fields.[14] Hirsch suggested that, for physicists, a value for h of about 12 might be typical for advancement to tenure (associate professor) at major [US] research universities. A value of about 18 could mean a full professorship, 15–20 could mean a fellowship in the American Physical Society, and 45 or higher could mean membership in the United States National Academy of Sciences.[15] Hirsch estimated that after 20 years a "successful scientist" would have an h-index of 20, an "outstanding scientist" would have an h-index of 40, and a "truly unique" individual would have an h-index of 60.[4]

For the most highly cited scientists in the period 1983–2002, Hirsch identified the top 10 in the life sciences (in order of decreasing h): Solomon H. Snyder, h = 191; David Baltimore, h = 160; Robert C. Gallo, h = 154; Pierre Chambon, h = 153; Bert Vogelstein, h = 151; Salvador Moncada, h = 143; Charles A. Dinarello, h = 138; Tadamitsu Kishimoto, h = 134; Ronald M. Evans, h = 127; and Ralph L. Brinster, h = 126. Among 36 new inductees in the National Academy of Sciences in biological and biomedical sciences in 2005, the median h-index was 57.[4] However, Hirsch noted that values of h will vary among disparate fields.[4]

Among the 22 scientific disciplines listed in the Essential Science Indicators citation thresholds [thus excluding non-science academics], physics has the second most citations after space science.[16] During the period January 1, 2000 – February 28, 2010, a physicist had to receive 2073 citations to be among the most cited 1% of physicists in the world.[16] The threshold for space science is the highest (2236 citations), and physics is followed by clinical medicine (1390) and molecular biology & genetics (1229). Most disciplines, such as environment/ecology (390), have fewer scientists, fewer papers, and fewer citations.[16] Therefore, these disciplines have lower citation thresholds in the Essential Science Indicators, with the lowest citation thresholds observed in social sciences (154), computer science (149), and multidisciplinary sciences (147).[16]

Numbers are very different in social science disciplines: the Impact of the Social Sciences team at the London School of Economics found that social scientists in the United Kingdom had lower average h-indices. The h-indices for ("full") professors, based on Google Scholar data, ranged from 2.8 in law, through 3.4 in political science, 3.7 in sociology, and 6.5 in geography, to 7.6 in economics. On average across the disciplines, a professor in the social sciences had an h-index about twice that of a lecturer or a senior lecturer, though the difference was the smallest in geography.[17]

Advantages

Hirsch intended the h-index to address the main disadvantages of other bibliometric indicators. The total number of papers metric does not account for the quality of scientific publications. The total number of citations metric, on the other hand, can be heavily affected by participation in a single publication of major influence (for instance, methodological papers proposing successful new techniques, methods or approximations, which can generate a large number of citations). The h-index is intended to measure simultaneously the quality and quantity of scientific output.

Criticism

There are a number of situations in which h may provide misleading information about a scientist's output.[18] Some of these failures are not exclusive to the h-index but rather shared with other author-level metrics.

Misrepresentation of data

The h-index does not account for the typical number of citations in different fields. Citation behavior in general is affected by field-dependent factors,[19] which may invalidate comparisons not only across disciplines but even within different fields of research of one discipline.[20] The h-index discards the information contained in author placement in the authors' list, which in some scientific fields is significant, though in others it is not.[21][22] The h-index is a natural number, which reduces its discriminatory power. Ruane and Tol therefore propose a rational h-index that interpolates between h and h + 1.[23]

Prone to manipulation

Weaknesses apply to the purely quantitative calculation of scientific or academic output. Like other metrics that count citations, the h-index can be manipulated by coercive citation, a practice in which an editor of a journal forces authors to add spurious citations to their own articles before the journal will agree to publish them.[24][25] The h-index can be manipulated through self-citations,[26][27][28] and if based on Google Scholar output, then even computer-generated documents can be used for that purpose, e.g. using SCIgen.[29]

Other shortcomings

The h-index has been found in one study to have slightly less predictive accuracy and precision than the simpler measure of mean citations per paper.[30] However, this finding was contradicted by another study by Hirsch.[31] The h-index does not provide a significantly more accurate measure of impact than the total number of citations for a given scholar. In particular, by modeling the distribution of citations among papers as a random integer partition and the h-index as the Durfee square of the partition, Yong[32] arrived at the formula h ≈ 0.54 √N, where N is the total number of citations, which, for mathematics members of the National Academy of Sciences, turns out to provide an accurate (with errors typically within 10–20 percent) approximation of the h-index in most cases.
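
As a quick sketch of Yong's rule of thumb (illustrative only; the 10–20 percent error band reported above still applies):

    import math

    def yong_h_estimate(total_citations):
        # Yong's approximation: h is about 0.54 * sqrt(N),
        # where N is the scholar's total citation count
        return 0.54 * math.sqrt(total_citations)

    # A scholar with 10,000 total citations is estimated at h of about 54
    print(round(yong_h_estimate(10_000)))  # 54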

Alternatives and modifications

Various proposals to modify the h-index in order to emphasize different features have been made.[33][34][35][36][37][38] As the variants have proliferated, comparative studies have become possible, showing that most proposals are highly correlated with the original h-index and therefore largely redundant,[39] although alternative indices may be important for deciding between comparable CVs, as is often the case in evaluation processes. These alternative metrics are applicable to author-level and journal-level rankings.

Applications

Indices similar to the h-index have been applied outside of author-level metrics.

The h-index has been applied to Internet media, such as YouTube channels. It is defined as the number of videos with ≥ h × 10⁵ views. When compared with a video creator's total view count, the h-index and g-index better capture both productivity and impact in a single metric.[40]
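
As a sketch, this video variant can reuse the basic h-index computation, with citation counts replaced by view counts scaled by 10⁵ (the function name and example values are illustrative):

    def video_h_index(view_counts, scale=100_000):
        # Number of videos with at least h * 10^5 views each
        views = sorted(view_counts, reverse=True)
        h = 0
        for i, v in enumerate(views, start=1):
            if v >= i * scale:
                h = i
            else:
                break
        return h

    # Top videos with 1.2M, 450k, 310k and 90k views give h = 3
    print(video_h_index([1_200_000, 450_000, 310_000, 90_000]))  # 3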

A successive Hirsch-type-index for institutions has also been devised.[41][42] A scientific institution has a successive Hirsch-type-index of i when at least i researchers from that institution have an h-index of at least i.
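
The successive index can be computed in the same way as the basic h-index, with researchers' h-indices taking the place of papers' citation counts. A minimal sketch with invented values:

    def successive_index(researcher_h_indices):
        # i researchers from the institution each have an h-index of at least i
        hs = sorted(researcher_h_indices, reverse=True)
        i = 0
        for rank, h in enumerate(hs, start=1):
            if h >= rank:
                i = rank
            else:
                break
        return i

    # Seven researchers with h-indices 23, 14, 9, 6, 5, 3, 1 give i = 5
    print(successive_index([23, 14, 9, 6, 5, 3, 1]))  # 5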

References

  1. ^ Bornmann, Lutz; Daniel, Hans-Dieter (July 2007). "What do we know about the h-index?". Journal of the American Society for Information Science and Technology. 58 (9): 1381–1385. doi:10.1002/asi.20609.
  2. ^ Suzuki, Helder (2012). "Google Scholar Metrics for Publications". googlescholar.blogspot.com.br.
  3. ^ Jones, T.; Huggett, S.; Kamalski, J. (2011). "Finding a Way Through the Scientific Literature: Indexes and Measures". World Neurosurgery. 76 (1–2): 36–38. doi:10.1016/j.wneu.2011.01.015. PMID 21839937.
  4. ^ a b c d Hirsch, J. E. (15 November 2005). "An index to quantify an individual's scientific research output". PNAS. 102 (46): 16569–72. arXiv:physics/0508025. Bibcode:2005PNAS..10216569H. doi:10.1073/pnas.0507655102. PMC 1283832. PMID 16275915.
  5. ^ McDonald, Kim (8 November 2005). "Physicist Proposes New Way to Rank Scientific Output". PhysOrg. Retrieved 13 May 2010.
  6. ^ "Impact of Social Sciences – 3: Key Measures of Academic Influence". LSE Impact of Social Sciences Blog (Section 3.2). London School of Economics. 19 November 2010. Retrieved 19 April 2020.
  7. ^ Google Scholar Citations Help, retrieved 2012-09-18.
  8. ^ Bar-Ilan, J. (2007). "Which h-index? – A comparison of WoS, Scopus and Google Scholar". Scientometrics. 74 (2): 257–71. doi:10.1007/s11192-008-0216-y. S2CID 29641074.
  9. ^ Meho, L. I.; Yang, K. (2007). "Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science vs. Scopus and Google Scholar". Journal of the American Society for Information Science and Technology. 58 (13): 2105–25. doi:10.1002/asi.20677.
  10. ^ Meho, L. I.; Yang, K (23 December 2006). "A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar". arXiv:cs/0612132. (preprint of paper published as 'Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar', in Journal of the American Society for Information Science and Technology, Vol. 58, No. 13, 2007, 2105–25)
  11. ^ Meyer, Bertrand; Choppy, Christine; Staunstrup, Jørgen; Van Leeuwen, Jan (2009). "Research Evaluation for Computer Science". Communications of the ACM. 52 (4): 31–34. doi:10.1145/1498765.1498780. S2CID 8625066.
  12. ^ Jacsó, Péter (2006). "Dubious hit counts and cuckoo's eggs". Online Information Review. 30 (2): 188–93. doi:10.1108/14684520610659201.
  13. ^ Sanderson, Mark (2008). "Revisiting h measured on UK LIS and IR academics". Journal of the American Society for Information Science and Technology. 59 (7): 1184–90. CiteSeerX 10.1.1.474.1990. doi:10.1002/asi.20771.
  14. ^ Turaga, Kiran K.; Gamblin, T. Clark (July 2012). "Measuring the Surgical Academic Output of an Institution: The "Institutional" H-Index". Journal of Surgical Education. 69 (4): 499–503. doi:10.1016/j.jsurg.2012.02.004. PMID 22677589.
  15. ^ Peterson, Ivars (December 2, 2005). "Rating Researchers". Science News. Retrieved 13 May 2010.
  16. ^ a b c d "Citation Thresholds (Essential Science Indicators)". Science Watch. Thomson Reuters. May 1, 2010. Archived from the original on 5 May 2010. Retrieved 13 May 2010.
  17. ^ "Impact of Social Sciences – 3: Key Measures of Academic Influence". Impact of Social Sciences, LSE.ac.uk. 19 November 2010. Retrieved 14 November 2020.
  18. ^ Wendl, Michael (2007). "H-index: however ranked, citations need context". Nature. 449 (7161): 403. Bibcode:2007Natur.449..403W. doi:10.1038/449403b. PMID 17898746.
  19. ^ Bornmann, L.; Daniel, H. D. (2008). "What do citation counts measure? A review of studies on citing behavior". Journal of Documentation. 64 (1): 45–80. doi:10.1108/00220410810844150. hdl:11858/00-001M-0000-0013-7A94-3.
  20. ^ Anauati, Victoria; Galiani, Sebastian; Gálvez, Ramiro H. (2016). "Quantifying the Life Cycle of Scholarly Articles Across Fields of Economic Research". Economic Inquiry. 54 (2): 1339–1355. doi:10.1111/ecin.12292. hdl:10.1111/ecin.12292. ISSN 1465-7295. S2CID 154806179.
  21. ^ Sekercioglu, Cagan H. (2008). "Quantifying coauthor contributions" (PDF). Science. 322 (5900): 371. doi:10.1126/science.322.5900.371a. PMID 18927373. S2CID 47571516.
  22. ^ Zhang, Chun-Ting (2009). "A proposal for calculating weighted citations based on author rank". EMBO Reports. 10 (5): 416–17. doi:10.1038/embor.2009.74. PMC 2680883. PMID 19415071.
  23. ^ Ruane, F.P.; Tol, R.S.J. (2008). "Rational (successive) H-indices: An application to economics in the Republic of Ireland". Scientometrics. 75 (2): 395–405. doi:10.1007/s11192-007-1869-7. hdl:1871/31768. S2CID 6541932.
  24. ^ Wilhite, A. W.; Fong, E. A. (2012). "Coercive Citation in Academic Publishing". Science. 335 (6068): 542–3. Bibcode:2012Sci...335..542W. doi:10.1126/science.1212540. PMID 22301307. S2CID 30073305.
  25. ^ Noorden, Richard Van (February 6, 2020). "Highly cited researcher banned from journal board for citation abuse". Nature. 578 (7794): 200–201. Bibcode:2020Natur.578..200V. doi:10.1038/d41586-020-00335-7. PMID 32047304.
  26. ^ Gálvez, Ramiro H. (March 2017). "Assessing author self-citation as a mechanism of relevant knowledge diffusion". Scientometrics. 111 (3): 1801–1812. doi:10.1007/s11192-017-2330-1. S2CID 6863843.
  27. ^ Bartneck, Christoph; Kokkelmans, Servaas (2011). "Detecting h-index manipulation through self-citation analysis". Scientometrics. 87 (1): 85–98. doi:10.1007/s11192-010-0306-5. PMC 3043246. PMID 21472020.
  28. ^ Ferrara, Emilio; Romero, Alfonso (2013). "Scientific impact evaluation and the effect of self-citations: Mitigating the bias by discounting the h-index". Journal of the American Society for Information Science and Technology. 64 (11): 2332–39. arXiv:1202.3119. doi:10.1002/asi.22976. S2CID 12693511.
  29. ^ Labbé, Cyril (2010). Ike Antkare one of the great stars in the scientific firmament (PDF). Laboratoire d'Informatique de Grenoble RR-LIG-2008 (technical report) (Report). Joseph Fourier University.
  30. ^ Lehmann, Sune; Jackson, Andrew D.; Lautrup, Benny E. (2006). "Measures for measures". Nature. 444 (7122): 1003–04. Bibcode:2006Natur.444.1003L. doi:10.1038/4441003a. PMID 17183295. S2CID 3099364.
  31. ^ Hirsch, J. E. (2007). "Does the h-index have predictive power?". PNAS. 104 (49): 19193–98. arXiv:0708.0646. Bibcode:2007PNAS..10419193H. doi:10.1073/pnas.0707962104. PMC 2148266. PMID 18040045.
  32. ^ Yong, Alexander (2014). "Critique of Hirsch's Citation Index: A Combinatorial Fermi Problem" (PDF). Notices of the American Mathematical Society. 61 (11): 1040–1050. arXiv:1402.4357. doi:10.1090/noti1164. S2CID 119126314.
  33. ^ Batista, P. D.; et al. (2006). "Is it possible to compare researchers with different scientific interests?". Scientometrics. 68 (1): 179–89. arXiv:physics/0509048. doi:10.1007/s11192-006-0090-4. S2CID 119068816.
  34. ^ Sidiropoulos, Antonis; Katsaros, Dimitrios; Manolopoulos, Yannis (2007). "Generalized Hirsch h-index for disclosing latent facts in citation networks". Scientometrics. 72 (2): 253–80. CiteSeerX 10.1.1.76.3617. doi:10.1007/s11192-007-1722-z. S2CID 14919467.
  35. ^ Vaidya, Jayant S. (December 2005). "V-index: A fairer index to quantify an individual's research output capacity". BMJ. 331 (7528): 1339–c–40–c. doi:10.1136/bmj.331.7528.1339-c. PMC 1298903. PMID 16322034.
  36. ^ Katsaros, D.; Sidiropoulos, A.; Manolopoulos, Y. (2007). "Age Decaying H-Index for Social Network of Citations". Proceedings of the Workshop on Social Aspects of the Web, Poznań, Poland, April 27, 2007.
  37. ^ Anderson, Thomas R.; Hankin, Robin K. S.; Killworth, Peter D. (12 July 2008). "Beyond the Durfee square: Enhancing the h-index to score total publication output". Scientometrics. 76 (3). Springer Science and Business Media LLC: 577–588. doi:10.1007/s11192-007-2071-2. ISSN 0138-9130.
  38. ^ Baldock, Clive; Ma, Ruimin; Orton, Colin G. (5 March 2009). "The h index is the best measure of a scientist's research productivity". Medical Physics. 36 (4). Wiley: 1043–1045. Bibcode:2009MedPh..36.1043B. doi:10.1118/1.3089421. ISSN 0094-2405. PMID 19472608.
  39. ^ Bornmann, L.; et al. (2011). "A multilevel meta-analysis of studies reporting correlations between the h-index and 37 different h-index variants". Journal of Informetrics. 5 (3): 346–59. doi:10.1016/j.joi.2011.01.006.
  40. ^ Hovden, R. (2013). "Bibliometrics for Internet media: Applying the h-index to YouTube". Journal of the American Society for Information Science and Technology. 64 (11): 2326–31. arXiv:1303.0766. doi:10.1002/asi.22936. S2CID 38708903.
  41. ^ Kosmulski, M. (2006). "I – a bibliometric index". Forum Akademickie. 11: 31.
  42. ^ Prathap, G. (2006). "Hirsch-type indices for ranking institutions' scientific research output". Current Science. 91 (11): 1439.