Quotation Statistics and Culture in Literature and in Other Humanist Disciplines
Remigius Bunia
1 Introduction
We face a fascinating, yet strange contradiction in the humanities: On the one hand,
they disapprove of any bibliometric assessments of academic performance, and, on
the other hand, they cherish quotations as a core component of their academic cul-
ture. Their dissatisfaction with quantitative bibliometrics may seem to be a mere
matter of principle: The humanities are supposed to avoid numbers wherever they
can. But this explanation would be much too simple to account for the intricacies of the quotation culture in the humanities. What is odd is the fact that many
disciplines in the humanities quote but do so very rarely. Particularly, Literature¹ shows a strong dislike for a systematic compilation of references. Literature is an

¹ I use the term Literature (uppercase) for all related disciplines: Literary Studies, German
Quotations have always been part of the core techniques in Literature. Let me give
a short historic overview (for a more detailed version and for references, see Bunia
2011b). Even before the surge of modern science, all philosophical disciplines quoted
the ‘authorities’ and, thus, worshipped canonized authors. Book titles were even
invented because Aristotle needed to quote himself (cf. Schmalzriedt 1970). With the
advent of the rationalist and empiricist movements in the 17th century, with René Descartes and Francis Bacon as their respective icons, novelty became prestigious in all disciplines, and both scholars and scientists started quoting their peers rather than the ancient authorities. Not until the late 19th century did quotation that completely covers the field become a moral obligation. Before that, it was sufficient to cite what lay at hand; it was not the researcher’s task to demonstrate conspicuously that he was up to date. The increase in publications led to new worries and finally created the need for citation analysis, as pioneered by Eugene Garfield.
In Literature, it has always been mandatory to quote as much as possible to prove
that one is well read. In fact, ‘monster footnotes’ (Nimis 1984) are particularly
popular in the humanities: they consist of lengthy enumerations of papers related
to the topic of the citing paper (see also Hellqvist 2010, pp. 313–316). As Hüser
(1992) notes, an impressively long list of references is one of the most important
prerequisites for a doctoral dissertation to be accepted in Literature. These observa-
tions are not in conflict with the (very debatable) claim that the humanities, in general, do not aim to convey pieces of ‘positive’ knowledge (MacDonald 1994), since it
does not matter whether one quotes to present knowledge or more obscure forms of
excellence. Since the broad usage of references came up in the 19th century, when
humanist disciplines tried to become ‘scientific’ (Hellqvist 2010, p. 311), the differ-
ence between the humanities and the sciences should not be taken to be very strong.
In brief, literary scholars are usually expected to quote one another extensively, not
to omit any possible reference, and to provide comprehensive lists of preceding
publications.
Many disciplines limit the obligation to quote comprehensively to recent years
and choose other forms of worship for their great minds (e.g. names of theorems
in mathematics, see Bunia 2013). Contrary to this practice, literary scholars often
cite old canonical works, thus evoking the very roots of their approach. Even more
frequent is the practice of using quotations to signal the in-group the scholar belongs
to (see Bunia and Dembeck 2010). This is why publications in Literature (in fact, in
all disciplines in the humanities) tend to include large lists of old texts.
Two practices challenge my short outline. First, literary scholars also quote the
objects of their investigation, e.g. literary, philosophical, or other texts. These appear
in the references, too, thus complicating the analysis (see Sect. 3.3). Second, in very
conservative circles—and, fortunately, such circles are not numerous—highly estab-
lished professors are no longer expected to quote unknown young scholars; they
restrict their open quotations to peers equal in rank and to classic authors such as
Aristotle (see Bunia 2013).
Reputation is highly important (see Luhmann 1990 [Reprint 1998], p. 247;
Ochsner et al. 2013, pp. 83, 84, in particular, item 14 ‘Research with reception’).
As is the case in most disciplines, literary scholars hold intellectual impact on their
own community in high esteem (Hug et al. 2013, pp. 374 and 382, for English Lit-
erature and German Literature). This is one of the criteria to be used to judge young
researchers’ performance. Intellectual influence becomes manifest in quotations.
In sum, citation analysis should be a method adequate to the disciplinary traditions
of Literature.
The most widespread criticism advanced by scholars in the humanities attacks biblio-
metric analysis for its inability to measure quality. Unfortunately, this attack suffers
from a basic misconception. First, it neglects the circumspection that fuels much
of the bibliometric debate. For instance, bibliometric research papers are replete
with doubts, questionings and reservations about using bibliometric parameters to
rate an individual researcher’s intellectual performance (e.g. Bornmann 2013). The
central misapprehension, however, is the product of a more fundamental skepticism
that asks: How is it possible that quantitative analysis can account for qualitative
evaluations? Consequently, bibliometric analyses are thought to be structurally inad-
equate to express qualitative judgments.
This deduction is a misconception of citation analysis because it ignores the
abstract separation of qualitative judgments and their mapping on quotations. When
we look at the impact system prevalent in many disciplines, such as Medicine, we
see that the qualitative assessment takes place in peer review. This process is not
influenced or even compromised by the impact factor culture (see also Bornmann
2013, p. 3). Of course, the impact factor culture produces, stabilizes and usually
boosts the differentiation between journals. The effect is that some journals receive
the most attention and the best submissions because these journals have the biggest
impact. This eventually means that these journals can have the most rigorous selection
process. The decisive factors within the selection process remain ‘qualitative’, that
is, they are not superseded by mathematical criteria. This is why all peer review
systems have been repeatedly demonstrated to be prone to failure (see the editorial
by Rennie 2002; see also Bohannon 2013).
For review processes to warrant optimal evaluation, it is mandatory that the review
process rely on accepted and mutually intelligible criteria. The problems with peer
review result from the imperfections of the process: careless reviewers, practical
limits of verifiability, or missing criteria. Slightly neglectful reviewers do not impair the review process to a dramatic degree; the review process must, however, not be mistaken, as it has been in the past, for a surrogate for replication. The combination
of peer review and bibliometrics provides a suitable technique to map qualitative
evaluations on quantities.
However, the situation is the inverse if disciplinary standards of assessment are
deficient. If shared criteria of evaluation are weak and if parochialism prevails, peer
review can have negative effects on the average quality of evaluations (Squazzoni
and Gandelli 2012, p. 273). As a consequence, the humanist disciplines that oppose
bibliometrics might be right in doing so—but for the wrong reasons: The only sensible
reason to object to bibliometric assessment is to admit an absence of qualitative
criteria.
The disciplines in the humanities feel increasing pressure from funding agencies
and governments to disclose their strategies of evaluation (cf. Wiemer 2011). Owing to the widespread and virtually unanimous refusal to participate in common ranking systems such as those provided by bibliometric analysis, the European Science Foundation (http://www.esf.org) initiated the European Reference Index for the Humanities (ERIH) project. The project decisively dismisses all statistical approaches as inadequate for the humanities and replaces them with a survey conducted among highly
distinguished scholars who were asked to name the most prestigious journals in their
respective fields. The result is a list grouped into three categories: ‘INT1’, ‘INT2’
and ‘NAT’. This order indicates the (descending) importance of the journals in the
respective category. Again, quite resolutely, the list is declared not to be a ranking: ‘[Question:] Is ERIH a ranking system? [Answer:] ERIH is not a billiometric [sic] tool or
a reanking [sic] system. The aim of ERIH is to enhance the global visibility of high-
quality research in the Humanities across all of Europe and to facilitate access to
research journals published in all European languages; it is not to rank journals or
articles’ (European Science Foundation 2014). Compiled by only four to six Euro-
pean scholars per discipline, the list is not undisputedly acknowledged; as far as I
know, it is not even widely known.
Garfield himself has always pointed out that the citation analysis of journals refers
only to the usage of a published text; it does not say anything about approval or
disapproval, nor does it assess the quality of a paper (Garfield 1979, p. 148). He further notes that the citation network allows its users to see what new developments emerge. It thus enables them to focus on prevalent trends. This idea can be put
differently: High quotation rates and dense subnets show a strong cohesion of the
group.
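This reading can be made operational. The sketch below (the journals and citation links are invented for illustration; the measures are standard graph statistics, not Garfield’s own procedure) computes two simple cohesion indicators for a toy citation network:

```python
# A toy citation graph; nodes and edges are invented for illustration.
import networkx as nx

G = nx.DiGraph()
# An edge (a, b) means "a paper in journal a cites a paper in journal b".
G.add_edges_from([
    ("A", "B"), ("B", "A"), ("A", "C"),
    ("C", "B"), ("B", "C"), ("C", "A"),
    ("D", "A"),  # D cites into the cluster but receives nothing back
])

# Density: the share of possible directed links actually present.
print(f"density: {nx.density(G):.2f}")

# Clustering on the undirected skeleton: how often a node's neighbours
# also cite one another, i.e. how 'dense' the subnets are.
print(f"clustering: {nx.average_clustering(G.to_undirected()):.2f}")
```

High values of both indicators would correspond to the dense subnets mentioned above; low values to a loose aggregate of mutually indifferent authors.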
There may be two main reasons for the cohesion that becomes visible because of
the quotation network. (1) First, it can derive from shared convictions about scien-
tific rigor. Only publications that comply with the methodological demands of the
respective discipline will have a chance to be cited. Regardless of the quality, origi-
nality and importance of the paper, cohesion makes the author belong to the specific
group. Anecdotally, Kahneman reports that his success in Economics is due to only
one improbable and lucky event: one of his articles being accepted in an important
economics (rather than psychology) journal (Kahneman 2011, p. 271). In this first
case, cohesion warrants at least minimal standards of scientific procedure. (2) Then
again, cohesion can simply result from a feeling of mutual affection and enthusi-
asm. In this second case, the cohesion comes first and stabilizes itself. It relies on
the well-known in-group bias, i.e. the preference for one’s own group. For example,
members of pseudoscientific communities will cite one another (such as followers of
homeopathy). If such a group is large enough, it will produce high quotation levels.
As a consequence, impressive quotation rates do not say what kind of agreement
or conformity a respective group chooses as its foundation. It can be scientific rigor;
but it can also be anything else. This conclusion is not new and not important for
my argument. However, its reverse is. If a group shows low quotation levels, it
necessarily lacks cohesion. It possesses neither clear standards of methodological
rigor nor a feeling of community.
3.2 Results
Let us examine the citation analysis provided by Scopus for the subject category
Literature and Literary Theory and the year 2012 (see Table 1). The absolute numbers
of the top five most influential journals are strikingly low. The top journal, Gema
Online Journal of Language Studies, which, by the way, I had never heard of before,
does not appear in the ERIH ranking at all (Sect. 2.3). This journal is ranked first
with regard to the SJR2 indicator implemented by Scopus. The strange phenomenon
is easily explained: The journal focuses on linguistics; in the respective ranking
(‘Language and Linguistics’), it holds only position 82. Since it sometimes publishes articles in Literature, too, it is included in both lists; and since the SJR2 indicator does not detect disciplinary boundaries, a comparatively mild impact in Language and Linguistics can make it the most prestigious journal in Literature and Literary Theory. Presumably, it is the small numbers involved in quotations in Literature and Literary Theory that allow an interdisciplinary journal to move to the first position.

Table 1 The five highest ranking publications in the subject category Literature and Literary Theory in 2012 (citation data by Scopus)

| Title | SJR | H index | Total Docs. (2012) | Total Docs. (3 years) | Total Refs. | Total Cites (3 years) | Citable Docs. (3 years) | Cites/Doc. (2 years) | Refs./Doc. |
|---|---|---|---|---|---|---|---|---|---|
| Gema Online Journal of Language Studies | 0.470 | 6 | 72 | 71 | 1,870 | 85 | 67 | 1.25 | 25.97 |
| New Literary History | 0.416 | 9 | 38 | 142 | 1,659 | 68 | 132 | 0.61 | 43.66 |
| Shakespeare Quarterly | 0.366 | 7 | 16 | 68 | 940 | 21 | 61 | 0.21 | 58.75 |
| College Composition and Communication | 0.353 | 12 | 30 | 160 | 1,006 | 59 | 138 | 0.42 | 33.53 |
| Journal of Biblical Literature | 0.343 | 8 | 38 | 143 | 2,762 | 34 | 141 | 0.22 | 72.68 |

Note SCImago Journal and Country Rank, JOURNAL RANKING: Subject Area: All, Subject Category: Literature and Literary Theory, Country: All, Year: 2012
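The buoyancy effect itself is easy to reproduce in a toy model. The sketch below is not the actual SJR2 computation (see Guerrero-Bote and Moya-Anegón 2012 for that), but a bare PageRank-style power iteration over an invented citation matrix; it shows how a modest citation inflow from a large neighbouring field can lift a small interdisciplinary journal above a purely literary one:

```python
import numpy as np

# Invented citation matrix C[i, j]: citations from journal i to journal j.
# Journals 0-2: large linguistics journals; journal 3: a small
# interdisciplinary journal; journal 4: a pure Literature journal.
C = np.array([
    [0, 40, 40, 10, 0],
    [40, 0, 40, 10, 0],
    [40, 40, 0, 10, 0],
    [2, 2, 2, 0, 2],
    [0, 0, 0, 2, 0],
], dtype=float)

T = C / C.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
n, d = len(C), 0.85                   # damping factor as in PageRank
p = np.full(n, 1.0 / n)
for _ in range(100):                  # power iteration to the fixed point
    p = (1 - d) / n + d * (p @ T)

print(p.round(3))  # journal 3 ends up well above journal 4
```

In this invented example, the interdisciplinary journal inherits prestige from the large neighbouring field even though the purely literary journal cites and is cited far less; the real SJR2 computation is considerably more refined, but the mechanism is the same.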
The second journal might be worth a closer look. New Literary History belongs
to the highest ERIH category (‘INT1’); personally, I would have guessed it might be
among the top journals. This prestigious periodical, however, does not seem to be
quoted very often, if one inspects the numbers provided by Scopus (see Table 2). For
the 142 articles published between 2009 and 2011, only 68 citations were found. If
one takes into account the small share of documents cited at all, viz. 26 % for this time window (37 of 142 documents, see Table 2), the hypothesis seems acceptable that these few citations concentrate on a few articles. The only indisputable inference is the mean citation frequency: the cited articles garner roughly two citations each on average (68 citations for 37 cited documents), which amounts to barely 0.5 citations per published article.
It is possible to compare these numbers to those of the most influential journal
in Medicine (as ranked by the SJR2 indicator again), the New England Journal of
Medicine. In the same time window (i.e. 2009–2012), we find 5,479 articles and
65,891 citations; on average, an article garnered 12 citations, and 46 % of these
articles were cited within the time window.
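The arithmetic behind these comparisons is elementary; a minimal sketch using only the figures quoted above:

```python
# Figures quoted above (Scopus/SCImago; publications 2009-2011,
# citations counted through 2012).
nlh_docs, nlh_cites, nlh_cited = 142, 68, 37    # New Literary History
nejm_docs, nejm_cites = 5479, 65891             # New England J. of Medicine

print(f"NLH:  {nlh_cites / nlh_docs:.2f} cites per article,"
      f" {nlh_cited / nlh_docs:.0%} of articles cited,"
      f" {nlh_cites / nlh_cited:.1f} cites per cited article")
print(f"NEJM: {nejm_cites / nejm_docs:.1f} cites per article")
```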
With New Literary History, I have discussed one of the journals that at least do receive some attention (in terms of citation analysis). Let us turn to Poetica, one
of the most prestigious German journals. Within the ERIH ranking, Poetica, too,
belongs to the highest category, ‘INT1’. Yet it ranks only 313th in the Scopus list.
The more detailed numbers are disconcerting (see Table 3). Between 2009 and 2011,
the journal published altogether 48 articles, among which only three received at least
one citation (within this time window). In the long run, the quotation ratio never
exceeds 16 %; but the 6 %, which can be found in three columns (2006, 2007, 2012),
is not an exception. More astonishingly, only four citations were found. This is to
say that two articles garnered exactly one citation, and one article can be proud to
have been cited twice.
The problems that I mention apply to all entries in the ranking. On the one hand,
the absolute numbers are so low that small changes affect the position of journals;
on the other hand, interdisciplinary journals automatically move up (this effect could
be dubbed ‘cross-listing buoyancy’). The ranking does not reflect the ‘qualitative’
assessment of the European Science Foundation. These figures have significance only insofar as they show that quotations in Literature are rare.
Table 2 Development of citations between 2004 and 2012 for the high ranking international journal New Literary History (data by Scopus)

| Indicators | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |
|---|---|---|---|---|---|---|---|---|---|
| SJR | 0.166 | 0.129 | 0.236 | 0.218 | 0.202 | 0.112 | 0.255 | 0.361 | 0.416 |
| Total Docs. | 39 | 38 | 49 | 41 | 43 | 55 | 47 | 40 | 38 |
| Total Docs. (3 years) | 76 | 115 | 117 | 126 | 128 | 133 | 139 | 145 | 142 |
| Total References | 1,359 | 946 | 1,360 | 1,572 | 1,506 | 1,642 | 1,857 | 1,386 | 1,659 |
| Total Cites (3 years) | 26 | 21 | 36 | 31 | 29 | 17 | 40 | 62 | 68 |
| Citable Docs. (3 years) | 74 | 110 | 110 | 114 | 116 | 120 | 127 | 134 | 132 |
| Cites/Doc. (4 years) | 0.35 | 0.19 | 0.32 | 0.30 | 0.22 | 0.20 | 0.31 | 0.46 | 0.48 |
| Cites/Doc. (2 years) | 0.35 | 0.15 | 0.33 | 0.31 | 0.24 | 0.19 | 0.19 | 0.57 | 0.61 |
| References/Doc. | 34.85 | 24.89 | 27.76 | 38.34 | 35.02 | 29.85 | 39.51 | 34.65 | 43.66 |
| Cited Docs. | 22 | 19 | 29 | 25 | 24 | 13 | 29 | 39 | 37 |
| Uncited Docs. | 54 | 96 | 88 | 101 | 104 | 120 | 110 | 106 | 105 |
| *Ratio (cited/uncited docs) (%)* | 41 | 20 | 33 | 25 | 23 | 11 | 26 | 37 | 35 |

Note SCImago Journal and Country Rank, JOURNAL CLOSE-UP: New Literary History, Publisher: Johns Hopkins University Press, ISSN: 00286087, 1080661X. Italics indicate my own calculations
Table 3 Development of citations between 2004 and 2012 for the high ranking German language journal Poetica (data by Scopus)

| Indicators | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |
|---|---|---|---|---|---|---|---|---|---|
| SJR | 0.101 | 0.101 | 0.100 | 0.151 | 0.154 | 0.123 | 0.111 | 0.111 | 0.100 |
| Total Docs. | 16 | 17 | 16 | 14 | 12 | 19 | 13 | 16 | 0 |
| Total Docs. (3 years) | 23 | 39 | 48 | 49 | 47 | 42 | 45 | 44 | 48 |
| Total Cites (3 years) | 3 | 2 | 3 | 4 | 5 | 5 | 5 | 8 | 4 |
| Cites/Doc. (4 years) | 0.14 | 0.05 | 0.07 | 0.11 | 0.11 | 0.10 | 0.13 | 0.19 | 0.07 |
| Cites/Doc. (2 years) | 0.14 | 0.07 | 0.00 | 0.06 | 0.13 | 0.15 | 0.13 | 0.16 | 0.03 |
| References/Doc. | 55.88 | 70.71 | 52.31 | 110.07 | 104.08 | 51.89 | 63.38 | 56.56 | 0.00 |
| Cited Docs. | 2 | 2 | 3 | 3 | 5 | 4 | 5 | 7 | 3 |
| Uncited Docs. | 21 | 37 | 45 | 46 | 42 | 38 | 40 | 37 | 45 |
| *Ratio (cited/uncited docs) (%)* | 9 | 5 | 6 | 6 | 11 | 10 | 11 | 16 | 6 |

Note SCImago Journal and Country Rank, JOURNAL CLOSE-UP: Poetica, Publisher: Wilhelm Fink Verlag, ISSN: 03034178. Italics indicate my own calculations
My approach may face three major objections. First, absolute numbers have limited value. They are not embedded in a statistical analysis and, therefore, cannot characterize the phenomenon in question. I will not deny the cogency of this objection. However, the point is that the low numbers themselves are the phenomenon to be explained. My analysis also comprises the comparison of relative
quantities. By contrasting the ratios of uncited and cited papers across disciplines,
I can increase the plausibility of my claims. I am confident that the synopsis of all
data corroborates the hypothesis that literary scholars’ quotation rates are altogether
marginal.
The second possible objection concerns the available data about research in the
humanities. Currently, the most widespread attempt to remedy the tiny absolute num-
bers is the inclusion of books. The idea is that the databases are deficient—not the
citation culture (e.g. see Nederhof 2011, p. 128). The inclusion of monographs is
Hammarfelt’s (2012, p. 172) precept. In 2011, Thomson Reuters launched its Book Citation Index, covering books submitted by editors from 2005 onward, and has worked continuously on improving it ever since. However, the
inclusion of monographs will not provide an easy solution. There are three obstacles:
(1) Primary versus secondary sources. In the humanities, some books are objects
of analysis, and some provide supporting arguments. In the first case, we speak of
primary, in the latter case of secondary sources. In many contexts, the distinction
between both types is blurry (see Hellqvist 2010, p. 316, for an excellent discussion).² Hammarfelt’s (2012) most prominent example, Walter Benjamin’s Illuminationen, which he states has spread across disciplines (p. 167), is a compilation of
essays from the 1920s and 1930s. The book is cited for very different reasons. The
quotations in computer science and physics (Hammarfelt 2012, p. 167) will probably
have an ornamental character; Benjamin is a very popular supplier of chic epigraphs.
Within the humanities, Benjamin is one of the authors whose works are analysed
rather than used, that is, he is a primary source. So are other authors whom Hammarfelt (2012, p. 166) counts among the canonized: Aristotle, Roland Barthes, Jacques Derrida, etc. What is more, some of his canonized authors wrote fiction only (Ovid and James Joyce). Hence, these monographs must be primary sources.

² This is why Zuccala’s (2012) similar—and barely novel—distinction between vocational and epistemic misses the point. Her article tends to overlook many of the problems I discuss here.
An algorithm that distinguishes between primary and secondary sources is difficult to implement. The software would have to discriminate between different kinds of arguments, which requires semantic analysis. As is well known, we are far from any sensible linguistic analysis of texts without a specific ontology (in the sense of semantics); so any such effort would currently be futile. The only reliable possibility would be a systematic distinction between primary and secondary sources in the bibliographies, a practice common in many scholarly publications, but far from ubiquitous. Given this problem, a fully automatic analysis is difficult to implement.
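Where bibliographies do separate primary from secondary sources, however, no semantics is needed; a simple parser suffices. A minimal sketch, assuming (hypothetically) explicit section headings in the reference list; the format and entries below are invented:

```python
import re

# A toy bibliography following the common (but not ubiquitous)
# convention of separate sections; format and entries are invented.
bibliography = """\
Primary sources
Ovid: Metamorphoses.
Benjamin, Walter: Illuminationen.
Secondary sources
Hellqvist, B. (2010). Referencing in the humanities ...
Hammarfelt, B. (2012). Following the footnotes ...
"""

sections, current = {"primary": [], "secondary": []}, None
for line in bibliography.splitlines():
    if re.match(r"(?i)primary sources?", line):
        current = "primary"
    elif re.match(r"(?i)secondary sources?", line):
        current = "secondary"
    elif current and line.strip():
        sections[current].append(line.strip())

# Only the secondary entries would feed a citation analysis.
print(len(sections["primary"]), "primary,",
      len(sections["secondary"]), "secondary entries")
```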
Recent publications, of course, can be counted as secondary sources per conven-
tion. This would be reasonable and useful, even if we know that the transition from
‘secondary scholar’ to ‘primary author’ is what scholars in the humanities dream of
and what they admire (cf. Ochsner et al. 2013, pp. 83–85). Quite often this happens
late, often after the scholar’s death (and his reincarnation as ‘author’), as was the case
with Benjamin, too, who was even refused a university position during his lifetime.
Counting recent publications as secondary sources thus remains no more than a possibility.
The inclusion of books would not change the whole picture. The absolute numbers
would remain low. In a more or less systematic case analysis, Bauerlein (2011) shows
that scholars do not cite books either (p. 12). Going further, Bauerlein (himself a professor of English Literature, by the way) concludes that the production of books is an economic waste of resources and should be stopped. Google Scholar confirms
that literary scholars quote but do so rarely. As stated above, the service includes
books. Since Google has scanned and deciphered incredibly many books, including
those from the past decade, for its service Google Books (prior to the service’s
restriction on account of massive copyright infringements), it has a pretty good
overview of the names dropped in scholarly books. Nonetheless, Google’s services
show that books are quoted as rarely as articles (if not even less frequently). Note that what is counted here are the documents being cited. As citers, scholars quote numerous sources; at least nothing indicates that lists of references are shorter in the humanities than in other disciplines. But all signs point to the possibility that only a few scholars can hope to
do so rarely.
(2) Reading cycles. Another remedy being discussed involves larger time win-
dows. Literary scholars are supposed to have ‘slower reading cycles’ and to stumble upon old articles, so that publications unfold their impact much later than their original publication date. Unfortunately, there is little evidence for this myth. Of course, there are many
‘delayed’ quotations in the humanities. But the problem is that they do not change
the whole picture. In the vast majority of cases, their distribution is as Poisson-like
as the ‘instantaneous’ quotations, and they are as rare. Again, the sparse data Google
provides us with do not indicate any significant increase of citations caused by a
need for long-lasting contemplation. Nor does Bauerlein find any hint that prolonged intellectual incubation periods boost citation counts. Nederhof (1996) claims that
in some humanist disciplines, the impact of articles reaches a peak in the third year;
hence, the chosen citation window appears adequate and meaningful.
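A back-of-the-envelope check supports this. Assuming (as a deliberately crude model) a homogeneous Poisson process, Poetica’s figures from Table 3 already predict the observed uncited share; nothing needs to be attributed to delayed reading:

```python
import math

# Poetica, 2009-2011 window (see Table 3): 48 articles, 4 citations,
# 3 documents cited at least once.
articles, citations, cited = 48, 4, 3

lam = citations / articles       # mean citations per article
p_uncited = math.exp(-lam)       # Poisson probability of zero citations

print(f"rate: {lam:.3f} citations per article")
print(f"expected uncited share: {p_uncited:.0%}")                      # ~92 %
print(f"observed uncited share: {(articles - cited) / articles:.0%}")  # ~94 %
```

This does not prove the claim, but it shows that the observed counts are consistent with a uniformly low citation rate rather than with a hidden reservoir of delayed impact.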
(3) What quotations stand for. The third obstacle is different in kind. Since the
figures show small numbers, citations that do not refer to the content of the cited
articles may distort the results of the statistical analysis to a significant extent. As
recently demonstrated by Abbott (2011), a considerable percentage of citations does
not relate in any conceivable way to the cited article, which could indicate that this
article has never been actually read. Examples are easily at hand. In one of the
top journals in Literature, Poetics Today (‘INT1’), the Web of Science records two
citations of an article of mine. Unfortunately, these citations come from scholars who
use my article to introduce a notion established by Plato around 400 B.C. With two
citations, my text belongs to the very small cohort of highly cited articles, but the
actual quotations are disastrously inappropriate. This problem cannot be ruled out
in other disciplines either. There is no clue whatsoever indicating that inappropriate
quotations occur more often in the humanities than in other disciplines. Nonetheless,
we have to consider the possibility that even the small numbers found in the figures
are not the result of attentive reading, but of the need to decorate an article with as
many references as possible.
We eventually have to reconcile two apparently contradictory observations. On the
one hand, scholars present us with long lists of references and are expected to quote
as much as possible. On the other hand, each scholar can expect only little attention
and very few (if any) citations by peers. This apparent miracle is easily resolved: Partly, scholars quote from other disciplines; partly, quotations cluster around a few ‘big names’, who are quoted abundantly. There is no contradiction between long lists
of references and few citations, that is, between many incidents of citing and only a
few of being cited.
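A toy simulation makes the compatibility of the two observations explicit: every paper hands out many references, but the targets are drawn from a heavily skewed distribution, so the typical paper still receives nothing. All parameters (paper count, references per paper, Zipf exponent) are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, refs_per_paper = 1000, 40

# Zipf-like popularity: a few 'big names' absorb most references.
weights = 1.0 / np.arange(1, n_papers + 1) ** 2.0
weights /= weights.sum()

received = np.zeros(n_papers, dtype=int)
for _ in range(n_papers):
    targets = rng.choice(n_papers, size=refs_per_paper, p=weights)
    np.add.at(received, targets, 1)  # handles repeated targets correctly

print("citations given per paper: ", refs_per_paper)
print("median citations received:", int(np.median(received)))
print(f"share never cited: {np.mean(received == 0):.0%}")
```

Under these invented parameters, the median paper receives no citations at all although each paper cites forty sources: many incidents of citing, few of being cited.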
4 Discussion
As we have seen, the disciplinary culture of Literature requires scholars to quote one
another extensively, but only a few citations can be found. How can this be explained? Although I have expressed my doubts about the importance of coverage, more data must first be obtained: Books must be extensively included in the analysis, and the citation windows must be enlarged, maybe up to decades. Such an improvement of
the databases does not add to the bibliometric assessment of individual scholarly
performance; instead, it adds to the understanding of the intellectual configuration of
Literature and of other related fields in the humanities. Before we can understand the criteria of excellence and develop a means of mapping qualitative judgments onto quantities, we must first understand why citations occur so rarely.
Perhaps publications in Literature do not contain pieces of positive informa-
tion that can be used to support one’s own argument straightforwardly. Publications
present the scholar with helpful or dubious opinions, useful theoretical perspectives,
or noteworthy criticisms, but, possibly, a publication cannot be reduced to a simple
single result. If this is the case, the question is which (societal) task Literature is
committed to. If this is not the case, the lack of quotations raises the question of why so
many papers are written and published that do not attract any attention at all.
I can conceive of two explanations. (1) The first explanation concerns a possible
‘archival function’ of Literature (and related fields in the humanities). As Fohrmann
(2013) recently put it, the disciplines may be responsible for the cultural archive
(pp. 616, 617). Indeed, scholars count ‘fostering cultural memory’ among the most
important factors that increase excellence in the humanities (Hug et al. 2013, pp. 373,
382). Teaching and writing in the humanities do aim to increase knowledge and to
stabilize our cultural memory. As a consequence, seminars and scholarly publications
are costly and ephemeral, but still are necessary byproducts of society’s wish to
uphold and to update its cultural heritage.
At first glance, this may sound sarcastic, but, in fact, this explanation would imply
that the current situation might harm both the humanities and the university’s sponsors
(in Europe, these are mostly the governments and, therefore, the taxpayers). In the
1980s, the humanities had to choose whether to adapt to the institutional logic of the science departments or to move out of the core of academia and become cultural institutions, such as operas and museums. The humanities chose to remain
at the heart of the university and thus accepted the slow adoption of mechanisms
such as the competition for third-party funding and the numerical augmentation of
publications. Now, the humanities produce texts that no one reads, that the taxpayer
pays for and that distract the scholars from their core task: to foster the cultural
archive, to immerse oneself in old books for months and years, to gain erudition and
scholarship, and to promote the cultural heritage to young students and to society as
a whole. (This is maybe why scholars are reluctant to value their own impact on society, as Hug et al. (2013, pp. 373, 382) also show. In the scholars’ view, their task
is to expose the impact of the cultural heritage on society. In a way, giving too much
room to the scholars seems to be a kind of vanity at the expense of the actual object
of their duties.) Maybe, releasing the humanities from the evaluations and structures
made for modern research disciplines would free the humanities from their bonds,
reestablish their self-confidence and decrease the costs that their current embedding in the universities imposes on the sponsors. It would be a mere question of labeling
whether the remaining and hopefully prosperous institutions could still be called
‘academic’.
(2) The second explanation, however, is less flattering. It could also turn out that
low citation frequencies indicate the moribund nature of the affected disciplines.
When I recall that citations and debates have been core practices in the humanities
for centuries, another conclusion pushes itself to the foreground: Scholars in the
affected fields feel bored when they have to read other scholars’ publications.
In the 1980s and the early 1990s, there were fierce debates, and the questions
at stake could be pinpointed (see Hüser 1992). Today, the very questions have vanished; scholars have difficulty stating what they are curious about (Bunia 2011a). If no
scholar experiences any intellectual stimulation instilled by a peer’s publication, she
will tend to read less, to turn her attention to other fields and to quote marginally.
With regard to cohesion (see Sect. 2.4), such a situation would also imply that the
scholars in the affected fields no longer form a community that would identify itself
as cohesive; one no longer feels responsible for the other and for the discipline’s
future. If all debates have ended, the vanishing quotations simply indicate a natural
death that no one has to worry about.
Both explanations will easily provoke objections. As for the first one, one would have to ask why scholars have never realized that they had been cast in the wrong movie. As for the second one, there are only a few hints of a considerable change in the past 20 years. Did scholars cite each other more fervently in the 1970s and
1980s than today? I do not know. Therefore, we need more research on the schol-
ars’ work. For instance, we need to know why they read their peers’ work and if
they enjoy it. It is good that researchers, namely, Hug, Ochsner and Daniel, began
asking scholars about their criteria to understand how the scholars evaluated their
peers’ performance. But we also have to take into account the deep unsettledness
reigning in Literature and related fields (see Scholes 2011; see again Bauerlein 2011;
Bunia 2011b; Lamont 2009; Wiemer 2011). We have to thoroughly discuss a ‘cri-
terion’, e.g. ‘rigor’, which is a virtue scholars expect from others (Hug et al. 2013,
pp. 373, 382). But ‘rigor’ is characterized by ‘clear language’, ‘reflection of method’,
‘clear structure’ and ‘stringent argumentation’, which are virtues the humanities are
not widely acclaimed for and are qualities that may be assessed differently by differ-
ent scholars. In brief, these self-reported criteria have to be compared to the actual
practice. It may be confirmed that a criterion such as rigor is being consistently
applied to new works; but it may equally well turn out that the criterion is a passe-
partout that conceals a lack of intellectual cohesion in the field. Again, this means that
we first must understand what the humanities actually do before we start evaluating
the outcome of their efforts by quantitative means.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution-
Noncommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/) which permits any
noncommercial use, distribution, and reproduction in any medium, provided the original author(s)
and source are credited.
The images or other third party material in this chapter are included in the work’s Creative
Commons license, unless indicated otherwise in the credit line; if such material is not included
in the work’s Creative Commons license and the respective action is not permitted by statutory
regulation, users will need to obtain permission from the license holder to duplicate, adapt or
reproduce the material.
References
Guerrero-Bote, V. P., & Moya-Anegón, F. (2012). A further step forward in measuring journals’
scientific prestige: The SJR2 indicator. Journal of Informetrics, 6(4), 674–688. doi:10.1016/j.joi.
2012.07.001.
Hammarfelt, B. (2012). Following the footnotes: A bibliometric analysis of citation patterns in
literary studies. (Doctoral dissertation. Skrifter utgivna vid institutionen för ABM vid Uppsala
Universitet, Vol. 5). Uppsala: Uppsala Universitet. Retrieved from http://www.diva-portal.org/
smash/get/diva2:511996/FULLTEXT01.pdf.
Hellqvist, B. (2010). Referencing in the humanities and its implications for citation analysis. Journal
of the American Society for Information Science and Technology, 61(2), 310–318. doi:10.1002/
asi.21256.
Hug, S. E., Ochsner, M., & Daniel, H.-D. (2013). Criteria for assessing research quality in the
humanities: A Delphi study among scholars of English literature, German literature and art
history. Research Evaluation, 22(5), 369–383. doi:10.1093/reseval/rvt008.
Hüser, R. (1992). Kommissar Lohmann. Bielefeld: typescript.
Kahneman, D. (2011). Thinking, fast and slow. London: Allen Lane.
Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Cam-
bridge, MA: Harvard University Press.
Luhmann, N. (1990 [Reprint 1998]). Die Wissenschaft der Gesellschaft. Frankfurt a. M.: Suhrkamp.
MacDonald, S. P. (1994). Professional academic writing in the humanities and social sciences.
Carbondale, IL: Southern Illinois University Press.
Mikki, S. (2009). Google Scholar compared to Web of Science. Nordic Journal of Information
Literacy in Higher Education, 1(1), 41–51. doi:10.15845/noril.v1i1.10.
Nederhof, A. J. (1996). A bibliometric assessment of research council grants in linguistics. Research
Evaluation, 6(1), 2–12. doi:10.1093/rev/6.1.2.
Nederhof, A. J. (2011). A bibliometric study of productivity and impact of mod-
ern language and literature research. Research Evaluation, 20(2), 117–129. doi:10.3152/
095820211X12941371876508.
Nimis, S. (1984). Fussnoten: Das Fundament der Wissenschaft. Arethusa, 17, 105–134.
Ochsner, M., Hug, S. E., & Daniel, H.-D. (2013). Four types of research in the humanities: Setting the
stage for research quality criteria in the humanities. Research Evaluation, 22(4), 79–92. doi:10.
1093/reseval/rvs039.
Rennie, D. (2002). Fourth international congress on peer review in biomedical publication. The
Journal of the American Medical Association, 287(21), 2759–2760. doi:10.1001/jama.287.21.
2759.
Schmalzriedt, E. (1970). Peri Physeos: Zur Frühgeschichte der Buchtitel. München: Fink.
Scholes, R. (2011). English after the fall: From literature to textuality. Iowa City, IA: University of
Iowa Press.
Squazzoni, F., & Gandelli, C. (2012). Saint Matthew strikes again: An agent-based model of peer review and the scientific community structure. Journal of Informetrics, 6(2), 265–275. doi:10.
1016/j.joi.2011.12.005.
Wiemer, T. (2011). Ideen messen, Lektüren verwalten? Über Qualitätskriterien literaturwis-
senschaftlicher Forschung. Journal of Literary Theory, 5(2), 263–278. doi:10.1515/JLT.2011.
024.
Zuccala, A. (2012). Quality and influence in literary work: Evaluating the ‘educated imagination’.
Research Evaluation, 21(3), 229–241. doi:10.1093/reseval/rvs017.