Lexicography and Theories of Lexical Semantics
Chapter 26
26.2 HISTORICAL-PHILOLOGICAL SEMANTICS
The first stage in the history of lexical semantics as an academic discipline runs from roughly 1830 to 1930. Its dominant characteristic is the historical orientation of lexical semantic research: its main concern lies with changes of word meaning, that is, the identification, classification, and explanation of semantic changes.
This tradition, represented by such major figures as Bréal (1897) and Paul (1920), can be introduced from both a descriptive and a theoretical perspective.
The classification of mechanisms of semantic change is not just important in itself, as a branch of historical linguistics; it also has practical importance for diachronic lexicography.
The central idea of structuralist semantics is that language has to be seen as a system, and not just as a loose bag of words, and, in addition, that such a system is primarily a synchronic and not a diachronic phenomenon.
Among the large variety of theoretical positions and descriptive methods that emerged within the overall lines set out by a structuralist conception of meaning, four broad strands may be distinguished: lexical field theory, componential analysis, relational semantics, and distributional semantics.
LEXICAL FIELD THEORY takes its starting-point in the structuralist view that language constitutes an intermediate conceptual level between the mind and the world, and translates that view into the metaphorical notion of a lexical field: if you think of reality as a space of entities and events, language so to speak draws lines within that space, dividing up the field into conceptual plots.
A lexical field, then, is a set of semantically related lexical items whose meanings are mutually interdependent and that together provide conceptual structure for a certain domain of reality. Such a field should be investigated as a set of interdependent items.
COMPONENTIAL ANALYSIS is a logical development from lexical field theory: once you have demarcated a lexical field, the internal relations within the field have to be described in more detail.
The Meaning-Text theory developed by Mel'čuk (1988; Mel'čuk et al. 1995) constitutes a lesser known but no less interesting variant of the relational approach. In the Meaning-Text theory, frequently occurring relations of this type are identified as lexical functions. Their descriptive scope is not restricted to the relationship between lexical items, because similar relations may also pertain to the domain of morphology and phraseology: the relationship between city and urban is the same as that between function and functional, and the same function that links joy to joyfully also yields with joy. The Meaning-Text theory now distinguishes more than sixty lexical functions. They occupy a central position in the Explanatory Combinatorial Dictionary (Mel'čuk et al. 1984-99), which is the main practical achievement of Meaning-Text theory.
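A toy representation may make the notion of a lexical function concrete. The sketch below encodes a handful of Mel'čuk-style functions as a lookup table; the function labels are sketched after Meaning-Text conventions (A0 for the relational adjective, Adv for the adverbial, Magn for the intensifier), but the table itself is purely illustrative and not drawn from the Explanatory Combinatorial Dictionary:

```python
# A toy table of Mel'čuk-style lexical functions: each function maps a
# keyword to a related item. The values are standard textbook examples,
# but the table itself is illustrative, not an actual ECD fragment.
lexical_functions = {
    # A0: the relational adjective corresponding to a noun
    ("A0", "city"):     "urban",
    ("A0", "function"): "functional",
    # Adv: the adverbial expression of the keyword (lexical or phrasal)
    ("Adv", "joy"):     ["joyfully", "with joy"],
    # Magn: the conventional intensifier used with the keyword
    ("Magn", "rain"):   "heavy",
}

def apply_lf(name, keyword):
    """Look up the value of a lexical function for a given keyword."""
    return lexical_functions.get((name, keyword), "not recorded")

print(apply_lf("A0", "city"))    # the city/urban relation
print(apply_lf("Adv", "joy"))    # joyfully and the phraseme 'with joy'
```

The point of the table layout is that one function (one key prefix) covers lexical, morphological, and phraseological realizations alike, which is exactly the generality claimed for lexical functions above.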
The essential concept here is that of collocation, defined as 'a lexical relation between two or more words which have a tendency to co-occur within a few words of each other in running text' (Stubbs 2001: 24). In Sinclair's original conception, a collocational analysis is basically a heuristic device to support the lexicographer's manual work.
A further step in the development of the distributional approach was taken through the application of statistics. A decisive step was taken when Church and Hanks (1990), working in the context of Sinclair's Cobuild project, introduced the Pointwise Mutual Information index (defined in terms of the probability of occurrence of the combination x,y compared to the probabilities of x and y separately) as a statistical method for establishing the relevance of a collocation.
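As a rough illustration, PMI can be computed directly from corpus counts. The sketch below estimates the probabilities from a toy token list with a simple co-occurrence window; the corpus, the window size, and the smoothing-free estimation are all simplifying assumptions, not the actual Church and Hanks setup:

```python
import math
from collections import Counter

def pmi(corpus, x, y, window=2):
    """Pointwise Mutual Information for the pair (x, y):
    log2( P(x,y) / (P(x) * P(y)) ), where P(x,y) is estimated from
    co-occurrences of x and y within `window` tokens of each other."""
    n = len(corpus)
    unigrams = Counter(corpus)
    # Count occurrences of x with y somewhere in its window.
    pair_count = sum(
        1 for i, w in enumerate(corpus)
        if w == x and y in corpus[max(0, i - window): i + window + 1]
    )
    if pair_count == 0:
        return float("-inf")  # the pair never co-occurs
    p_x = unigrams[x] / n
    p_y = unigrams[y] / n
    p_xy = pair_count / n
    return math.log2(p_xy / (p_x * p_y))

# Toy corpus: 'strong' and 'tea' always co-occur, so their PMI is high.
corpus = ("strong tea tastes good . he drinks strong tea . "
          "powerful car engines . strong tea again").split()
print(round(pmi(corpus, "strong", "tea"), 2))  # → 2.5
```

A high PMI value indicates that the pair co-occurs far more often than the independent frequencies of the two words would predict, which is precisely what makes it a useful relevance filter for candidate collocations.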
In addition, the statistical turn in thinking about contextual distributions allowed for a rapprochement with the fields of information retrieval and Natural Language Processing, where so-called word space models constitute an advanced form of distributional corpus analysis, applied to problems like word sense disambiguation and synonym extraction.
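A minimal sketch of the word-space idea: each word is represented by a vector of co-occurrence counts, and vector similarity (here, cosine similarity) approximates semantic similarity, which is the basis of distributional synonym extraction. The toy corpus and the window size are illustrative assumptions:

```python
import math
from collections import Counter, defaultdict

def word_vectors(corpus, window=2):
    """Build a co-occurrence vector for every word: each word is
    represented by counts of the words appearing within `window`
    positions of it (a minimal 'word space' model)."""
    vectors = defaultdict(Counter)
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                vectors[w][corpus[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

corpus = ("the doctor examined the patient . the physician examined the "
          "patient . the dog chased the cat").split()
vecs = word_vectors(corpus)
# Words occurring in similar contexts receive similar vectors:
print(cosine(vecs["doctor"], vecs["physician"]) >
      cosine(vecs["doctor"], vecs["dog"]))  # → True
```

The same machinery, scaled up to large corpora and refined with weighting schemes and dimensionality reduction, underlies the word space models used for word sense disambiguation and synonym extraction mentioned above.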
PROTOTYPE THEORY assumes that conceptual knowledge need not necessarily take the form of abstract definitional knowledge about a given category, but may also reside in knowledge about the members of the category: our knowledge of what birds are in general may at least to some extent be based on what we know about (typical) birds. This means that extensional forms of description will also be natural from a prototype-theoretical point of view.
It highlights the fact that lexical polysemy takes the form of a multidimensional structure of semantic extensions starting from central readings; that is, categories are characterized by salience effects, in the sense that some readings have a stronger weight than others, and by multiple relations among those readings. It also emphasizes that semantic structures may be fuzzy, in the sense that it may not always be easy to distinguish one meaning from another.
CONCEPTUAL METAPHOR AND METONYMY involve the observation that, in a given language, metaphors and metonymies often occur in groups expressing the same underlying idea.
The concept of anger, to name one of the best known examples, is often expressed by lexical
metaphors involving heat, and more specifically heated fluids: to reach boiling point, to seethe
with rage, to let off steam, to be a hothead, to fume, to be scarlet with rage, to explode with
anger, to breathe fire, to make inflammatory remarks, to boil with anger.
This observation is central to Conceptual Metaphor Theory as introduced by Lakoff and Johnson. Such sets of expressions (one might call them 'figurative lexical fields') illustrate the `cognitive' aspect of cognitive semantics: rather than being merely conventional expressions, conceptual metaphors and metonymies are patterns of thought that range across the lexicon.
FRAME SEMANTICS is the most articulate model with which cognitive semantics implements the idea that our knowledge of the world is organized in larger `chunks of knowledge', and that language can only be properly understood against the background of that world knowledge. It is specifically interested in the way in which language may be used to perspectivize an underlying conceptualization of the world: it is not just that we see the world in terms of conceptual models, but those models may be verbalized in different ways.
Making systematic use of corpus materials as the main source of empirical evidence for the frame-theoretical analyses, FrameNet offers an electronic dictionary with frame-theoretical descriptions, similar in purpose and ambition to WordNet, but starting from a different descriptive framework: the Berkeley FrameNet project does for frame semantics what WordNet does for structuralist lexical relations, that is, it uses the model for building an online lexical database.
CONCEPTUAL METAPHOR AND METONYMY studies suggest ways of dealing with the links between the senses of lexical items that go beyond common dictionary practice.
In the actual practice of lexicography, the Macmillan English Dictionary for Advanced Learners (2007, first published in 2002) incorporates `metaphor boxes' showing, in a Lakoffian vein, the conceptual metaphors behind common expressions.
PROTOTYPE THEORY and, more generally, the cognitive linguistic view of polysemy and categorial structure pose a challenge for a structuralist understanding of the lexicon.
Now, it does not require an extraordinary familiarity with actual dictionaries to observe that
such features (enumerations, disjunctions, reference to exemplars, etc.) do indeed occur. The
point to make is rather that from a cognitive semantic perspective these aspects of dictionaries
are the natural consequence of the nature of semantic phenomena, rather than being
imperfections that need to be improved.
26.5 PROSPECTS
On the theoretical side, as we suggested in the previous section, a cognitive semantic conception of word meaning seems to be more congenial to typical lexicographical data than a structuralist one. Specifically, defining linguistics as a usage-based enterprise requires a closer scrutiny of actual usage, precisely the kind of massive descriptive endeavour that defines lexicography.
On the methodological side, major advances in corpus linguistics—not just in the mere
presence of abundant usage data, but specifically also in the form of tools for digitally
exploring those data—support data-driven lexicography and usage-based semantics alike.
Sense relations are paradigmatic relations in that the set of related meanings forms a paradigm of potentially substitutable words.
These approaches derive sense relations from words' meaning components: synonyms share key components, antonyms differ in one key component, and hyponyms have all the same components as their hyperonyms plus at least one more. Most modern models of lexical meaning, like actual dictionaries and thesauruses, fall between these extremes, acknowledging the role that relations play in determining the development and construal of senses.
Hyponymy and meronymy directly reflect non-linguistic relations between denotata. Coffee is
a hyponym of drink because coffee is a subcategory of drink, and java, as coffee's sense-
synonym, is equally a hyponym of drink.
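The componential account of sense relations above can be sketched as set operations over meaning components: hyponymy becomes proper inclusion of component sets, and synonymy becomes identity. The feature labels below are invented purely for illustration, not taken from any actual componential analysis:

```python
# A toy componential lexicon: each sense is a set of meaning
# components. The feature labels are illustrative only.
lexicon = {
    "drink":  {"liquid", "for-consumption"},
    "coffee": {"liquid", "for-consumption", "from-coffee-beans"},
    "java":   {"liquid", "for-consumption", "from-coffee-beans"},
}

def is_hyponym(a, b):
    """a is a hyponym of b if a has all of b's components plus
    at least one more (proper superset of components)."""
    return lexicon[b] < lexicon[a]

def are_synonyms(a, b):
    """Sense-synonyms share all meaning components."""
    return lexicon[a] == lexicon[b]

print(is_hyponym("coffee", "drink"))   # coffee is a subcategory of drink
print(are_synonyms("coffee", "java"))  # java inherits coffee's hyponymies
```

Because java's component set is identical to coffee's, the hyponymy check yields the same result for both, mirroring the observation that a sense-synonym is equally a hyponym of the shared hyperonym.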
This tradition has been taken up and developed further in electronic dictionaries, especially on
the internet.
Much of this is still at an experimental stage: for example, the positioning of illustrations on the screen, how they are linked with the word entries, and how they can be used as a mode of access.
The use of additional multimedia elements (e.g. videos, audio files) and links to external
sources also needs to be developed. But even in the case of printed dictionaries, there are still
conceptual challenges: the question of which factual information and how many and which
illustrations belong in the dictionary has not yet been definitively answered and must be
decided on again each time depending on the intended user group and function of the
dictionary.
In the last few years, there has been an impetus, particularly from the field of learners' dictionaries, to establish how both illustrations and encyclopaedic and cultural information can enrich a dictionary. If lexicographical practice in this area is to develop further in the coming years, sound criticism of practice to date, as well as thorough research into recently developed types of information and forms of presentation in both printed and electronic dictionaries, is essential.
The Routledge Handbook of Lexicography
9. Lexicography and terminology
Over a quarter century ago, Sager suggested that lexicography and terminology could be
viewed as separate activities largely because of differences in the practitioners involved, the
methods used, and the nature of the data collected.
A desire to better serve users, along with new possibilities opened up by technology, has contributed to an evolution in the working practices found in both lexicography and terminology.
Although some differences remain, the two disciplines nonetheless appear to be converging with regard to many aspects of their practices.
Both lexicographers and terminologists have sought to involve users more actively in the production of dictionaries and terminological resources through technology-enabled crowdsourcing.
Both lexicographical and terminological investigations are now firmly centred on corpus-based analysis, which lends itself to a semasiological approach.
As a result, the principles and methods of terminology compilation now have an increasing
commonality with those used in lexicography as terminologists move away from an
onomasiological starting point for their research and towards a semasiological one.
Tools used for the analysis of corpora have also shown signs of bringing lexicographers and
terminologists closer together. The tools of the trade used by the two groups were first
introduced as separate packages; however, a more recent trend has witnessed the combining
of lexicographic and terminological corpus-based tools into a single tool suite (e.g. Sketch
Engine).
This combined tool offering reflects the growing need for language professionals to respond to users' desire for resources that can support both their general language and specialized language needs.
Both lexicographers and terminologists have indeed taken up this challenge to produce
resources that meet user needs. For instance, in response to user requests, the second edition
of the Macmillan English Dictionary (2007) now incorporates a range of specialist terms along
with its general language offering.
Meanwhile, term bases for private corporations, as well as those developed for use by
translators, now regularly include general language expressions alongside specialized terms.
Such a combination allows translators to capitalize on the pre-translation feature built into
many Translation Environment Tools (TEnTs). Meanwhile in a corporate environment, it
ensures that the term base supports a broader range of a company’s linguistic needs.
There has been an evolution both in lexicography and in terminology which has resulted in a higher degree of interdisciplinarity between them than previously existed. Although Table 9.1 near the start of this chapter summarized the binary way in which lexicography and terminology have conventionally been presented, we have modified and updated its contents to create Table 9.2, which presents a more contemporary view of these two disciplines, drawing attention to areas of convergence and interdisciplinarity.
Both lexicography and terminology are becoming increasingly interdisciplinary, and the two
share many common perspectives. Many of the target users for lexicographic and
terminological resources actually need to work with both general and specialized language, so
it makes sense for lexicographers and terminologists to be informed about trends and
developments in each other’s disciplines and to work more closely to design and build
resources that will better serve the needs of a wide range of users. Although we have not yet
witnessed the ‘marriage’ between lexicography and terminology that was predicted by
Knowles (1988), it is heartening to see that the two have at least become closer friends.
Provided that lexicographers do reflect users' understanding of the world and thus allow extra-linguistic considerations to be included in dictionary definitions, what we get in the latter is a powerful enhancement of the mere semantic exposition of the word (as when one assumes that lexemes constitute a system) with values that are inherently cognitive, symbolic, metaphorical, experiential, and subjective (as when one assumes that words are symbols).
In what precedes, the point is exclusively linguistic, not ideological, political, or prescriptive,
and has much to do with the necessity of anchoring language analysis, advanced learner’s
dictionary making included, in the fact, as Mithun concludes, that “languages are shaped in
significant ways by the physiological, cognitive, and contextual circumstances surrounding
their use”.
In other words, the point is to appreciate what we hope can become a growing and irresistible tendency to give dictionary definitions more and more room for, precisely, human agency, in order to open up the closed, self-regulating and self-defining system of lexical oppositions in favour of experientially delimited and cognitively driven considerations. These include (i) giving expression to the users' cultural awareness, (ii) recording their relative cognitive tensions, and (iii) demystifying the symbols of their experience.
Next to the semantic definition of a word, a dictionary reader will then find a huge dose of world knowledge as reflected in the language. Because it is human cultural experience that
shapes language (and not the other way round), the knowledge one finds in the dictionary is
what human experience can actually be − unbalanced, unstable, dynamic, vague, loaded with
prejudice, and marked with asymmetry. This turns words into social labels that, as such, draw
“on social stereotypes, moral attitudes, old connotations and future possibilities − in short,
ideologies of various kinds”.
Indeed, human experience is so much stained with the subjectivity of the one who verbalizes it
that any attempt at objectivizing that experience (e.g. in the form of an autonomous lexical
system) can only prove futile and superfluous.
Still, bilingual dictionaries for translation (DfTs) currently predominate on the market. Innovative approaches, primarily towards dynamic monofunctionality, have been highlighted mostly in LSP lexicography, in which greater attention is paid to the specific usage needs of translators. Macrostructural and microstructural analysis of selected examples has pointed out particular tendencies depending on the represented language (LGP or LSP) and the medium (printed or digital form).
In general, the study of various groups of dictionaries brings to light the urgent need to give a
distinct identity to dictionaries for translation.
At the same time, it seems to confirm the viability of a clear and systematic translation-oriented approach in lexicography, which should encourage lexicographers to follow new paths towards the elaboration of novel structural concepts with the help of state-of-the-art technologies, and translators to ask for more specific, high-quality resources.
In broad terms it can be said that the first approach is conceptually oriented and involves term extraction, manual or increasingly automatic, and how the results of term extraction are analysed and structured; relatively less emphasis has been put on how this information is presented to the user in paper or electronic form.
The second approach can be characterized as user orientated by definition, the functions being
those the user needs.
The paper encyclopaedia clearly belongs to the past. It will probably not be long until the paper specialised dictionary is also consigned to history. The forms that the electronic dictionary may take will be many and varied, doubtless even more so than those of its paper predecessor. Many of the functions previously played by the specialised dictionary may well migrate to other environments: Wikipedia, translation memory, and simply the use of hyperlinks to access knowledge where it is available.
The trend to incorporate dictionaries or dictionary functions into other applications or tools
affects specialised as well as general lexicography. As Tarp suggests, dictionaries may function
as “how-to’s”, books giving instructions on how to do specific things such as how to operate
machines.
Whatever the form taken, it can be assumed that many of the building blocks which can be
assembled according to need from a centralized platform will be simply developments of
terms, equivalents, language and conceptual relations, which were already present in more
rigid forms in the specialised paper dictionary.
Twenty-five years later, the representational model is still used in both terminology and
knowledge engineering, and the corpus is still the departure point of study for both disciplines.
But the methods and the aims of the two disciplines have evolved.
Now the aim of knowledge engineering is mainly to build ontologies from texts using large
corpora.
From a linguistic point of view, although TKBs (and, especially, conceptual networks) continue to be built, new needs have emerged which the systematic study of terms and their contexts can satisfactorily meet.
Two kinds of contexts may then be used. The first kind belongs to a top-down approach: linguistic patterns linked to the need are defined a priori and searched for in co-occurrence with a (candidate-)term in the corpus. The second kind, belonging to a bottom-up approach, consists in choosing and interpreting the contexts of (candidate-)terms depending on the final aim.
Whereas the main initial aim of TKBs was to analyse texts in order to define terms, new aims
lead mainly to using terms in order to approach texts.
In both cases, the tool-assisted methods are very similar, and in both cases, the main issue,
from a linguistic point of view, concerns the study of the semantic link between terms and
texts.
Features
Frequency refers to how often a word appears in a dictionary's database and can be used to determine whether or not to use a word in a certain context. Each dictionary determines frequency differently: some use a corpus, which measures word frequency in a collection of authentic texts; others use an aggregated word list that may also take a word's usefulness into account.
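As a sketch of how corpus-derived frequency information might be computed, the function below ranks words by raw corpus frequency and assigns them to bands, roughly the way learners' dictionaries mark high-frequency vocabulary. The band thresholds and the tiny corpus are illustrative assumptions, not any dictionary's actual method:

```python
from collections import Counter

def frequency_band(word, corpus, bands=(1000, 3000)):
    """Assign a word to a frequency band based on its rank in the
    corpus. Thresholds (top 1000 / top 3000) are illustrative."""
    counts = Counter(w.lower() for w in corpus)
    ranked = [w for w, _ in counts.most_common()]
    try:
        rank = ranked.index(word.lower()) + 1
    except ValueError:
        return "not in corpus"
    if rank <= bands[0]:
        return "high frequency"
    if rank <= bands[1]:
        return "mid frequency"
    return "low frequency"

corpus = "the cat sat on the mat and the dog sat too".split()
print(frequency_band("the", corpus))  # → high frequency
```

In a real dictionary pipeline the corpus would contain hundreds of millions of tokens and the ranking might be weighted by dispersion or usefulness, but the banding principle is the same.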
Grammatical information can appear as word forms, word classes, and grammatical structures. This information should be read for each definition and used in combination with examples to gain a better understanding of which structures have which meanings. For example, "divide" has multiple meanings, and reading only the first definition or only one example risks producing a grammatical sentence with an unintended meaning.
Definitions are numbered and paired with grammatical information. This helps distinguish how
grammatical structure can change the meaning of a sentence. Read all of the definitions to get
a better sense of how the examples fit.
Examples use authentic language to show how the word is used in context for a specific
definition. Sometimes these include useful word combinations. Example sentences can also be
used to test whether a synonym has a similar meaning and structure.
Thesaurus info helps vocabulary development by providing synonyms and antonyms, but it should be used carefully. No two words have exactly the same meaning, so synonyms should also be looked up. Even if the definitions of two words are similar, their word class or the context in which they appear may differ.
Collocations are natural-sounding word combinations, such as verb + preposition or verb + noun pairs. Learner's dictionaries use corpora to list frequent collocations and match these with one of the definitions provided. This is one of the most useful features of learner's dictionaries and makes writing in new contexts easier.
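A crude sketch of how frequent collocations might be extracted from a corpus: the function below simply counts which words most often follow a given node word. Real learner's dictionaries use far more sophisticated association measures (such as the PMI index discussed earlier in these notes), so this illustrates the principle only:

```python
from collections import Counter

def top_collocations(corpus, node, n=3):
    """List the most frequent words immediately following `node`
    in the corpus, a minimal stand-in for corpus-derived
    collocation lists."""
    following = Counter(
        corpus[i + 1] for i, w in enumerate(corpus[:-1]) if w == node
    )
    return [w for w, _ in following.most_common(n)]

# Toy corpus of verb + noun combinations:
corpus = "take medicine . take notes . take medicine daily".split()
print(top_collocations(corpus, "take"))  # → ['medicine', 'notes']
```

Ranking by frequency like this surfaces the recurrent combinations a lexicographer would then inspect and attach to the relevant sense of the headword.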
Idioms are sometimes provided and are important to distinguish from a word’s grammatical
use, where meaning is predictable. Idioms are often cultural and cannot always be understood
literally.
Context labels and other usage notes take advantage of a learner’s dictionary’s corpus to
identify any genre (e.g., business, medicine, law), style (e.g., informal, technical), or English
dialect/accent (British or American) that a word most frequently appears in. Some words have
new definitions when in another context.
The main features of online MELDs are at the forefront of a new business model based on free access and user participation. Some of these features are innovative, but there is still great potential for the new online medium to extend what has previously been possible. Research is now needed to see how effective any new approaches might be.
Several features of the online medium remain to be explored. Videos, for example, are a
possibility on the Internet. Whereas some researchers caution against their use, others have
found that learners do welcome video illustrations.
Lew and Doroszewska have found that animated graphics do not help vocabulary retention,
but the same may not be true for vocabulary decoding. Videos could usefully be included to
demonstrate movement, for example, on the basis that if a picture is worth a thousand words
then a moving picture would be worth even more, and seeing a kangaroo jumping in a video
would give an insight into its distinctive movement that is impossible to describe in words.
Videos, or at least audio files, could also be used to demonstrate different sounds. MED online
already does this for certain musical instruments (e.g. guitar), bird sounds (e.g. kookaburra)
and the weather (e.g. rain).
However, these audio files would be more meaningful if attached to a moving image. One
drawback is that links to existing Internet files are unstable, and so a dictionary would need to
produce its own video files. This would of course be time-consuming, but could be done
gradually, perhaps with the use of crowdsourced contributions.
Although there is a lack of teacher awareness generally with regard to the richness of a MELD
in language instruction, online MELDs make it possible to be innovative in teaching dictionary
use.
If a class takes place in a room where Internet access is possible, the teacher can explain
different features of a MELD and have students try out each of these features in turn. There is
always the risk that students will use the occasion to surf the Web instead of doing what the
teacher suggests, but free student response systems (such as Socrative) allow the students to
post their answers to questions in class online, and the teacher can check that everyone has
responded.
Possible lessons include familiarising students with the parts of a dictionary entry to gain a
fuller understanding of a word, prompting them to search for irregular plurals, encouraging
them to compare entries in different dictionaries, getting them to search for collocations and
giving them a short online quiz about areas such as noun countability.
The world of online MELDs thus presents limitless opportunities but requires a new way of
thinking. Instead of regarding online MELDs as Internet versions of paper-based originals, it
would be better to start with a completely fresh slate (as it were) and think of what the user
needs and expects in a digital world.
Online MELDs are catching up with the online market slowly, but the market is moving on
quickly. In 10 years’ time, the way we access and use the Internet may have changed
completely. Is pedagogical lexicography ready for the challenge?
24. FrameNet
The data that FrameNet has analyzed show that the sentence "Carla sold an apple to Joey" essentially describes the same basic situation (semantic frame) as "Joey bought an apple from Carla", just from a different perspective. A semantic frame is a conceptual structure describing an event, relation, or object along with its participants.
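The buy/sell example can be sketched as a small data structure: one frame with a fixed set of roles, and two verbs that map their grammatical slots onto those roles in different ways. The frame and role names are loosely modeled on FrameNet's commerce frames, but the details are illustrative rather than FrameNet's actual representation:

```python
# A minimal sketch of a semantic frame and two verbs that
# perspectivize it differently. Role names loosely follow
# FrameNet's commerce frames; the encoding is illustrative.
commerce_frame = {"roles": {"Buyer", "Seller", "Goods", "Money"}}

# Each verb maps its grammatical slots onto the same frame roles.
perspectives = {
    "sell": {"subject": "Seller", "object": "Goods", "to": "Buyer"},
    "buy":  {"subject": "Buyer", "object": "Goods", "from": "Seller"},
}

def frame_roles(verb, **slots):
    """Translate a verb's grammatical slots into frame roles."""
    mapping = perspectives[verb]
    return {mapping[slot]: filler for slot, filler in slots.items()}

# "Carla sold an apple to Joey" and "Joey bought an apple from Carla"
# yield the same frame-level description:
sold = frame_roles("sell", subject="Carla", object="an apple", to="Joey")
bought = frame_roles("buy", subject="Joey", object="an apple",
                     **{"from": "Carla"})
print(sold == bought)  # → True
```

The equality of the two role assignments is the point of the example: the verbs differ only in which participant they promote to subject, while the underlying frame stays constant.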
FrameNet is a rich, yet still evolving lexical resource that seeks to apply the theory of Frame
Semantics to the analysis of the English lexicon.
It has attracted interest from diverse research communities, ranging from theoretical
linguistics to natural language processing applications.
In its original domain of lexicography, FrameNet has inspired many sister projects in other
languages and is seeing new specialized uses, such as foreign language instruction.
In the field of natural language processing FrameNet is used as a resource in a great number
of applications, notably semantic role labelling.
In addition, many efforts have been and are being undertaken to (semi-)automatically expand
frame semantic resources or construct them from scratch.
Such efforts, on the one hand, underscore the keen interest in using frame semantic
annotations and lexicon resources, but on the other reflect the challenges and limitations of
pursuing large coverage only through the manual work of linguistic experts.
The next phase of frame semantic development will therefore seek to speed up the progress of
frame semantic resources by combining the work of human experts with that of laypeople
while also attempting to better leverage the growing analytical powers of automatic systems.
29. Wordnik
Wordnik is an online-only dictionary of English, whose mission is to collect and share all the
words of English. Instead of employing a staff of lexicographers and editors, or organizing a
community of wiki-style contributors to create definitions, Wordnik uses data-mining and
machine-learning techniques to find example sentences that serve as brief explanations of a
word’s meaning in context.
Wordnik uses existing lexicographic data for creating its traditional dictionary articles. These
data are either copyright-free or acquired by Wordnik.
Wordnik was initially a commercially oriented lexicographic project. Since then, it has
adapted its business model to market conditions and innovations. It was initially financed by
seed capital but is now under the aegis of a non-profit organization and raises operational
funds from advertisements, individuals, and institutions whose contributions come in many
forms and sizes. For instance, some users adopt a word and pay some money in exchange for
having their names posted on their word’s main page.
Wordnik defends the necessity of having a lexicographic theory underlying the conception of
any dictionary. In particular, Wordnik was created on the basis of two related hypotheses: (a)
Wordnik must collect and share all the words of English, especially as it was assumed at the
inception of the project that most existing dictionaries did not describe the words people
tended to look up; and (b) Wordnik must describe each meaning of each English word in
context.
Not only have publishers put their dictionaries, in all their variety, online, but the Internet has
also affected how dictionary making itself is carried out.
The World Wide Web represents a vast sea of data that lexicographers can search for evidence
of word usage, or indeed for new words and new uses of words. The Oxford University Press
dictionaries department, for example, has compiled the over 2-billion-word Oxford English
Corpus from the Internet for this purpose.
The most significant issue is that of accessibility: how the user gets to the information that satisfies their need as efficiently and effectively as possible. Granger (2012: 3) has identified six significant innovations or developments (Eide 2014) that take online lexicography beyond the print medium, the first of which is corpus integration. Most, if not all, publishers of dictionaries of English use a corpus, that is, a searchable collection of texts stored electronically, either to inform ongoing development and revision of existing dictionaries (corpus-based lexicography) or as the basic raw data from which the dictionary is compiled (corpus-driven lexicography). Collins English Dictionary would be an example of the first type, and the Macmillan Dictionary an example of the second.
What is happening to English lexicography in the Internet era? It seems probable that “print-based dictionaries will disappear as content migrates to e-dictionaries”, so that “the days of most authoritative, monolingual print dictionaries may be numbered”, a view shared by Béjoint.
One result of the migration of dictionaries to the Internet is that “the traditional dividing line
between dictionaries and other kinds of resources is more and more difficult to draw”.
Moreover, people tend to resort to Google for answers to all their information queries, and
“this tendency presents a real threat to more specialized reference tools, including
dictionaries”.
The Macmillan Dictionary is now online-only, and the OED's Third Edition is predicted to exist only online; all major dictionary publishers have online versions of their dictionaries, which are accessible at no cost to the user.
A Google search for a term or expression may index a dictionary, but it may not be near the
top of the list and the dictionary indexed may not be the most reliable.
Dictionary publishers face a challenge to educate and persuade the Internet-using public to
use a reliable dictionary when looking up a word or expression, rather than do a general
Google search.
For dictionaries aimed at learners of English, that may be easier than for native-speaker
dictionaries.
Some scholars view the wholesale move to the Internet as not in the best interests of
lexicography, particularly in the context of historical dictionary projects, arguing that long
entries in particular are harder to consult online than on the printed page and that online
dictionaries require “more active maintenance”.
As well as expressing doubts about crowdsourced online dictionaries, some scholars ask
whether the Internet is “as secure and durable a medium as the printed page”.
Certainly, for historians of the language, the ability to compare successive editions of a
dictionary will be severely jeopardised. We are not yet at an online-only scenario for English
lexicography, and it may be many years before that comes about; and some scholars believe it
may take longer than we think.
The most obvious advantage of the electronic medium manifests itself in simplified access
structures, which make the electronic dictionary look-up time-effective and accurate.
The integration of electronic dictionaries into educational software tools as well as the
potential for dictionary customization and personalization encourage autonomy in language
learners.
Gamification may add an element of enjoyment to the otherwise complex task of solving
language problems. However, ease, speed and user-friendliness do not always go hand in hand
with top quality. The electronic medium implies freedom and the (often false) sense of security
and anonymity, which may prompt amateurish lexicographic contributions (or even creations).
This, in turn, undermines dictionary authority and reliability. There is no denying that thanks to
the electronic medium, the user has come to the fore. The user of the electronic dictionary is
not merely the subject of research, but also the contributor of lexicographic content,
commentator, critic and enquirer, whose voice is welcome and heard.
There is a considerable need to discuss the quality of electronic dictionaries, decide on criteria
for evaluating their functionalities and assess user-generated content. These days, electronic
dictionaries are not compiled only by teams of expert lexicographers: advanced and intelligent
software, machines, web designers, corpus linguists and users are playing an increasingly
important role.
Moreover, the underlying goal of creating dictionaries in electronic form is not linguistic
research but actual use, which the Internet makes possible on a global scale.
Unfortunately, not much is known for certain about the actual usefulness of dictionaries in
the electronic medium.
At the same time, a multitude of lexical questions are researched on the Internet, presumably
many more than were ever looked up in printed dictionaries.
For example, we observed in the log-file data of the German Wiktionary that an uncommon
word used in a sports commentary on a friendly match between the French and German
national football teams led to a sharp increase in look-ups within the same hour. Who would
have used a printed dictionary in such a situation?
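The kind of log-file observation described here — a sudden surge of look-ups for one entry within a single hour — can be illustrated with a small, purely hypothetical sketch. The hourly counts and the spike threshold below are invented for illustration and do not come from any real Wiktionary data:

```python
from statistics import mean, stdev

def lookup_spikes(hourly_counts, threshold=3.0):
    """Flag hours whose look-up count exceeds the mean by more than
    `threshold` standard deviations."""
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts)
    return [hour for hour, count in enumerate(hourly_counts)
            if sigma > 0 and (count - mu) / sigma > threshold]

# Hypothetical look-ups per hour for one dictionary entry over one day;
# hour 20 coincides with the imagined television broadcast.
counts = [4, 3, 5, 2, 4, 3, 4, 5, 3, 4, 2, 3,
          4, 5, 3, 4, 2, 3, 4, 5, 180, 12, 6, 4]

print(lookup_spikes(counts))
```

A simple z-score cut-off like this is the crudest possible spike detector, but it captures the point made in the text: usage logs let researchers connect a burst of look-ups to the real-world event that triggered it, something a printed dictionary could never reveal.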
However, the question is how these lexical needs, and the questions people have, can be most
effectively matched with existing lexicographic resources.
In the view of the author of this chapter, empirical data from dictionary usage studies can
contribute essential data to analyse the needs of potential users, to bring them together with
existing lexicographic tools and to point the way to future developments.
The latter is particularly important, because the dictionary landscape will continue to change:
while it is fairly uncontroversial that people will continue to have lexical needs in natural
communication as well as in more or less artificial learning contexts, it is much less certain that
dictionaries will persist for much longer, at least in the form we know them today.
Rather, it seems likely that dictionaries will increasingly become absorbed into more general
digital tools designed to provide assistance with communication, expression, and information
searching. Therefore, dictionary usage research in the Internet era still has a lot of empirical
work to do.
CHAPTER 8
8/1- we are not used to reading whole entries through in dictionaries. We are usually
searching for some specific information, and once we have found it we ignore the rest. This
exercise should have given you a good impression not only of how your dictionary handles the
various types of lexical information, but also of some of the considerations that lexicographers
have to weigh up in deciding how to present the information.
8/2- Pronunciation: the full IPA transcription, including primary and secondary stress, is
given for the verb headword and for the adjective, which has a different stress pattern from
the verb, resulting in a different final vowel. The run-on adverb and noun have their stress
pattern indicated in the spelling.
Morphology: the verb and adjective are shown to have the same form, the adverb and noun
derivations are given as run-ons. No inflectional information is given.
Syntax: word classes are given for all forms. Senses 1, 3 and 4 of the verb are marked as
‘intransitive’; by implication sense 2 may be used both intransitively and transitively. The
typical following prepositions are given for senses 1 and 2, and the examples for sense 2
illustrate the absence and presence of the preposition between.
8/3- The CED tends to provide a greater differentiation of senses; the LDEL uses sub-
divisions extensively. The same basic meanings are identified, but there is still considerable
variation.
8/4- There are arguments both for including and for excluding etymology from
dictionaries. You might ask yourself and others how often you look up etymological
information, except for academic purposes, and then you would most likely go to an historical
dictionary or a specialist etymological dictionary. Ellegard argues that etymology should be
included in learners’ dictionaries, from which it is currently excluded, on the basis that it may
provide useful clues that will help in learning new vocabulary items.
8/7- You have probably found that overall your dictionary provides a fair lexical
description, but that it has a few gaps (especially in grammatical and usage information), and
perhaps even some superfluous information.