
Language Analysis and Understanding

The document discusses semantics and its applications in natural language processing. It covers basic notions of semantics including meaning as truth conditions. It also discusses practical applications that require semantic processing like question answering and language generation. The development of semantic theory is discussed, moving from first-order logic to richer logical languages to better model natural languages.


See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/2336246

Language Analysis and Understanding

Article · January 1997 · Source: CiteSeer

3 authors, including Annie Zaenen (Stanford University) and Hans Uszkoreit (Deutsches Forschungszentrum für Künstliche Intelligenz).

All content following this page was uploaded by Hans Uszkoreit on 21 June 2016.


Chapter 3

Language Analysis and Understanding

(The following section is taken from Chapter 3, “Language Analysis and Understanding,” of the book Survey of the State of the Art in Human Language Technology.)

3.5 Semantics

Stephen G. Pulman
SRI International, Cambridge, UK
and University of Cambridge Computer Laboratory, Cambridge, UK

3.5.1 Basic Notions of Semantics

A perennial problem in semantics is the delineation of its subject matter. The term meaning can be used in a variety
of ways, and only some of these correspond to the usual understanding of the scope of linguistic or computational
semantics. We shall take the scope of semantics to be restricted to the literal interpretations of sentences in a
context, ignoring phenomena like irony, metaphor, or conversational implicature (Grice, 1975; Levinson, 1983).
A standard assumption in computationally oriented semantics is that knowledge of the meaning of a sentence
can be equated with knowledge of its truth conditions: that is, knowledge of what the world would be like if the
sentence were true. This is not the same as knowing whether a sentence is true, which is (usually) an empirical
matter, but knowledge of truth conditions is a prerequisite for such verification to be possible. Meaning as truth
conditions needs to be generalized somewhat for the case of imperatives or questions, but is a common ground
among all contemporary theories, in one form or another, and has an extensive philosophical justification, e.g.,
Davidson (1969); Davidson (1973).
[1] This survey draws in part on material prepared for the European Commission LRE Project 62-051, FraCaS: A Framework for Computational Semantics. I am grateful to the other members of the project for their comments and contributions.

A semantic description of a language is some finitely stated mechanism that allows us to say, for each sentence
of the language, what its truth conditions are. Just as for grammatical description, a semantic theory will character-
ize complex and novel sentences on the basis of their constituents: their meanings, and the manner in which they
are put together. The basic constituents will ultimately be the meanings of words and morphemes. The modes of
combination of constituents are largely determined by the syntactic structure of the language. In general, to each
syntactic rule combining some sequence of child constituents into a parent constituent, there will correspond some
semantic operation combining the meanings of the children to produce the meaning of the parent.
A corollary of knowledge of the truth conditions of a sentence is knowledge of what inferences can be legiti-
mately drawn from it. Valid inference is traditionally within the province of logic (as is truth) and mathematical
logic has provided the basic tools for the development of semantic theories. One particular logical system, first
order predicate calculus (FOPC), has played a special role in semantics (as it has in many areas of computer science
and artificial intelligence). FOPC can be seen as a small model of how to develop a rigorous semantic treatment for
a language, in this case an artificial one developed for the unambiguous expression of some aspects of mathematics.
The set of sentences or well formed formulae of FOPC are specified by a grammar, and a rule of semantic interpre-
tation is associated with each syntactic construct permitted by this grammar. The interpretations of constituents are
given by associating them with set-theoretic constructions (their denotation) from a set of basic elements in some
universe of discourse. Thus, for any of the infinitely large set of FOPC sentences we can give a precise description
of its truth conditions, with respect to that universe of discourse. Furthermore, we can give a precise account of
the set of valid inferences to be drawn from some sentence or set of sentences, given these truth conditions, or
(equivalently, in the case of FOPC) given a set of rules of inference for the logic.
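The set-theoretic interpretation described above can be made concrete with a toy evaluator over a finite universe of discourse. Everything below, including the tuple encoding of formulas and the model format, is an illustrative assumption, not notation from this survey:

```python
# A minimal sketch of FOPC interpretation over a finite universe.
# Formulas are nested tuples; a predicate denotes a set of tuples
# of individuals drawn from the universe of discourse.

def evaluate(formula, model, assignment=None):
    """Return the truth value of a formula in a model.

    model: {"universe": iterable, "predicates": {name: set of tuples}}
    """
    assignment = assignment or {}
    op = formula[0]
    if op == "pred":                      # e.g. ("pred", "computer", "x")
        _, name, *vars_ = formula
        args = tuple(assignment[v] for v in vars_)
        return args in model["predicates"][name]
    if op == "not":
        return not evaluate(formula[1], model, assignment)
    if op == "and":
        return (evaluate(formula[1], model, assignment)
                and evaluate(formula[2], model, assignment))
    if op == "exists":                    # ("exists", "x", body)
        _, var, body = formula
        return any(evaluate(body, model, {**assignment, var: d})
                   for d in model["universe"])
    if op == "forall":
        _, var, body = formula
        return all(evaluate(body, model, {**assignment, var: d})
                   for d in model["universe"])
    raise ValueError(f"unknown operator: {op}")

model = {"universe": {"c1", "c2"},
         "predicates": {"computer": {("c1",), ("c2",)},
                        "crashed": {("c1",)}}}
some_computer_crashed = ("exists", "x",
                         ("and", ("pred", "computer", "x"),
                                 ("pred", "crashed", "x")))
print(evaluate(some_computer_crashed, model))  # True
```

Given a model, the evaluator yields a precise truth value for any formula, and (over finite universes) valid inference can be checked by evaluating in every model, which is the sense in which FOPC serves as a small model of rigorous semantic description.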

3.5.2 Practical Applications of Semantics

Some natural language processing tasks (e.g., message routing, textual information retrieval, translation) can be
carried out quite well using statistical or pattern matching techniques that do not involve semantics in the sense
assumed above. However, performance on some of these tasks improves if semantic processing is involved. (Not
enough progress has been made to see whether this is true for all of these tasks.)
Some tasks, however, cannot be carried out at all without semantic processing of some form. One important
example application is that of database query, of the type chosen for the Air Travel Information Service (ATIS)
task (DARPA, 1989). For example, if a user asks, “Does every flight from London to San Francisco stop over in
Reykjavik?” then the system needs to be able to deal with some simple semantic facts. Relational databases do not
store propositions of the form every X has property P and so a logical inference from the meaning of the sentence
is required. In this case, every X has property P is equivalent to there is no X that does not have property P and
a system that knows this will also therefore know that the answer to the question is no if a non-stopping flight is
found and yes otherwise.
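The inference pattern just described can be sketched as a query procedure over a toy relational table. The flight data and field names below are invented for illustration; since every X has property P is equivalent to there is no X lacking property P, the system answers no exactly when a counterexample row is found:

```python
# Illustrative data only: a flat relational table of flights.
flights = [
    {"id": "FL1", "origin": "London", "dest": "San Francisco",
     "stopover": "Reykjavik"},
    {"id": "FL2", "origin": "London", "dest": "San Francisco",
     "stopover": None},                  # a non-stopping flight
]

def every_flight_stops_in(origin, dest, city):
    """Answer a universally quantified question by searching for a
    counterexample; with no relevant flights the answer is vacuously yes."""
    relevant = [f for f in flights
                if f["origin"] == origin and f["dest"] == dest]
    counterexample = next((f for f in relevant if f["stopover"] != city),
                          None)
    return "no" if counterexample else "yes"

print(every_flight_stops_in("London", "San Francisco", "Reykjavik"))  # no
```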
Any kind of generation of natural language output (e.g., summaries of financial data, traces of KBS system
operations) usually requires semantic processing. Generation requires the construction of an appropriate meaning
representation, and then the production of a sentence or sequence of sentences which express the same content in
a way that is natural for a reader to comprehend, e.g., McKeown, Kukich, et al. (1994). To illustrate, if a database
lists a 10 a.m. flight from London to Warsaw on the 1st–14th, and 16th–30th of November, then it is more helpful
to answer the question What days does that flight go? by Every day except the 15th instead of a list of 30 days of
the month. But to do this the system needs to know that the semantic representations of the two propositions are
equivalent.
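A rough sketch of the generation decision in this example, assuming a simple set-based representation of the flight's schedule (the ordinal formatting is deliberately naive and the helper name is invented):

```python
def describe_days(days, month_length=30):
    """Report a set of days either directly or via its complement,
    whichever description is shorter."""
    all_days = set(range(1, month_length + 1))
    missing = sorted(all_days - set(days))
    if not missing:
        return "every day"
    if len(missing) < len(days):
        # Naive ordinal suffix: adequate for "15th", wrong for "1st" etc.
        gaps = ", ".join(f"the {d}th" for d in missing)
        return f"every day except {gaps}"
    return ", ".join(str(d) for d in sorted(days))

november = list(range(1, 15)) + list(range(16, 31))   # 1st-14th, 16th-30th
print(describe_days(november))  # every day except the 15th
```

The semantic work happens before any words are chosen: the two descriptions denote the same set of days, and the generator must recognize that equivalence to pick the shorter one.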

3.5.3 Development of Semantic Theory

It is instructive, though not historically accurate, to see the development of contemporary semantic theories as
motivated by the deficiencies that are uncovered when one tries to take the FOPC example further as a model
for how to do natural language semantics. For example, the technique of associating set theoretic denotations
directly with syntactic units is clear and straightforward for the artificial FOPC example. But when a similar
programme is attempted for a natural language like English, whose syntax is vastly more complicated, the statement
of the interpretation clauses becomes in practice extremely baroque and unwieldy, especially so when sentences
that are semantically but not syntactically ambiguous are considered (Cooper, 1983). For this reason, in most
semantic theories, and in all computer implementations, the interpretation of sentences is given indirectly. A
syntactically disambiguated sentence is first translated into an expression of some artificial logical language, where
this expression in its turn is given an interpretation by rules analogous to the interpretation rules of FOPC. This
process factors out the two sources of complexity whose product makes direct interpretation cumbersome: reducing
syntactic variation to a set of common semantic constructs; and building the appropriate set-theoretical objects to
serve as interpretations.
The first large scale semantic description of this type was developed by Montague (1973). Montague made
a further departure from the model provided by FOPC in using a more powerful logic (intensional logic) as an
intermediate representation language. All later approaches to semantics follow Montague in using more powerful
logical languages: while FOPC captures an important range of inferences (involving, among others, words like
every, and some as in the example above), the range of valid inference patterns in natural languages is far wider.
Some of the constructs that motivate the use of richer logics are sentences involving concepts like necessity or
possibility and propositional attitude verbs like believe or know, as well as the inference patterns associated with
other English quantifying expressions like most or more than half, which cannot be fully captured within FOPC
(Barwise & Cooper, 1981).
For Montague, and others working in frameworks descended from that tradition (among others, Partee, e.g.,
Partee, 1986; Krifka, e.g., Krifka, 1989; and Groenendijk and Stokhof, e.g., Groenendijk & Stokhof, 1984),
the intermediate logical language was merely a matter of convenience which could, in principle, always be
dispensed with, provided the principle of compositionality was observed (i.e., the meaning of a sentence is a
function of the meanings of its constituents, a principle attributed to Frege; Frege, 1892). For other approaches
(e.g., Discourse Representation Theory, Kamp, 1981), an intermediate level of representation is a necessary
component of the theory, justified on psychological grounds, or in terms of the necessity for explicit reference
to representations in order to capture the meanings of, for example, pronouns or other referentially dependent
items, elliptical sentences, or sentences ascribing mental states (beliefs, hopes, intentions). In the case of
computational implementations, of course, the issue of the dispensability of representations does not arise: for
practical purposes, some kind of meaning representation is a sine qua non for any kind of computing.

3.5.4 Discourse Representation Theory

Discourse Representation Theory (DRT) (Kamp, 1981; Kamp & Reyle, 1993), as the name implies, has taken the
notion of an intermediate representation as an indispensable theoretical construct, and, as also implied, sees the
main unit of description as being a discourse rather than sentences in isolation. One of the things that makes a
sequence of sentences constitute a discourse is their connectivity with each other, as expressed through the use of
pronouns and ellipsis or similar devices. This connectivity is mediated through the intermediate representation,
however, and cannot be expressed without it. The kind of example that is typically used to illustrate this is the
following:
A computer developed a fault.

A simplified first order representation of the meaning of this sentence might be:

exists(X, computer(X) and develop_a_fault(X))

(There is a computer X and X developed a fault.) This is logically equivalent to:

not(forall(X, not(computer(X) and develop_a_fault(X))))

(It isn’t the case that every computer didn’t develop a fault.) However, whereas the first sentence can be continued
thus:
A computer developed a fault.
It was quickly repaired.
—its logically equivalent one cannot be:
It isn’t the case that every computer didn’t develop a fault.
It was quickly repaired.
Thus, the form of the representation has linguistic consequences. DRT has developed an extensive formal
description of a variety of phenomena such as this, while also paying careful attention to the logical and com-
putational interpretation of the intermediate representations proposed. Kamp and Reyle (1993) contains detailed
analyses of aspects of noun phrase reference, propositional attitudes, tense and aspect, and many other phenomena.
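The role of the intermediate representation can be caricatured with a toy discourse representation structure: a growing list of discourse referents plus conditions on them. The resolution strategy (most recent referent) and all names below are illustrative simplifications, not Kamp and Reyle's actual formulation:

```python
# A toy DRS: discourse referents plus conditions. An indefinite like
# "a computer" introduces a referent; a later pronoun is resolved by
# linking to an accessible referent already in the structure, which is
# what the logically equivalent universal paraphrase fails to provide.

drs = {"referents": [], "conditions": []}

def assert_indefinite(drs, predicate):
    """'A computer developed a fault.' introduces a fresh referent."""
    ref = f"x{len(drs['referents'])}"
    drs["referents"].append(ref)
    drs["conditions"].append((predicate, ref))
    return ref

def resolve_pronoun(drs):
    """'It' links to the most recently introduced accessible referent."""
    if not drs["referents"]:
        raise ValueError("no accessible referent for pronoun")
    return drs["referents"][-1]

x = assert_indefinite(drs, "computer_that_developed_a_fault")
it = resolve_pronoun(drs)                     # "It was quickly repaired."
drs["conditions"].append(("quickly_repaired", it))
print(it == x)  # True: the pronoun is bound to the computer referent
```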

3.5.5 Dynamic Semantics

Dynamic semantics (e.g., Groenendijk & Stokhof, 1991a; Groenendijk & Stokhof, 1991b) takes the view that the
standard truth-conditional view of sentence meaning deriving from the paradigm of FOPC does not do sufficient
justice to the fact that uttering a sentence changes the context it was uttered in. Deriving inspiration in part from
work on the semantics of programming languages, dynamic semantic theories have developed several variations
on the idea that the meaning of a sentence is to be equated with the changes it makes to a context.
Update semantics (e.g., Veltman, 1985; van Eijck & de Vries, 1992) approaches have been developed to model
the effect of asserting a sequence of sentences in a particular context. In general, the order of such a sequence has
its own significance. A sequence like:
Someone’s at the door. Perhaps it’s John. It’s Mary!
is coherent, but not all permutations of it would be:
Someone’s at the door. It’s Mary. Perhaps it’s John.
Recent strands of this work make connections with the artificial intelligence literature on truth maintenance and
belief revision (e.g., Gärdenfors, 1990).
Dynamic predicate logic (Groenendijk & Stokhof, 1991a; Groenendijk & Stokhof, 1990) extends the interpre-
tation clauses for FOPC (or richer logics) by allowing assignments of denotations to subexpressions to carry over
from one sentence to its successors in a sequence. This means that dependencies that are difficult to capture in
FOPC or other non-dynamic logics, such as that between someone and it in:
Someone’s at the door. It’s Mary.
can be correctly modeled, without sacrificing any of the other advantages that traditional logics offer.
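One common way to model the update idea, sketched below under simplifying assumptions, treats a context as a set of candidate worlds: an assertion eliminates the worlds where it is false, and perhaps acts as a consistency test on what remains. This is why the sequence is coherent in one order but not the other:

```python
# Sketch of update semantics over a finite set of candidate worlds.
# World names and propositions are invented for illustration.

def assert_update(context, proposition):
    """Asserting p eliminates the worlds where p is false."""
    return {w for w in context if proposition(w)}

def perhaps(context, proposition):
    """'Perhaps p' tests that p is still consistent with the context."""
    return any(proposition(w) for w in context)

worlds = {"john_at_door", "mary_at_door"}

# "Someone's at the door. Perhaps it's John." -- coherent:
ctx = assert_update(worlds, lambda w: w.endswith("at_door"))
print(perhaps(ctx, lambda w: w == "john_at_door"))   # True

# "...It's Mary. Perhaps it's John." -- the test now fails:
ctx = assert_update(ctx, lambda w: w == "mary_at_door")
print(perhaps(ctx, lambda w: w == "john_at_door"))   # False
```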

3.5.6 Situation Semantics and Property Theory

One of the assumptions of most semantic theories descended from Montague is that information is total, in the
sense that in every situation, a proposition is either true or it is not. This enables propositions to be identified
with the set of situations (or possible worlds) in which they are true. This has many technical conveniences, but is
descriptively incorrect, for it means that any proposition conjoined with a tautology (a logical truth) will remain the
same proposition according to the technical definition. But this is clearly wrong: all cats are cats is a tautology, but
The computer crashed, and The computer crashed and all cats are cats are clearly different propositions (reporting
the first is not the same as reporting the second, for example).
Situation theory (Barwise & Perry, 1983) has attempted to rework the whole logical foundation underlying
the more traditional semantic theories in order to arrive at a satisfactory formulation of the notion of a partial
state of the world or situation, and in turn, a more satisfactory notion of proposition. This reformulation has
also attempted to generalize the logical underpinnings away from previously accepted restrictions (for example,
restrictions prohibiting sets containing themselves, and other apparently paradoxical notions) in order to be able to
explore the ability of language to refer to itself in ways that have previously resisted a coherent formal description
(Barwise & Etchemendy, 1987).
Property theory (Turner, 1988; Turner, 1992) has also been concerned to rework the logical foundations pre-
supposed by semantic theory, motivated by similar phenomena.
In general, it is fair to say that, with a few exceptions, the contribution of dynamic semantics, situation theory,
and property theory has so far been less in the analysis of new semantic phenomena than in the exploration of
more cognitively and computationally plausible ways of expressing insights originating within Montague-derived
approaches. However, these new frameworks are now making it possible to address data that resisted any formal
account by more traditional theories.

3.5.7 Implementations

Whereas there are beginning to be quite a number of systems displaying wide syntactic coverage, there are very
few that are able to provide corresponding semantic coverage. Almost all current large scale implementations
of systems with a semantic component are inspired to a greater or lesser extent by the work of Montague (e.g.,
Bates, Bobrow, et al., 1994; Allen, Schubert, et al., 1995; Alshawi, 1992). This reflects the fact that the majority
of descriptive work by linguists is expressed within some form of this framework, and also the fact that its compu-
tational properties are better understood.
However, Montague’s own work gave only a cursory treatment of a few context-dependent phenomena like
pronouns, and none at all of phenomena like ellipsis. In real applications, such constructs are very common and
all contemporary systems supplement the representations made available by the base logic with constructs for
representing the meaning of these context-dependent constructions. It is computationally important to be able to
carry out at least some types of processing directly with these underspecified representations: i.e., representations
in which the contextual contribution to meaning has not yet been made explicit, in order to avoid a combinatorial
explosion of potential ambiguities. One striking motivation for underspecification is the case of quantifying noun
phrases, for these can give rise to a high degree of ambiguity if treated in Montague’s fashion. For example, every
keyboard is connected to a computer is interpretable as involving either a single computer or a possibly different
one for each keyboard, in the absence of a context to determine which is the plausible reading: sentences do not
need to be much more complex for a large number of possibilities to arise.
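A back-of-the-envelope sketch of why spelling readings out is costly: an underspecified form containing n quantifiers can stand for up to n! fully specified scope orders, so enumerating them explodes quickly. The quantifier labels below are illustrative:

```python
# Enumerate the fully specified scope orders that a single
# underspecified representation stands for.
from itertools import permutations

def scopings(quantifiers):
    """Each permutation is one fully scoped reading."""
    return [" > ".join(order) for order in permutations(quantifiers)]

readings = scopings(["every(keyboard)", "a(computer)"])
print(readings)
# ['every(keyboard) > a(computer)', 'a(computer) > every(keyboard)']
# i.e., a possibly different computer per keyboard vs. one shared computer
```

With only a handful of quantifying expressions in a sentence the count reaches dozens of readings, which is why processing directly on the underspecified form, and resolving scope only when the context demands it, matters computationally.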
One of the most highly developed of the implemented approaches addressing these issues is the quasi-logical
form developed in the Core Language Engine (CLE) (Alshawi, 1990; Alshawi, 1992), a representation which allows
for meanings to be of varying degrees of independence of a context. This makes it possible for the same
representation to be used in applications like translation, which can often be carried out without reference to con-
text, as well as in database query, where the context-dependent elements must be resolved in order to know exactly
which query to submit to the database. The ability to operate with underspecified representations of this type is
essential for computational tractability, since the task of spelling out all of the possible alternative fully specified
interpretations for a sentence and then selecting between them would be computationally intensive even if it were
always possible in practice.

3.5.8 Future Directions

Currently, the most pressing needs for semantic theory are to find ways of achieving wider and more robust cover-
age of real data. This will involve progress in several directions: (i) Further exploration of the use of underspecified
representations so that some level of semantic processing can be achieved even where complete meaning represen-
tations cannot be constructed (either because of lack of coverage or inability to carry out contextual resolution).
(ii) Closer cooperation with work in lexicon construction. The tradition in semantics has been to assume that word
meanings can by and large simply be plugged in to semantic structures. This is a convenient and largely correct
assumption when dealing with structures like every X is P, but becomes less tenable as more complex phenomena
are examined. However, the relevant semantic properties of individual words or groups of words are seldom to be
found in conventional dictionaries and closer cooperation between semanticists and computationally aware lexi-
cographers is required. (iii) More integration between sentence or utterance level semantics and theories of text or
dialogue structure. Recent work in semantics has shifted emphasis away from the purely sentence-based approach,
but the extent to which the interpretations of individual sentences can depend on dialogue or text settings, or on the
goals of speakers, is much greater than had been suspected.

3.6 Chapter References

ACL (1983). Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Cambridge,
Massachusetts. Association for Computational Linguistics.
ACL (1990). Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, Pittsburgh,
Pennsylvania. Association for Computational Linguistics.
ACL (1992). Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, University
of Delaware. Association for Computational Linguistics.
ACL (1993). Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, Ohio State
University. Association for Computational Linguistics.
Ades, A. E. and Steedman, M. J. (1982). On the order of words. Linguistics and Philosophy, 4(4):517–558.
Allen, J., Hunnicutt, M. S., and Klatt, D. (1987). From text to speech—the MITalk system. MIT Press, Cambridge,
Massachusetts.
Allen, J. F., Schubert, L. K., Ferguson, G., Heeman, P., Hwang, C. H., Kato, T., Light, M., Martin, N., Miller, B.,
Poesio, M., and Traum, D. R. (1995). The TRAINS project: a case study in building a conversational planning
agent. Journal of Experimental and Theoretical AI.
Alshawi, H. (1990). Resolving quasi logical form. Computational Linguistics, 16:133–144.

Alshawi, H., editor (1992). The Core Language Engine. MIT Press, Cambridge, Massachusetts.

Alshawi, H., Arnold, D. J., Backofen, R., Carter, D. M., Lindop, J., Netter, K., Pulman, S. G., and Tsujii, J.-I.
(1991). Rule formalism and virtual machine design study. Technical Report ET6/1, CEC.

Alshawi, H. and Carter, D. (1994). Training and scaling preference functions for disambiguation. Computational
Linguistics, 20:635–648.

ANLP (1994). Proceedings of the Fourth Conference on Applied Natural Language Processing, Stuttgart, Ger-
many. ACL, Morgan Kaufmann.

Antworth, E. L. (1990). PC-KIMMO: a two-level processor for morphological analysis. Technical Report Occa-
sional Publications in Academic Computing No. 16, Summer Institute of Linguistics, Dallas, Texas.

Appel, A. W. and Jacobson, G. J. (1988). The world’s fastest scrabble program. Communications of the ACM,
31(5):572–578.

ARPA (1993). Proceedings of the 1993 ARPA Human Language Technology Workshop, Princeton, New Jersey.
Advanced Research Projects Agency, Morgan Kaufmann.

Atkins, B. T. S. and Levin, B. (1992). Admitting impediments. In Lexical Acquisition: Using On-Line Resources
to Build a Lexicon. Lawrence Erlbaum, Hillsdale, New Jersey.

Bahl, L. R., Jelinek, F., and Mercer, R. L. (1983). A maximum likelihood approach to continuous speech recogni-
tion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(2):179–190.

Baker, J. K. (1979). Trainable grammars for speech recognition. In Wolf, J. J. and Klatt, D. H., editors, Speech
communication papers presented at the 97th Meeting of the Acoustical Society of America, pages 547–550.
Acoustical Society of America, MIT Press.

Barwise, J. and Cooper, R. (1981). Generalized quantifiers and natural language. Linguistics and Philosophy,
4:159–219.

Barwise, J. and Etchemendy, J. (1987). The Liar. Chicago University Press, Chicago.

Barwise, J. and Perry, J. (1983). Situations and Attitudes. MIT Press, Cambridge, Massachusetts.

Bates, M., Bobrow, R., Ingria, R., and Stallard, D. (1994). The DELPHI natural language understanding system. In
Proceedings of the Fourth Conference on Applied Natural Language Processing, pages 132–137, Stuttgart,
Germany. ACL, Morgan Kaufmann.

Baum, L. E. and Petrie, T. (1966). Statistical inference for probabilistic functions of finite state Markov chains.
Annals of Mathematical Statistics, 37:1554–1563.

Berwick, R. C., Abney, S. P., and Tenny, C., editors (1992). Principle-Based Parsing: Computation and Psycholin-
guistics. Kluwer, Dordrecht, The Netherlands.

Black, E., Garside, R., and Leech, G., editors (1993). Statistically-Driven Computer Grammars of English: The
IBM/Lancaster Approach. Rodopi, Amsterdam, Atlanta.

Black, E., Jelinek, F., Lafferty, J., Magerman, D. M., Mercer, D., and Roukos, S. (1993). Towards history-based
grammars: Using richer models for probabilistic parsing. In Proceedings of the 31st Annual Meeting of the As-
sociation for Computational Linguistics, pages 31–37, Ohio State University. Association for Computational
Linguistics.

Black, E., Lafferty, J., and Roukos, S. (1992). Development and evaluation of a broad-coverage probabilistic
grammar of English-language computer manuals. In Proceedings of the 30th Annual Meeting of the Association
for Computational Linguistics, pages 185–192, University of Delaware. Association for Computational Lin-
guistics.
Bod, R. (1993). Using an annotated corpus as a stochastic grammar. In Proceedings of the Sixth Conference of the
European Chapter of the Association for Computational Linguistics, pages 37–44, Utrecht University, The
Netherlands. European Chapter of the Association for Computational Linguistics.
Booth, T. L. and Thompson, R. A. (1973). Applying probability measures to abstract languages. IEEE Transactions
on Computers, C-22(5):442–450.
Bouma, G., Koenig, E., and Uszkoreit, H. (1988). A flexible graph-unification formalism and its application to
natural-language processing. IBM Journal of Research and Development.
Bresnan, J., editor (1982). The Mental Representation of Grammatical Relations. MIT Press, Cambridge, Mas-
sachusetts.
Briscoe, E. J. (1992). Lexical issues in natural language processing. In Klein, E. and Veltman, F., editors, Natural
Language and Speech, pages 39–68. Springer-Verlag.
Briscoe, E. J. (1994). Prospects for practical parsing: robust statistical techniques. In de Haan, P. and Oostdijk, N.,
editors, Corpus-based Research into Language: A Festschrift for Jan Aarts, pages 67–95. Rodopi, Amsterdam.
Briscoe, E. J. and Carroll, J. (1993). Generalized probabilistic LR parsing of natural language (corpora) with
unification-based grammars. Computational Linguistics, 19(1):25–59.
Briscoe, E. J. and Waegner, N. (1993). Undergeneration and robust parsing. In Meijs, W., editor, Proceedings of
the ICAME Conference, Amsterdam. Rodopi.
Carpenter, B. (1992). ALE—the attribute logic engine user’s guide. Technical report, Carnegie Mellon University,
Pittsburgh, Pennsylvania.
Carpenter, B. (1992). The Logic of Typed Feature Structures, volume 32 of Cambridge Tracts in Theoretical
Computer Science. Cambridge University Press.
Church, K. (1988). A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the
Second Conference on Applied Natural Language Processing, pages 136–143, Austin, Texas. ACL.
COLING (1994). Proceedings of the 15th International Conference on Computational Linguistics, Kyoto, Japan.
Cooper, R. (1983). Quantification and Syntactic Theory. Reidel, Dordrecht.
Copestake, A. and Briscoe, E. J. (1992). Lexical operations in a unification based framework. In Pustejovsky, J.
and Bergler, S., editors, Lexical Semantics and Knowledge Representation. Springer-Verlag, Berlin.
Cutting, D., Kupiec, J., Pedersen, J., and Sibun, P. (1992). A practical part-of-speech tagger. In Proceedings of the
Third Conference on Applied Natural Language Processing, pages 133–140, Trento, Italy.
Dagan, I., Markus, S., and Markovitch, S. (1993). Contextual word similarity and estimation from sparse data.
In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 164–171,
Ohio State University. Association for Computational Linguistics.
DARPA (1989). Proceedings of the Second DARPA Speech and Natural Language Workshop, Cape Cod, Mas-
sachusetts. Defense Advanced Research Projects Agency.

DARPA (1991). Proceedings of the Fourth DARPA Speech and Natural Language Workshop, Pacific Grove, Cali-
fornia. Defense Advanced Research Projects Agency, Morgan Kaufmann.
Davidson, D. (1969). Truth and meaning. In Davis, J. W. et al., editors, Philosophical Logic, pages 1–20. Hingham.
Davidson, D. (1973). In defense of Convention T. In Leblanc, H., editor, Truth, Syntax and Modality, pages 76–85.
North Holland.
de Marcken, C. (1990). Parsing the LOB corpus. In Proceedings of the 28th Annual Meeting of the Associa-
tion for Computational Linguistics, pages 243–251, Pittsburgh, Pennsylvania. Association for Computational
Linguistics.
DeRose, S. J. (1988). Grammatical category disambiguation by statistical optimization. Computational Linguis-
tics, 14(1):31–39.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM
algorithm. Journal of the Royal Statistical Society, 39(1):1–38.
EACL (1993). Proceedings of the Sixth Conference of the European Chapter of the Association for Computational
Linguistics, Utrecht University, The Netherlands. European Chapter of the Association for Computational
Linguistics.
Earley, J. C. (1968). An Efficient Context-Free Parsing Algorithm. PhD thesis, Computer Science Department,
Carnegie-Mellon University.
Earley, J. C. (1970). An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94–102.
Elworthy, D. (1993). Part-of-speech tagging and phrasal tagging. Technical report, University of Cambridge
Computer Laboratory, Cambridge, England.
Emele, M. and Zajac, R. (1990). Typed unification grammars. In Proceedings of the 28th Annual Meeting of the
Association for Computational Linguistics, Pittsburgh, Pennsylvania. Association for Computational Linguis-
tics.
Flickinger, D. (1987). Lexical Rules in the Hierarchical Lexicon. PhD thesis, Stanford University.
Fong, S. (1992). The computational implementation of principle-based parsers. In Berwick, R. C., Abney, S. P.,
and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 65–82. Kluwer,
Dordrecht, The Netherlands.
Frege, G. (1892). Über Sinn und Bedeutung (translated as ‘On sense and reference’). In Geach and Black, editors,
Translations from the Philosophical Writings of Gottlob Frege. Blackwell, Oxford. Translation published 1960.
Fujisaki, T., Jelinek, F., Cocke, J., Black, E., and Nishino, T. (1989). A probabilistic parsing method for sentence
disambiguation. In Proceedings of the International Workshop on Parsing Technologies, Pittsburgh.
Gärdenfors, P. (1990). The dynamics of belief systems: Foundations vs. coherence theories. Revue Internationale
de Philosophie, 172:24–46.
Garside, R., Leech, G., and Sampson, G. (1987). Computational Analysis of English: A Corpus-based Approach.
Longman, London.
Gazdar, G. (1987). Linguistic applications of default inheritance mechanisms. In Whitelock, P. H., Somers, H.,
Bennet, P., Johnson, R., and Wood, M. M., editors, Linguistic Theory and Computer Applications, pages
37–68. Academic Press, London.
10 Chapter 3: Language Analysis and Understanding

Graham, S. L., Harrison, M. A., and Ruzzo, W. L. (1980). An improved context-free recognizer. ACM Transactions
on Programming Languages and Systems, 2(3):415–462.
Greene, B. B. and Rubin, G. M. (1971). Automatic grammatical tagging of English. Technical report, Brown
University.
Grice, H. P. (1975). Logic and conversation. In Cole, P. and Morgan, J. L., editors, Syntax and Semantics, Vol. 3: Speech Acts. Academic Press, New York.
Groenendijk, J. and Stokhof, M. (1984). On the semantics of questions and the pragmatics of answers. In
Landman, F. and Veltman, F., editors, Varieties of Formal Semantics, pages 143–170. Foris, Dordrecht.
Groenendijk, J. and Stokhof, M. (1990). Dynamic Montague grammar. In Kalman, L. and Polos, L., editors, Papers from the Second Symposium on Logic and Language, pages 3–48. Akadémiai Kiadó, Budapest.
Groenendijk, J. and Stokhof, M. (1991a). Dynamic predicate logic. Linguistics and Philosophy, 14:39–100.
Groenendijk, J. and Stokhof, M. (1991b). Two theories of dynamic semantics. In van Eijck, J., editor, Logics in
AI—European Workshop JELIA ’90, Springer Lecture Notes in Artificial Intelligence, pages 55–64. Springer-
Verlag, Berlin.
Haddock, J. N., Klein, E., and Morrill, G. (1987). Unification Categorial Grammar, Unification Grammar and
Parsing. University of Edinburgh.
Hindle, D. (1983). Deterministic parsing of syntactic nonfluencies. In Proceedings of the 21st Annual Meeting of
the Association for Computational Linguistics, pages 123–128, Cambridge, Massachusetts. Association for
Computational Linguistics.
Hindle, D. (1983). User manual for Fidditch, a deterministic parser. Technical Memorandum 7590-142, Naval Research Laboratory.
Hindle, D. (1989). Acquiring disambiguation rules from text. In Proceedings of the 27th Annual Meeting of the
Association for Computational Linguistics, pages 118–125, Vancouver, Canada.
Hindle, D. (1990). Noun classification from predicate-argument structures. In Proceedings of the 28th Annual
Meeting of the Association for Computational Linguistics, pages 268–275, Pittsburgh, Pennsylvania. Associ-
ation for Computational Linguistics.
Hindle, D. (1992). An analogical parser for restricted domains. In Proceedings of the Fifth DARPA Speech
and Natural Language Workshop, pages 150–154. Defense Advanced Research Projects Agency, Morgan
Kaufmann.
Hindle, D. (1993). A parser for text corpora. In Atkins, B. T. S. and Zampolli, A., editors, Computational
Approaches to the Lexicon. Oxford University Press.
Hindle, D. and Rooth, M. (1991). Structural ambiguity and lexical relations. In Proceedings of the 29th Annual
Meeting of the Association for Computational Linguistics, pages 229–236, Berkeley, California. Association
for Computational Linguistics.
Hobbs, J. R., Appelt, D., Bear, J., Israel, D., Kameyama, M., and Tyson, M. (1993). FASTUS: a system for
extracting information from text. In Proceedings of the 1993 ARPA Human Language Technology Workshop,
pages 133–137, Princeton, New Jersey. Advanced Research Projects Agency, Morgan Kaufmann.
Hobbs, J. R., Stickel, M., Appelt, D., and Martin, P. (1993). Interpretation as abduction. Artificial Intelligence,
63(1-2):69–142.
3.6 Chapter References 11

Hudson, R. (1990). English Word Grammar. Blackwell, Oxford, England.

Jackson, E., Appelt, D., Bear, J., Moore, R., and Podlozny, A. (1991). A template matcher for robust natural-
language interpretation. In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, pages
190–194, Pacific Grove, California. Defense Advanced Research Projects Agency, Morgan Kaufmann.

Jacobs, P. S. and Rau, L. F. (1993). Innovations in text interpretation. Artificial Intelligence, 63(1-2):143–191.

Järvinen, T. (1994). Annotating 200 million words. In Proceedings of the 15th International Conference on
Computational Linguistics, Kyoto, Japan.

Jelinek, F., Lafferty, J. D., and Mercer, R. L. (1990). Basic methods of probabilistic context free grammars.
Technical Report RC 16374 (72684), IBM, Yorktown Heights, NY 10598.

Jelinek, F., Mercer, R. L., and Roukos, S. (1992). Principles of lexical language modeling for speech recognition. In
Furui, S. and Sondhi, M. M., editors, Advances in Speech Signal Processing, pages 651–699. Marcel Dekker.

Jensen, K. (1991). A broad-coverage natural language analysis system. In Tomita, M., editor, Current Issues in
Parsing Technology. Kluwer Academic Press, Dordrecht.

Jensen, K. and Heidorn, G. (1993). Natural Language Processing: The PLNLP Approach. Kluwer Academic,
Boston, Dordrecht, London.

Jensen, K. and Heidorn, G. E. (1983). The fitted parse: 100% parsing capability in a syntactic grammar of English.
In Proceedings of the First Conference on Applied Natural Language Processing, pages 93–98.

Johnson, C. D. (1972). Formal Aspects of Phonological Description. Mouton, The Hague.

Johnson, M. (1992). Deductive parsing: The use of knowledge of language. In Berwick, R. C., Abney, S. P.,
and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 39–64. Kluwer,
Dordrecht, The Netherlands.

Jones, B. (1994). Can punctuation help parsing? In Proceedings of the 15th International Conference on Compu-
tational Linguistics, Kyoto, Japan.

Joshi, A. K. (1985). How much context-sensitivity is necessary for characterizing structural descriptions—Tree
adjoining grammars. In Dowty, D., Karttunen, L., and Zwicky, A., editors, Natural Language Processing—
Theoretical, Computational and Psychological Perspectives. Cambridge University Press, New York.

Joshi, A. K., Levy, L. S., and Takahashi, M. (1975). Tree adjunct grammars. Journal of Computer and System
Sciences, 10(1).

Joshi, A. K. and Schabes, Y. (1992). Tree-adjoining grammars and lexicalized grammars. In Tree Automata and Languages. Elsevier Science, Amsterdam.

Joshi, A. K., Vijay-Shanker, K., and Weir, D. J. (1991). The convergence of mildly context-sensitive grammati-
cal formalisms. In Sells, P., Shieber, S., and Wasow, T., editors, Foundational Issues in Natural Language
Processing. MIT Press.

Kamp, H. (1981). A theory of truth and semantic representation. In Groenendijk, J., Janssen, T., and Stokhof, M.,
editors, Formal Methods in the Study of Language. Mathematisch Centrum, Amsterdam.

Kamp, H. and Reyle, U. (1993). From Discourse to Logic. Kluwer, Dordrecht.


Kaplan, R. M. and Bresnan, J. (1982). Lexical-functional grammar: a formal system for grammatical represen-
tation. In Bresnan, J., editor, The Mental Representation of Grammatical Relations. MIT Press, Cambridge,
Massachusetts.
Kaplan, R. M. and Kay, M. (1994). Regular models of phonological rule systems. Computational Linguistics,
20(3):331–378. Written in 1980.
Karlgren, H., editor (1990). Proceedings of the 13th International Conference on Computational Linguistics,
Helsinki. ACL.
Karlsson, F., Voutilainen, A., Heikkilä, J., and Anttila, A., editors (1994). Constraint Grammar: A Language-
Independent Formalism for Parsing Unrestricted Text. Mouton de Gruyter, Berlin, New York.
Karttunen, L. (1989). Radical lexicalism. In Baltin, M. and Kroch, A., editors, Alternative Conceptions of Phrase
Structure. The University of Chicago Press, Chicago.
Karttunen, L. (1993). Finite-state lexicon compiler. Technical Report ISTL-NLTT-1993-04-02, Xerox PARC, Palo
Alto, California.
Karttunen, L. and Beesley, K. R. (1992). Two-level rule compiler. Technical Report ISTL-92-2, Xerox PARC, Palo
Alto, California.
Kay, M. (1979). Functional grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistic
Society, pages 142–158.
Kay, M. (1984). Functional unification grammar: a formalism for machine translation. In Proceedings of the 10th
International Conference on Computational Linguistics, Stanford University, California. ACL.
Kay, M. (1986). Algorithm schemata and data structures in syntactic processing. In Grosz, B. J., Sparck Jones,
K., and Webber, B. L., editors, Readings in Natural Language Processing, chapter I. 4, pages 35–70. Morgan
Kaufmann Publishers, Inc., Los Altos, California. Originally published as a Xerox PARC technical report,
1980.
Kenny, P., Hollan, R., Gupta, V. N., Lenning, M., Mermelstein, P., and O’Shaughnessy, D. (1993). A*-admissible
heuristics for rapid lexical access. IEEE Transactions on Speech and Audio Processing, 1(1):49–57.
Koskenniemi, K. (1983). Two-Level Morphology: a General Computational Model for Word-Form Recognition
and Production. PhD thesis, University of Helsinki. Publications of the Department of General Linguistics, University of Helsinki, No. 11.
Koskenniemi, K. (1990). Finite-state parsing and disambiguation. In Karlgren, H., editor, Proceedings of the 13th
International Conference on Computational Linguistics, volume 2, pages 229–232, Helsinki. ACL.
Krieger, H.-U. and Schaefer, U. (1994). TDL—a type description language of HPSG. Technical report, Deutsches
Forschungszentrum für Künstliche Intelligenz GmbH, Saarbrücken, Germany.
Krifka, M. (1989). Nominal reference, temporal constitution and quantification in event semantics. In Bartsch,
R., van Benthem, J., and van Emde-Boas, P., editors, Semantics and Contextual Expressions, pages 75–115.
Foris, Dordrecht.
Kupiec, J. (1992). Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language,
6.
Kwasny, S. and Sondheimer, N. (1981). Relaxation techniques for parsing ill-formed input. American Journal of
Computational Linguistics, 7(2):99–108.
Lafferty, J., Sleator, D., and Temperley, D. (1992). Grammatical trigrams: a probabilistic model of link grammar.
In Goldman, R., editor, AAAI Fall Symposium on Probabilistic Approaches to Natural Language Processing,
Cambridge, Massachusetts. AAAI Press.

Lambek, J. (1958). The mathematics of sentence structure. American Mathematical Monthly, 65:154–170.

Lang, B. (1974). Deterministic techniques for efficient non-deterministic parsers. In Loeckx, J., editor, Proceedings
of the 2nd Colloquium on Automata, Languages and Programming, pages 255–269, Saarbrücken, Germany.
Springer-Verlag.

Lang, B. (1989). A generative view of ill-formed input processing. In ATR Symposium on Basic Research for
Telephone Interpretation, Kyoto, Japan.

Lari, K. and Young, S. J. (1990). The estimation of stochastic context-free grammars using the Inside-Outside
algorithm. Computer Speech and Language, 4:35–56.

Leech, G. and Garside, R. (1991). Running a grammar factory: the production of syntactically analysed corpora
or ‘treebanks’. In Johansson, S. and Stenstrom, A., editors, English Computer Corpora: Selected Papers and
Bibliography. Mouton de Gruyter, Berlin.

Leech, G., Garside, R., and Bryant, M. (1994). The large-scale grammatical tagging of text. In Oostdijk, N. and
de Haan, P., editors, Corpus-Based Research into Language, pages 47–63. Rodopi, Atlanta.

Levinson, S. C. (1983). Pragmatics. Cambridge University Press.

Lucchesi, C. L. and Kowaltowski, T. (1993). Applications of finite automata representing large vocabularies.
Software-Practice and Experience, 23(1):15–30.

Magerman, D. M. and Marcus, M. P. (1991). Pearl: A probabilistic chart parser. In Proceedings of the Fourth
DARPA Speech and Natural Language Workshop, Pacific Grove, California. Defense Advanced Research
Projects Agency, Morgan Kaufmann.

Magerman, D. M. and Weir, C. (1992). Efficiency, robustness and accuracy in Picky chart parsing. In Proceed-
ings of the 30th Annual Meeting of the Association for Computational Linguistics, University of Delaware.
Association for Computational Linguistics.

Marcus, M., Hindle, D., and Fleck, M. (1983). D-theory: talking about talking about trees. In Proceedings
of the 21st Annual Meeting of the Association for Computational Linguistics, pages 129–136, Cambridge,
Massachusetts. Association for Computational Linguistics.

Marcus, M. P. (1980). A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, Mas-
sachusetts.

Marshall, I. (1983). Choice of grammatical word-class without global syntactic analysis: tagging words in the
LOB corpus. Computers in the Humanities, 17:139–150.

Maxwell, J. T., III and Kaplan, R. M. (1989). An overview of disjunctive constraint satisfaction. In Tomita,
M., editor, Proceedings of the First International Workshop on Parsing Technology, Pittsburgh, Pennsylvania.
Carnegie-Mellon University.

McCord, M. C. (1980). Slot grammars. American Journal of Computational Linguistics, 6(1):255–286.

McCord, M. C. (1989). Design of LMT: A Prolog-based machine translation system. Computational Linguistics,
15(1):33–52.
McKeown, K., Kukich, K., and Shaw, J. (1994). Practical issues in automatic documentation generation. In Pro-
ceedings of the Fourth Conference on Applied Natural Language Processing, pages 7–14, Stuttgart, Germany.
ACL, Morgan Kaufmann.

McRoy, S. and Hirst, G. (1990). Race-based parsing and syntactic disambiguation. Cognitive Science, 14:313–353.

Mel’čuk, I. A. (1988). Dependency Syntax: Theory and Practice. State University of New York Press, Albany,
New York.

Montague, R. (1973). The proper treatment of quantification in ordinary English. In Hintikka, J., editor, Approaches
to Natural Language, pages 221–242. Reidel.

Moortgat, M. (1988). Categorial Investigations: Logical and Linguistic Aspects of the Lambek Calculus. PhD
thesis, University of Amsterdam, The Netherlands.

Murveit, H., Butzberger, J., Digilakis, V., and Weintraub, M. (1993). Large-vocabulary dictation using SRI’s DE-
CIPHER speech recognition system: Progressive search techniques. In Proceedings of the 1993 International
Conference on Acoustics, Speech, and Signal Processing, volume 2, pages 319–322, Minneapolis, Minnesota.
Institute of Electrical and Electronic Engineers.

Nguyen, L., Schwartz, R., Kubala, F., and Placeway, P. (1993). Search algorithms for software-only real-time
recognition with very large vocabularies. In Proceedings of the 1993 ARPA Human Language Technology
Workshop, pages 91–95, Princeton, New Jersey. Advanced Research Projects Agency, Morgan Kaufmann.

Nilsson, N. J. (1980). Principles of Artificial Intelligence. Tioga Publishing Company, Palo Alto, California.

Nunberg, G. (1990). The linguistics of punctuation. Technical Report Lecture Notes 18, CSLI, Stanford, California.

Nunberg, G. and Zaenen, A. (1992). Systematic polysemy in lexicology and lexicography. In Proceedings of
Eurolex 92, Tampere, Finland.

Oostdijk, N. (1991). Corpus Linguistics and the Automatic Analysis of English. Rodopi, Amsterdam, Atlanta.

Ostler, N. and Atkins, B. T. S. (1992). Predictable meaning shifts: Some linguistic properties of lexical implication
rules.

Partee, B. (1986). Noun phrase interpretation and type shifting principles. In Groenendijk, J. et al., editors,
Studies in Discourse Representation Theory and the Theory of Generalised Quantifiers, pages 115–144. Foris,
Dordrecht.

Paul, D. B. (1992). An efficient A* stack decoder algorithm for continuous speech recognition with a stochastic
language model. In Proceedings of the 1992 International Conference on Acoustics, Speech, and Signal
Processing, volume 1, pages 25–28, San Francisco. Institute of Electrical and Electronic Engineers.

Pereira, F. C. N. (1985). A new characterization of attachment preferences. In Dowty, D. R., Karttunen, L., and
Zwicky, A. M., editors, Natural Language Parsing—Psychological, Computational and Theoretical perspec-
tives, pages 307–319. Cambridge University Press.

Pereira, F. C. N. and Schabes, Y. (1992). Inside-outside reestimation from partially bracketed corpora. In Proceed-
ings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 128–135, University
of Delaware. Association for Computational Linguistics.

Petheroudakis, J. (1991). MORPHOGEN automatic generator of morphological information for base form reduc-
tion. Technical report, Executive Communication Systems ECS, Provo, Utah.
Pollard, C. and Sag, I. (1994). Head-driven Phrase Structure Grammar. Center for the Study of Language and
Information (CSLI) Lecture Notes. Stanford University Press and University of Chicago Press.
Pollard, C. and Sag, I. A. (1987). An Information-Based Approach to Syntax and Semantics: Fundamentals.
Number 13 in Center for the Study of Language and Information (CSLI) Lecture Notes. Stanford University
Press and Chicago University Press.
Prawitz, D. (1965). Natural Deduction: A Proof-Theoretical Study. Almqvist and Wiksell, Uppsala, Sweden.
Pritchett, B. (1988). Garden path phenomena and the grammatical basis of language processing. Language,
64(3):539–576.
Pustejovsky, J. (1991). The generative lexicon. Computational Linguistics, 17(4).
Pustejovsky, J. (1994). Linguistic constraints on type coercion. In St. Dizier, P. and Viegas, E., editors, Computa-
tional Lexical Semantics. Cambridge University Press.
Pustejovsky, J. and Boguraev, B. (1993). Lexical knowledge representation and natural language processing.
Artificial Intelligence, 63:193–223.
Revuz, D. (1991). Dictionnaires et lexiques, méthodes et algorithmes. PhD thesis, Université Paris, Paris.
Ritchie, G. D., Russell, G. J., Black, A. W., and Pulman, S. G. (1992). Computational Morphology. MIT Press,
Cambridge, Massachusetts.
Roche, E. (1993). Dictionary compression experiments. Technical Report IGM 93-5, Université de Marne la
Vallée, Noisy le Grand, France.
Sampson, G. (1994). SUSANNE: a Domesday Book of English grammar. In Oostdijk, N. and de Haan, P., editors, Corpus-Based Linguistics: A Festschrift for Jan Aarts, pages 169–188. Rodopi, Amsterdam.
Sampson, G., Haigh, R., and Atwell, E. (1989). Natural language analysis by stochastic optimization: a progress
report on project APRIL. Journal of Experimental and Theoretical Artificial Intelligence, 1:271–287.
Sanfilippo, A. (1993). LKB encoding of lexical knowledge. In Briscoe, T., Copestake, A., and de Paiva, V., editors,
Default Inheritance within Unification-Based Approaches to the Lexicon. Cambridge University Press.
Sanfilippo, A. (1995). Lexical polymorphism and word disambiguation. In Working Notes of the AAAI Spring
Symposium on Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity and Generativity.
Stanford University.
Sanfilippo, A., Benkerimi, K., and Dwehus, D. (1994). Virtual polysemy. In Proceedings of the 15th International
Conference on Computational Linguistics, Kyoto, Japan.
Schabes, Y. (1990). Mathematical and Computational Aspects of Lexicalized Grammars. PhD thesis, University
of Pennsylvania, Philadelphia. Also technical report (MS-CIS-90-48, LINC LAB179) from the Department
of Computer Science.
Schabes, Y. (1992). Stochastic lexicalized tree-adjoining grammars. In Proceedings of the 14th International
Conference on Computational Linguistics, Nantes, France. ACL.
Schabes, Y., Roth, M., and Osborne, R. (1993). Parsing the Wall Street Journal with the inside-outside algo-
rithm. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational
Linguistics, Utrecht University, The Netherlands. European Chapter of the Association for Computational
Linguistics.
Schüller, G., Zierl, M., and Hausser, R. (1993). MAGIC. A tutorial in computational morphology. Technical report,
Friedrich-Alexander Universität, Erlangen, Germany.

Seneff, S. (1992). TINA: A natural language system for spoken language applications. Computational Linguistics,
18(1):61–86.

Sharman, R., Jelinek, F., and Mercer, R. L. (1990). Generating a grammar for statistical training. In Proceedings
of the Third DARPA Speech and Natural Language Workshop, pages 267–274, Hidden Valley, Pennsylvania.
Defense Advanced Research Projects Agency, Morgan Kaufmann.

Shieber, S. M. (1983). Sentence disambiguation by a shift-reduce parsing technique. In Proceedings of the 21st
Annual Meeting of the Association for Computational Linguistics, pages 113–118, Cambridge, Massachusetts.
Association for Computational Linguistics.

Shieber, S. M. (1992). Constraint-Based Grammar Formalisms. MIT Press, Cambridge, Massachusetts.

Shieber, S. M., Uszkoreit, H., Robinson, J., and Tyson, M. (1983). The Formalism and Implementation of PATR-II.
SRI International, Menlo Park, California.

Sleator, D. and Temperley, D. (1991). Parsing English with a link grammar. Technical report CMU-CS-91-196,
Department of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania.

Sproat, R. (1992). Morphology and Computation. MIT Press, Cambridge, Massachusetts.

Stabler, E. P., Jr. (1992). The Logical Approach to Syntax: Foundations, Specifications and Implementations
of Theories of Government and Binding. MIT Press, Cambridge, Massachusetts.

Stevenson, S. (1993). A competition-based explanation of syntactic attachment preferences and garden path phe-
nomena. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages
266–273, Ohio State University. Association for Computational Linguistics.

Stolcke, A. and Omohundro, S. (1993). Hidden Markov model induction by Bayesian model merging. In Hanson,
S. J., Cowan, J. D., and Giles, C. L., editors, Advances in Neural Information Processing Systems 5, pages
11–18. Morgan Kaufmann.

Teitelbaum, R. (1973). Context-free error analysis by evaluation of algebraic power series. In Proceedings of the
Fifth Annual ACM Symposium on Theory of Computing, pages 196–199, Austin, Texas.

Tomita, M. (1987). An efficient augmented context-free parsing algorithm. Computational Linguistics, 13(1):31–
46.

Touretzky, D. S., Horty, J. F., and Thomason, R. H. (1987). A clash of intuitions: the current state of nonmono-
tonic multiple inheritance systems. In Proceedings of the 10th International Joint Conference on Artificial
Intelligence, pages 476–482, Milan, Italy. Morgan Kaufmann.

Turner, R. (1988). A theory of properties. The Journal of Symbolic Logic, 54.

Turner, R. (1992). Properties, propositions and semantic theory. In Rosner, M. and Johnson, R., editors, Computa-
tional Linguistics and Formal Semantics. Cambridge University Press, Cambridge.

Tzoukermann, E. and Liberman, M. Y. (1990). A finite-state morphological processor for Spanish. In Karlgren,
H., editor, Proceedings of the 13th International Conference on Computational Linguistics, volume 3, pages
277–286, Helsinki. ACL.
Uszkoreit, H. (1986). Categorial unification grammars. In Proceedings of the 11th International Conference on
Computational Linguistics, Bonn. ACL.
van Eijck, J. and de Vries, F. J. (1992). A sound and complete calculus for update logic. In Dekker, P. and Stokhof,
M., editors, Proceedings of the Eighth Amsterdam Colloquium, pages 133–152, Amsterdam. ILLC.
Veltman, F. (1985). Logics for Conditionals. PhD thesis, University of Amsterdam, Amsterdam.
Voutilainen, A. (1994). Three Studies of Grammar-Based Surface Parsing of Unrestricted English Text. PhD thesis, Department of General Linguistics, University of Helsinki.
Ward, W. (1991a). Evaluation of the CMU ATIS system. In Proceedings of the Fourth DARPA Speech and Natural
Language Workshop, pages 101–105, Pacific Grove, California. Defense Advanced Research Projects Agency,
Morgan Kaufmann.
Ward, W. (1991b). Understanding spontaneous speech: the Phoenix system. In Proceedings of the 1991 Interna-
tional Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 365–367, Toronto. Institute
of Electrical and Electronic Engineers.
Younger, D. H. (1967). Recognition and parsing of context-free languages in time n³. Information and Control,
10(2):189–208.
Zeevat, H., Klein, E., and Calder, J. (1987). An introduction to unification categorial grammar. In Haddock, J. N.,
Klein, E., and Morrill, G., editors, Edinburgh Working Papers in Cognitive Science, volume 1: Categorial
Grammar, Unification Grammar, and Parsing, volume 1 of Working Papers in Cognitive Science. Centre for
Cognitive Science, University of Edinburgh.