Language and Equilibrium
About this ebook
In Language and Equilibrium, Prashant Parikh offers a new account of meaning for natural language. He argues that equilibrium, or balance among multiple interacting forces, is a key attribute of language and meaning and shows how to derive the meaning of an utterance from first principles by modeling it as a system of interdependent games.
His account results in a novel view of semantics and pragmatics and describes how both may be integrated with syntax. It considers many aspects of meaning—including literal meaning and implicature—and advances a detailed theory of definite descriptions as an application of the framework.
Language and Equilibrium is intended for a wide readership in the cognitive sciences, including philosophers, linguists, and artificial intelligence researchers as well as neuroscientists, psychologists, and economists interested in language and communication.
Prashant Parikh
Prashant Parikh does research in linguistics and philosophy and is involved in a natural language software business. He studied at MIT and Stanford University. This is his first book. He lives in New York City.
Book preview
Language and Equilibrium - Prashant Parikh
1
Introduction
I learned very early the difference between knowing the name of something and knowing something.
—Richard Feynman, The Making of a Scientist
In this book, I present a new account of meaning for natural language.
The account has three levels. Most concretely, it offers a tool to derive and compute the meanings of all possible utterances, at least in principle. More generally, it provides a method to produce variant theories of meaning and to address the many problems and puzzles that beset its study. Most abstractly, it advances a way to think about meaning and language through the lens of a broad and powerful idea and image.
At the first level the account is a theory, at the second a framework, and at the third a paradigm.
The paradigm embodies the leading idea and image of equilibrium—or balance among multiple interacting forces. The framework draws primarily upon game theory and situation theory. These are the best tools available at present to implement the idea of equilibrium in our context of language and meaning. The theory uses the constraints that arise from game theory and situation theory to capture the meanings of utterances. This renders their derivation a more or less straightforward computational task.
The resulting account is called equilibrium semantics.
1.1 Brief Background
Although the study of meaning goes back to classical times in multiple cultures,¹ there have been two broad traditions in the philosophy of language that have addressed meaning in the twentieth century. One is the ideal language tradition and the other is the ordinary language tradition.² Frege, Russell, Whitehead, and the early Wittgenstein were among the first contributors to the former, and the later Wittgenstein, Austin, Grice, and Strawson were among the first contributors to the latter. In the second half of the twentieth century, both traditions have borrowed a great deal from each other and have partly even merged, albeit uneasily.
From all the details of both traditions, it is possible to extract two central ideas, one from each tradition. The first tradition has contributed the idea of reference, of language’s being about the world (or about extralinguistic entities in particular); the second tradition has contributed the idea of use or communicative activity in a broad sense. The first tradition tried to understand reference or the aboutness of language; the second tradition tried to understand use or the communicative function of language. Both ideas, incidentally, are nicely captured in the happy phrase the flow of information, a composite idea that underlies and undergirds the account of meaning in this book.
Ideal language philosophy originated in the study of mathematics and the logic of mathematical statements, which led to its emphasis on the idea of reference. In the main, it did not see mathematics as a situated activity, partly on account of its abstract and formal nature. This led to its ignoring the dimension of use and to its focus on formal logic and especially on translating natural language utterances into logical languages.³ In fact, in the early days, the often inconvenient facts of use were treated as a kind of defect that would be removed by idealizing language. While this tradition has yielded many insights of continuing relevance including, crucially, its use of mathematical methods—expressed perhaps most of all in what is called formal semantics after Montague (1974b)—its attempt to extend its ideas from mathematical languages to natural languages has led to many difficulties as elaborated in section 1.4.⁴
The practitioners of ordinary language philosophy, reacting to this artificial and ersatz⁵ view of natural language, started from the vision that natural language was an inherently situated activity and that the use of language in communication should be at the heart of its study. Unfortunately, they also appeared to believe in the main that this attribute of natural language made formal methods relatively inapplicable. This tradition too has afforded many insights into the nature of linguistic meaning, although its lack of a mathematical approach has often made its arguments imprecise and vague.
The awkward state of affairs that exists at present can be best seen in there being the two distinct disciplines of semantics and pragmatics, each concerned with meaning, one primarily with its referential aspect and largely formal and conventional, the other primarily with its use-related or communicative aspect and largely informal and contextual.⁶ While both disciplines have drawn upon each other and have developed a great deal since their originating ideas were established, in the mainstream view, semantic meaning is still generally identified with a sentence’s conventionally given and formally represented truth-conditions, and pragmatic meaning is generally identified with some combination of contextually inferred and informally represented Gricean implicature and Austinian illocutionary force.⁷ These two types of meaning typically coexist and may coincide or diverge in ways best exemplified perhaps in Grice (1975, 1989) and Kripke (1977).
Kaplan (1969), building on Montague’s index semantics and Lewis’s (1972) contextual coordinates, introduced context into semantics proper via his two-dimensional notion of character, but this was intended for just a limited set of expressions, primarily tense and pronouns. Stalnaker (1970, 1978, 1998, 1999b), noting the ubiquity of context-sensitive expressions in language, generalized the concept of a context from an index to a context set—an entire body of information—for literal meaning, which is essentially the underlying idea of context that is prevalent today, including roughly the one used in this book, except that Stalnaker’s notion is couched in the framework of possible worlds. The question of exactly how context obtruded into the sphere of semantics, whether conventionally or inferentially, or through a combination of both, was largely unaddressed.
Barwise and Perry (1983) tried to reorganize these insights in a radical rather than moderate way by extending model-theoretic or referential methods to accommodate Austin’s focus on use by inventing the key idea of a situation, something Austin had used informally⁸ and something that captures Stalnaker’s idea of a body of information directly rather than circuitously via possible worlds.⁹ Among other advances, this led to a blurring of the boundaries between the two traditions and between semantics and pragmatics, although not in any very precise way. One problem was that situation semantics, as their account was called, involved an overly abstract and impoverished notion of use, as did the earlier efforts by Montague, Lewis, Kaplan, and Stalnaker¹⁰: they had no theory of use, just some notational stand-ins for broad aspects of use.¹¹ Nevertheless, I have found Barwise’s development of a theory of information—called situation theory—as well as a few aspects of their attempt to combine the two traditions to be of great value in developing my own approach to language and meaning. Part of the reason for this is the conviction that if things are done correctly there ought to be just one unified theory of meaning rather than two uneasily juxtaposed accounts. This Hegelian aufhebung of the two traditions and the two disciplines is what will be attempted in Language and Equilibrium.¹²
The erosion of the barrier separating semantics from pragmatics has been underway from other quarters as well. Recanati (2004b, 2004c) as well as the Relevance Theorists (see Sperber and Wilson 1986a, 1986b, 2004; and Carston 2004 among others) have also been chipping away at this distinction (most strikingly with examples of so-called free enrichment¹³) and offering a more imbricated picture of meaning. The view that many linguistic phenomena that were previously seen as belonging to semantics in fact belong to pragmatics has come to be called radical pragmatics though, of course, in my view, these are all part of a radical semantics that I have chosen to call equilibrium semantics.
Typically, following Charles Morris’s (1938) original trichotomy syntax–semantics–pragmatics, semantics is identified with what comes from the linguistic representation or with the conventional meaning of the representation and pragmatics is identified with the contributions of the ambient circumstances. Linguists especially use the term underspecification to describe this—semantics first underspecifies content that is later filled in by pragmatics. It is better to identify semantics with the problem of inferring the entire content,¹⁴ regardless of what contributes to this content, the linguistic representation or the context. Indeed, it has often been assumed in the past that conventions suffice for getting at content so there is an ambiguity in the original identification of semantics with convention since it was implicit that convention would yield content. This perhaps explains the origin of the term literal content. That is, it is not clear whether the commitment should be to the literal, purportedly conventional source of content or to content per se. The mainstream view¹⁵ of semantics has identified with the former, but I am urging the latter, especially since even literal content is ineluctably contextual.
A major advantage of the identification of semantics with the determination of content rather than with convention is that it allows a uniform view of all representations and symbols, whether they are linguistic, or belong to other modes such as the visual or gestural, or whether they are mental representations. The uniform view is that content of any kind is a function of two variables, the representation φ and its embedding ambience u. That is, the content can be written as C(φ, u), where φ stands for any representation, whether it is linguistic, visual, gestural, or mental. Indeed, φ can stand for any sign as well, including tree rings, footprints, or black clouds.
Secondly, this view of a single discipline for meaning prevents an artificial division into two subfields—semantics and pragmatics—of all the factors that should jointly contribute a unified theory of content. The former view takes the representations themselves as primary and more or less exclusive (the first variable in C(φ, u)) and as the starting point for scientific inquiry, the latter view takes the flow of information and communication and thought as primary (C itself ) and as the starting point for scientific inquiry. If the field of language and meaning is seen as falling within the larger domain of information flow with human behavior as one central component of it, then we ought to be more inclined to the second view. Language then becomes just a part of all that makes meaning possible. As Dretske (1981) writes, In the beginning there was information. The word came later.
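The two-variable view of content can be sketched in code. This is a toy illustration of the functional form C(φ, u), not Parikh’s formalism: the lexicon, the ambiences, and the particular contents assigned are all invented for the example.

```python
# Toy sketch of the view that content is a function C(phi, u) of a
# representation phi and its embedding ambience u. The same phi can be
# linguistic ("the bank") or a natural sign ("black clouds"); what varies
# the resulting content is the ambience u, here a simple dict.

def content(phi: str, u: dict) -> str:
    """Return the content of representation phi in ambience u."""
    if phi == "the bank":
        # One expression, two contents, disambiguated by the ambience.
        return "river bank" if u.get("topic") == "fishing" else "financial bank"
    if phi == "black clouds":
        # A nonlinguistic sign is handled by the very same function C.
        return "rain is likely"
    return phi

print(content("the bank", {"topic": "fishing"}))  # river bank
print(content("the bank", {"topic": "money"}))    # financial bank
```

The point of the sketch is only that representations of any kind occupy the first argument slot uniformly, while everything contextual is packed into the second.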
An exclusive focus on language takes hold of the wrong end of the stick and makes grammar primary, and meaning secondary and an afterthought. This leads to a parallel exclusion of context by focusing on semantic meaning (meaning derived almost entirely from the linguistic representation) as primary and pragmatic meaning (meaning arising from contextual factors) as secondary. Restoring the centrality of information and its flow enables a balanced view of the sources of meaning as such. And, as will be seen later in this chapter as well as throughout the book, the subject matter, including even syntax,¹⁶ is best viewed not as a linear stick but as a circle instead.
Third, Austin (1975, 1979b) offered a critique of the semantics–pragmatics distinction that appears to have been largely ignored. His dialectical argument started by making a persuasive case that the meaning of at least some utterances is not a matter of truth conditions alone. While assertions require truth conditions, performative utterances require felicity conditions. Semantics would then be concerned with truth conditions and pragmatics with felicity conditions. But this argument places us on a slippery slope. Austin argued that truth conditions are themselves just part of the felicity conditions for uttering a sentence. This suggests that semantics is really a part of pragmatics or, to put the thesis in its most radical form, that there is no principled distinction between semantics and pragmatics. If illocutionary force is taken as an aspect of the content of an utterance, then once again this leads to a unified view of semantics and pragmatics.
Of course, to be convincing, the viewpoint being advanced requires a homogeneous framework that actually enables a uniform derivation of the full content of an utterance. I show it is possible to create such a framework from first principles.¹⁷
An added advantage to offering a comprehensive and detailed mathematical framework for meaning is that many arguments offered today for or against a theory of particular phenomena remain inconclusive because their proponents often presuppose different views of semantics and pragmatics and also give nonuniform accounts for different classes of phenomena. For example, in the fascinating arguments over the last century for or against Russell’s (1905) theory of definite descriptions, different theorists often assume different and incompatible positions on the notion of meaning itself and then advance a very particular theory of definite descriptions that may be at odds with theories of other, even adjacent, phenomena.¹⁸ Such arguments may be seen as offering a perspective at two levels simultaneously, both an implicit argument for an idea of meaning and an explicit one for a particular theory. In contrast, this book provides a uniform approach to the derivation of the full contents of more or less all utterances, couched within an explicit and unified framework for meaning that synthesizes semantics and pragmatics. This does not obviate the need for particular theories but it makes these accounts reasonably uniform across phenomena.
Besides combining the central ideas of reference and use stemming from ideal language and ordinary language philosophy, I also depart from both traditions in fundamental ways. I see content as indeterminate in a number of specific ways, a facet of meaning that has not been seriously addressed before. Finally, of course, I introduce the idea of equilibrium to explain its many aspects, both traditional and new, in a manner that unifies them and provides a single idea and image of the system of language and meaning.
Thus, in a simplified and abstract way, it would be accurate to say that equilibrium semantics, the account of meaning presented here, combines four distinct ideas in a single unified framework: reference, use, indeterminacy, and equilibrium.
1.2 The Origins of Symbols
Observational cosmology suggests that the universe is roughly fourteen billion years old. By contrast, it appears that a little over sixty thousand years ago the human race broke into a symbolic consciousness, a new kind of consciousness that allowed it for the first time to use objects and events to represent other objects and events. The experience of death or the experience of play may have been among the first events that triggered this fateful break with our prehuman past. Until then, presumably, man¹⁹ was submerged in a kind of presymbolic awareness that allowed him just direct perception and direct actions and interactions in the world, not unlike the condition of other animals. See Terrence Deacon (1997) for one account of what is involved in man’s achievement of a symbolic consciousness.²⁰
It is difficult for us to imagine this presymbolic state because this new cognitive ability must have suddenly transformed the universe into a miraculously different place, one with myriad individuals, properties, and relations. Of course, these entities had been there all along, and had been perceived, reasoned about, and acted upon in direct and unmediated ways, but their richer existence as we know it today required the ability to name them, to form discrete mental and public symbols corresponding to them, to pluck them out of the relatively undifferentiated store of the world. Overnight, the world must have become a repository of information, a novel ontological space in place of the old continuum. It seems reasonable to surmise that it was this fresh and unfamiliar power to represent the world to ourselves and communicate it to others, especially through language, that made us human.²¹
In this book, I take as my starting point this informational space of individuals, properties, relations, and other entities, and study how we use language to talk about the world and do things in the world.
I now describe in broad terms my conception of meaning and language and how they fit into the larger scheme of things.
1.3 Information and Meaning
The breakthrough transformation described above allowed reality, that is, the world, to be construed as a space of entities. This space is what we call information. It contains individuals, properties, and relations; it also contains entities involving individuals having properties and standing in relations as well as collections of such states of affairs.
Intuitively and epistemologically, it is perhaps such collections that people first learn to discriminate and identify, chunks and slices of reality called situations. It is from situations, from these parts of the world that agents may observe or find themselves in, that they isolate individuals standing in relations.
An ancestor who had emerged into a symbolic awareness of the world may have noticed footprints in the snow or may have found himself fashioning a tool: both are situations the ancestor encountered and identified. Equally, modern man may read a report on a company or find himself in a restaurant: again, both circumstances are situations in our special sense of the term.
Beyond these rudimentary individuations, the ancestor may have realized that such footprints meant that a bear had passed by recently or that the hardness of the piece of stone he was using to fashion a tool meant that he could use it to chip away at various rocks. Similarly, the modern man in question may also have drawn the conclusion that the company report meant that its stock price was about to rise or that his being in a restaurant meant that he could order some food. Such observations point to another type of basic entity in our informational space: links between situations that allow one situation to carry information about another. This kind of link, called a constraint, is the essence of meaning. Put metaphorically, meaning is the flow of information.²²
Smoke means fire, black clouds mean rain, footprints in the snow mean that a bear has passed by there recently. The natural presymbolic order that exists prior to man is full of constraints that enable one part of the world to be systematically linked to another and to carry information about another.²³
Of course, if there is no one around to observe these natural regularities, they remain undiscovered and unexploited. But it was an essential part of man’s survival that he was able to register these constraints and choose his actions on their basis. Many such causal constraints were instinctively recognized by presymbolic man and even other animals and lower forms, but it was the ability to mentally represent and manipulate such systematic links and communicate them to others that enabled Homo sapiens to succeed so spectacularly.
Thus, the informational universe contains not just individuals standing in relations and situations but also constraints. My theory of this space is called situation theory, first invented by Barwise (1989b), who was influenced by Dretske’s (1981) account of information flow, and who in turn was inspired by the classic theory of information transmission developed by Shannon (1949). The version of situation theory presented in this book is very much my own, though it draws a great deal from Barwise and Perry (1983) and Barwise (1989b).
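The notion of a constraint linking situations can be given a minimal computational gloss. This is my own illustrative rendering, not situation theory proper: representing situations as sets of facts and constraints as a plain mapping is a deliberate simplification of Barwise’s apparatus.

```python
# Minimal sketch of situation-theoretic constraints: systematic links that
# let one situation carry information about another ("smoke means fire").
# Situations are modeled here, crudely, as sets of observed facts.

CONSTRAINTS = {
    "smoke": "fire",
    "black clouds": "rain",
    "bear footprints": "a bear passed by recently",
}

def carried_information(situation: set) -> set:
    """Collect the information the situation carries via known constraints."""
    return {CONSTRAINTS[fact] for fact in situation if fact in CONSTRAINTS}

print(carried_information({"smoke", "black clouds", "a tool"}))
```

Note that a constraint does no work until an agent registers it: the mapping above corresponds to the regularities an observer has already discovered, which is exactly the point made about presymbolic man.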
Another way to describe the causal links that are part of the natural order is to say that smoke is a sign of fire, black clouds a sign of rain, and footprints a sign of a bear’s presence. The term sign will be used to refer to constraints that do not involve human agency in a basic way. Once man broke into a symbolic consciousness, a new type of entity arose that I will call a symbol. Symbols are artificial constructs that involve human intention and agency in a basic way and that are at least partly social. Our modern man’s company report is a collection of symbols. Symbols are organized in systematic ways and such structures are called symbol systems. The system of traffic lights is one example of a symbol system, but the major symbol systems are those of language. Once again, see Terrence Deacon (1997) for one account of the distinction.
The object of a theory of meaning should be symbols and symbol systems and how they are used by agents to bring about a flow of information. For an entity to be a symbol, it must stand for, be about, refer to, or represent some other entity in the world and this relation must owe its existence ultimately to human intention and agency. Both signs and symbols involve aboutness, but the requirement of human intention and agency is what distinguishes symbols from signs. The relation of a symbol to its referent is the relation of reference or representation.²⁴ This relation can be expressed as a constraint between two situations and thus enables one situation to carry information about another, the hallmark of meaning. A red light means you have to stop and a green light means you can go. The universe of symbols and symbol systems is very wide because it includes not only verbal languages, but also images, gestures, and other symbol systems. Peirce (1867–1913/1991) and Saussure (1916/1972) were perhaps the first figures to build explicit semiotic theories of this generality, but this kind of attempt has since fragmented into the separate study of each symbol system with little underlying unity.²⁵
While the relation of reference or representation may be said to be the central aspect of meaning, it is also in some ways its most obvious attribute.²⁶ A less obvious aspect, indeed one that still eludes many, is the equally central relation of use. This is the aspect connected with the requirement of human intention and agency. It would perhaps not be an exaggeration to say that the subtlety of the relation of use is what makes semantics (or what many call pragmatics today) difficult. This is not to diminish the great strides that made referential semantics possible, but once the basic ideas of Tarskian model theory were in place, the rest has been a matter of working out details, however innovative they may be. A similar revolution has yet to occur in the domain of use, although part of the difficulty is that its subtlety makes many philosophers and linguists deny its importance and sometimes even its existence. The main reason for this skepticism is that there is as yet no systematic theory and mathematical apparatus to model use; this is a lacuna I hope to fill in this book in a compelling way.
In addition to the two central aspects of reference and use, a third equally fundamental attribute of natural language and many other symbol systems is the indeterminacy of meaning. Except for the relatively copious literature on vagueness, this property has also remained largely unexplored in its other dimensions. It is primarily this attribute of meaning that has allowed writers such as Derrida (1988) to make some amazingly outlandish claims about language and meaning. But if approached systematically and mathematically, this vital but amorphous attribute becomes easier to grasp and allows one to understand some rather commonsensical facts about language that have been ignored by many. It also makes clear why, along with the relation of use, this property of meaning is responsible for many of the difficulties faced by computational linguists.
Finally, the fourth entirely new feature of natural language that appears to have gone almost completely unnoticed is that of equilibrium. While Lewis (1969) was a precursor, Parikh (1987b, 2001) may have been the first to bring this aspect squarely into the realm of meaning. The generative idea in philosophy, linguistics, and artificial intelligence, the idea of starting with a stock of simple objects and combining them according to formal rules to derive more complex objects, was enormously fruitful, but perhaps too much has been attempted with this single idea. What semantics (and language more widely) needs in addition is the equally powerful idea of equilibrium. Essentially, equilibrium allows one to consider the interactions of objects at multiple levels, something that generativity precludes. Earlier, I considered Language and Interaction for the title of this book. There was a deliberate ambiguity in this title: interaction was meant to refer not only to the interactions between agents involved in the flow of information, but also to the interactions among various entities at multiple levels in the system of language and meaning.
Equilibrium semantics rests on the four fundamental ideas of reference, use, indeterminacy, and equilibrium because these features inhere in meaning; they are not imposed on it by the framework. The account I construct is like any other empirical theory in the sciences; in addition, a science of meaning is a social science.²⁷
Situated games of partial information play a central role in capturing all four of these ideas in a unified mathematical framework.
1.4 Language
The focus in this book is on language although the methods developed will also be applicable to other symbol systems. Language is possibly our most sophisticated symbol system and is certainly the most intricately structured. Meaning is also almost completely social: the relation between a word and its referent is in the main not fixed by a natural relation such as resemblance. The relation is, in a specific sense, arbitrary. "Table" could have meant "chair" and vice versa if English had evolved differently.²⁸
I now discuss the four key ideas introduced above in some more detail.
1.4.1 Reference
The concept of reference came to be better appreciated and more precisely understood in modern times. Since this happened via the work of logicians such as Frege, Russell, and Tarski working with formal languages, and since these methods were then extended to natural languages, it seems best to start with a long quote from one of the more elegant modern texts on formal semantics by L. T. F. Gamut (1991a, 1991b).
The semantics of standard logic can be seen as a referential theory of meaning (and thus as a correspondence theory of meaning). When defining a model for predicate logic, the first thing we do is choose some set of entities²⁹ as our domain. The set is independent of the expressions which collectively form a predicate-logical language. We then specify a relation between the predicate-logical language in question and the domain. By means of an interpretation function, the constant symbols are assigned individual domain elements, and the predicate symbols are assigned sets of domain elements (or sets of ordered sequences of n domain elements in the case of n-ary predicate letters) as their references. With this as a basis, we are in a position to define the reference relative to this model of all sentences in our language (that is, their truth-values), in the so-called truth definition.
The semantics of predicate logic is indifferent to the kinds of things we choose to put in the domains of our models. And whatever the domain may be, the theory of meaning is always a referential one: the meanings of the symbols are always their references.
One important characteristic of the semantic interpretation process, a characteristic which also happens to be shared by the nonstandard systems we shall meet up with, is that a strict parallelism is maintained between the syntactic constructions and their semantic interpretations. The truth definition mirrors the syntactic definition of the formulas of the language in question. There is a methodological consideration underlying this practice, one which can be traced back to Frege. This German logician and mathematician gave the first satisfactory analysis of sentences with relational predicates and multiple quantification in 1879, in his Begriffsschrift. Now the fundamental insight behind his solution to these age-old problems is that every sentence, no matter how complex, is the result of a systematic syntactic construction process which builds it up step by step, and in which every step can receive a semantic interpretation. This is the well-known principle of semantic compositionality.
This extract explains clearly how reference is conceptualized and set up for formal languages. The framework of formal semantics for natural language has largely taken over this conceptualization and added to it more complex entities to handle the more complex devices of natural language. Montague Grammar and its derivatives such as Discourse Representation Theory³⁰ represent in some sense the pinnacle of this approach to meaning.
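The referential setup the extract describes can be made concrete in a small sketch. The following Python fragment is entirely illustrative (the domain, names, and formula encoding are my own invention, not anything from the text): it builds a toy model for a tiny predicate-logical language, with a domain of entities, an interpretation function assigning individuals to constants and sets (or sets of pairs) to predicate letters, and a truth definition that mirrors the syntactic construction of formulas step by step.

```python
# A minimal sketch of a model for a tiny predicate-logical language.
# Domain, interpretation, and formula encoding are all hypothetical.

# Domain of entities, chosen independently of the language.
domain = {"a", "b", "c"}

# Interpretation function: constants -> individuals, predicates -> extensions.
interpretation = {
    "alice": "a",
    "bob": "b",
    "Happy": {"a", "c"},          # unary predicate: a set of individuals
    "Likes": {("a", "b")},        # binary predicate: a set of ordered pairs
}

def evaluate(formula):
    """Truth definition mirroring the syntactic construction of formulas.

    Formulas are nested tuples:
      ("atom", P, t1, ..., tn), ("not", f), ("and", f, g),
      ("exists", var, f), where terms are constants or variables.
    """
    def ref(term, assignment):
        # A variable's reference comes from the assignment,
        # a constant's from the interpretation function.
        return assignment.get(term, interpretation.get(term))

    def val(f, assignment):
        op = f[0]
        if op == "atom":
            pred, terms = f[1], f[2:]
            args = tuple(ref(t, assignment) for t in terms)
            ext = interpretation[pred]
            return (args[0] in ext) if len(args) == 1 else (args in ext)
        if op == "not":
            return not val(f[1], assignment)
        if op == "and":
            return val(f[1], assignment) and val(f[2], assignment)
        if op == "exists":
            var, body = f[1], f[2]
            return any(val(body, {**assignment, var: d}) for d in domain)
        raise ValueError(f"unknown operator {op!r}")

    return val(formula, {})

print(evaluate(("atom", "Happy", "alice")))                      # True
print(evaluate(("exists", "x", ("atom", "Likes", "x", "bob"))))  # True
```

Each clause of `val` interprets one syntactic construction, so the semantic interpretation runs in strict parallel with the syntax, as the principle of compositionality requires.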
But some aspects of the underlying conceptualization that formal semantics shares with the semantics of predicate logic are problematic for natural language for the following reasons:
• Restriction of the domain to individuals
• Holism of truth values
• Reference as assignment
• Compositionality
• Extensionality and intensionality
As we have already seen in section 1.3, and will see in greater detail in the next chapter, there is a plurality of entities in the informational space. It is this significantly richer space that will be seen to be required for the semantics of natural language because natural language is much richer than (first-order) formal languages. Individuals and sets of (sequences of) individuals may be adequate for predicate logic but they are far too impoverished to handle the complexities of natural language. Some of these inadequacies have been addressed in formal semantics by bringing in properties and relations as entities in their own right (as opposed to modeling them as sets), but what is needed is a thoroughgoing revision of the ontology. This is provided by situation theory, both in its original form and especially in the version presented here.
The second assumption of the holism of truth values is a particular instance of the previous assumption. Instead of supplying appropriate structured entities to play the role of the contents of utterances, formal semantics and philosophy have continued to deal with truth values as their (referential) meanings.
Barwise and Perry (1975, 1983) have criticized this holism in very thorough ways and there is little point in repeating this criticism here. Unfortunately, situation theory and situation semantics have fallen out of favor³¹ and so their solution to the problems posed by this holism has been largely ignored.
Third, reference has been treated simply as assignment, a move that is perfectly legitimate for formal languages, but leaves much to be desired for natural languages. This is one reason for the split in the study of meaning: formal semanticists have contented themselves with simply addressing the problem of representing meanings and have left the messy facts of use that lie at the core of reference to pragmatics and the philosophy of language. Unfortunately, by and large, these latter disciplines have simply replaced assignment by convention, that is, (literal) contents are generally taken as conventionally given. What is required is both an adequate ontology and an account of reference in its full complexity that meshes with these representations. I attempt to do this via a combination of game theory and situation theory that allows one to actually construct a formal definition of reference.³²
The fourth shared assumption is Frege’s venerable principle of compositionality.³³ Again, this is perfectly valid for formal languages because it is always assumed that such languages are perfectly precise and unambiguous. But natural languages are notoriously ambiguous and vague, and then Fregean compositionality breaks down. This is because the meaning of one word (and phrase) in an utterance of a sentence can affect and be affected by the meanings of the other words (and phrases) in the sentence.³⁴ When no ambiguity or vagueness is present, these interdependencies and interactions of meaning are otiose. But when they are present, as they almost always are in natural language, the simple generative idea of Fregean compositionality falls short. As I said earlier, generativity prohibits the interactions of various objects. Equilibrium semantics offers a generalization of the Fregean principle of compositionality, called the fixed point principle or fixed point compositionality, that is able to accommodate these pervasive attributes of natural language; when they are absent, the fixed point principle reduces to the special case of Fregean compositionality.
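The contrast between Fregean and fixed-point compositionality can be gestured at with a toy sketch. This is my own illustrative caricature, not Parikh's formal construction: the candidate senses and compatibility scores below are invented. Each ambiguous word chooses a meaning given the meanings currently assigned to the other words, and the process iterates until the assignment no longer changes, i.e., until a fixed point is reached. When every word is unambiguous, the loop stabilizes immediately, recovering the ordinary compositional case.

```python
# Illustrative fixed-point disambiguation: word meanings are chosen
# interdependently and iterated to a stable assignment.
# All senses and scores are hypothetical.

# Candidate senses for an utterance of "the bank called".
senses = {
    "bank": ["financial-institution", "riverside"],
    "called": ["telephoned", "named"],
}

# Pairwise compatibility scores between senses (unordered pairs).
compat = {
    ("financial-institution", "telephoned"): 0.9,
    ("financial-institution", "named"): 0.3,
    ("riverside", "telephoned"): 0.1,
    ("riverside", "named"): 0.2,
}

def fixed_point_meaning(senses, compat, max_rounds=10):
    # Start from each word's first listed sense.
    assignment = {w: opts[0] for w, opts in senses.items()}
    for _ in range(max_rounds):
        changed = False
        for word, options in senses.items():
            others = [s for w, s in assignment.items() if w != word]
            def score(option):
                # Look the pair up in either order.
                return sum(compat.get((option, o),
                                      compat.get((o, option), 0.0))
                           for o in others)
            best = max(options, key=score)
            if best != assignment[word]:
                assignment[word] = best
                changed = True
        if not changed:          # fixed point reached
            return assignment
    return assignment

print(fixed_point_meaning(senses, compat))
# {'bank': 'financial-institution', 'called': 'telephoned'}
```

The point of the sketch is only structural: the meaning of "bank" is not computed in isolation and then combined, but settled jointly with the meaning of "called".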
Finally, Frege’s (1980) classic paper appeared to make it clear that reference could not be direct, that there had to be some intervening layer such as that of his sense.
These issues have been hotly contested after Kripke’s (1972/1980) dramatic work on direct reference for a subclass of words. I will side with Frege and offer a picture of word meaning that is a generalization and refinement of the traditional observation that the ordinary word "meaning" is ambiguous and needs to be split into two parts that have variously been called connotation and denotation, sense and reference, and intension and extension. I call these two tiers conventional meaning and referential meaning.³⁵ In this book, I do not offer any argument for this position, as it would mean a long detour and require addressing a large literature including, most prominently, Kripke (1972/1980) himself. I hope to do this on another occasion but some inkling of my views may be gleaned from chapter 6 on noun phrases where I briefly address proper names and where I offer detailed counterarguments to Kripke’s (1977) critique of Donnellan (1966) based on my theory of definite descriptions. In any case, I hope the theory will be immune to the kinds of criticisms Kripke (1972/1980) and Putnam (1975) (among many others) have made. A somewhat surprising consequence of my account is that while every word in a natural language has at least one conventional meaning, phrases and sentences have no conventional meanings; on the other hand, when used in an utterance, words, phrases, and sentences all receive referential meanings or contents. Each word in an utterance of a sentence such as "The waiter is rude" has at least one conventional meaning, but the various phrases and the entire sentence have no conventional meaning. On the other hand, all words, phrases, and sentences acquire contents when uttered. This again is a failure of compositionality at the level of sense or intension or conventional meaning.
Despite these departures from the many assumptions shared by the standard semantics of formal and natural languages, equilibrium semantics does share its foundational assumption that language requires a referential theory of meaning. Indeed, my account requires that every word, phrase, and sentence in an utterance have a reference, even apparently syncategorematic words such as THE and OR.
1.4.2 Use
There are at least two reasons why the concept of use is subtle and has resisted analysis. One is that it involves a number of other concepts that are often poorly understood. The other is that it is difficult to develop a mathematical apparatus that can accommodate all these concepts and that does justice to interactions between agents. With formal languages, it is possible to abstract from use and pretend we are dealing just with inert symbols rather than with their use. With natural languages, this becomes impossible as we will see below. Those who have tried to ignore the relation of use as central to semantics have had to resort to many awkward contortions such as positing all kinds of entities at multiple layers of sentential representation, whose connection with empirical reality becomes increasingly tenuous and ad hoc, reminiscent of the epicycles of pre-Copernican astronomy.³⁶
Minimally, the following concepts are intimately related to, if not included in, the concept of use:
• Belief, desire, intention, and agency
• Sentence and utterance
• The situatedness of language
• The efficiency of language
• Ambiguity
• Communication and information flow
It is astonishing that mainstream linguistics in the twenty-first century has no theoretically grounded conception of agency.³⁷ Whatever our innate endowment may be, language (its dimension of meaning in particular) is surely a social institution and as such supervenes on use and human agency. Ever since Grice, the philosophy of language has had recourse to the concept of rational agency, but it has remained informal. The only framework today that has an apparatus with a mathematically formulated and philosophically sound conception of agency is that of game and decision theory. This conception has undergone exciting changes since the work of Tversky and Kahneman (Kahneman, Slovic, and Tversky 1982) and is still evolving. In this sense, my earlier work (Parikh 1987b, 1992, 2000, 2001; Fehling and Parikh 1991) and others’ subsequent contributions (e.g., the volume edited by Benz et al. 2006 and the volume edited by Pietarinen 2007) to the now burgeoning field of game-theoretic semantics and pragmatics have been the only approaches that involve the concept of agency in a full-blooded way. Belief, desire, and intention are integral to action and it is the singular virtue of decision and game theory that they offer a way to integrate these component factors that result in action. Any systematic approach to questions of use must draw upon a theory of action and especially interaction that includes its constituents of belief, desire, and intention. Indeed, we will see in the paragraphs that follow that all the other elements of use listed above can be addressed adequately only because we have recourse to game theory and situation theory.
Perhaps equally astonishing is the insistence of many philosophers and linguists on dealing with sentences rather than utterances despite the contributions of ordinary language philosophy. A sign or symbol seldom carries information by itself. It is only when we take account of the circumstances in which the sign or symbol occurs that we can infer a referential meaning.³⁸ Likewise, a sentence by itself does not have a meaning. It is only an utterance, an act involving the production of a sentence (or other symbol) in some situation, that carries information. While words and sentences appear to mean things in the abstract, only a moment’s reflection is required to see that a name such as HARVEY or common noun such as BANK can only carry their conventional meanings³⁹ when abstracted from their circumstances of use. Without an embedding utterance, a name cannot possibly refer (which one of countless Harveys is being referred to?) nor can a noun, verb, preposition, or article.⁴⁰ Identifying utterances with sentences may be permissible only for formal languages. The circumstances of utterance simply cannot be ignored in the case of natural language. Put differently, a sentence and its component parts cannot ever connect with reality, cannot ever be about anything without being ensconced in an utterance. I suspect the reason for the reluctance to deal squarely with utterances is that, as hinted above, there simply appears to be no mathematical or otherwise solid apparatus to deal with the messiness and unruliness of contexts. Sentences are well-behaved, rule-governed objects and so we feel more comfortable with them and have ways of manipulating them. I hope that equilibrium semantics, with its use of game theory and situation theory, will dispel these doubts.
I have already referred to the situatedness of language when talking about contexts and circumstances, just alternative words for what will technically be called the utterance situation. Indeed, agents are always in situations of one sort or another, and not just our utterances, but all our actions as well as their constituents—beliefs, desires, and intentions—are situated. In the case of utterances, as with all actions, this situatedness implies that meaning is a result of both the sentence uttered and the utterance situation. In fact, as already mentioned earlier, we will write C(φ, u) = Cu(φ) for the content of a sentence φ uttered in situation u. The context makes many contributions to the meaning of an utterance in general unlike the case of formal languages where one can effectively write Cu(φ) ≡ C(φ) for all situations u. This situatedness occurs more widely than just with natural language. When someone waves his hand to extend a greeting, there is a situation in which he performs the action and thereby conveys a greeting. In a different situation, the same action could have meant a goodbye.
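The dependence of content on the utterance situation can be pictured with a trivial sketch (the signs and situations below are hypothetical): the situated content function takes both a sign and a situation, as with the hand-wave example, whereas a formal-language content function simply ignores its situation argument.

```python
# A toy stand-in for the situated content function C(phi, u) = C_u(phi):
# the same sign carries different contents in different situations.
# Signs, situations, and contents are all hypothetical.

def content(sign, situation):
    """Content depends on BOTH the sign and the utterance situation u."""
    table = {
        ("wave", "arrival"): "greeting",
        ("wave", "departure"): "goodbye",
    }
    return table[(sign, situation)]

# For a formal language, by contrast, content is situation-independent:
# C_u(phi) = C(phi) for every situation u.
def formal_content(phi, situation):
    return {"P(a)": True}[phi]   # the situation argument plays no role

print(content("wave", "arrival"))    # greeting
print(content("wave", "departure"))  # goodbye
```

The same sign ("wave") receives two different contents because the function is evaluated at two different situations; collapsing the second argument is exactly what the formal-language idealization does.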
Intimately related to this situatedness is the efficiency of language. The fact that the same sentence can be used to convey different contents in