PHIL 145 Lecture Notes
Lecture 1
In this first lecture I want to say something about what courses in critical thinking are
for. Along the way, I'll say a bit about how I've chosen to organize this particular course and why.
Most philosophy departments offer a course called Critical Thinking, or Informal Logic,
or sometimes even Argumentation Theory. While some people insist that each of these is
in fact a separate subject (and others insist, for instance, that there is no such thing as
informal logic), what you get in any of these courses has more to do with who is teaching
the course than it does with what the course is called.
What these courses have in common is a shared goal, which can be roughly stated as follows:
The goal of this course is to provide students with a set of intellectual tools and the
ability to use them to understand and evaluate arguments.
• Notice that the goal is to provide a set of tools and the ability to use them. This is an
important difference between critical thinking and most other subjects of study. In a
course in, say, history, the goal is primarily to teach you some history, and then
(perhaps) secondarily to teach you the techniques of doing historical research.
• There is some theoretical material covered in any critical thinking course. It's just
that this material is of secondary importance, in the sense that it is introduced
because it will help students develop the skills that are the primary goal of the
course.
• Obviously, these skills are only important if arguments are important. We'll have
more to say about why and when arguments are important below.
• It's a mistake to think that the only purpose of having such skills is that it will make
you better able to criticize others.
(1) First, the ability to analyse someone's arguments often helps you understand
difficult or complicated material by allowing you to see why it is organized in
the way it is. It is then easier to break it into reasonably sized chunks which
are themselves less complicated and so easier to digest.
(2) We very often want to offer arguments of our own, and one of the best ways
to make sure we offer good arguments is to turn these same critical thinking
skills on our own attempts, since this will tell us where our arguments need to
be improved (or even abandoned!).
Of course, a course can't teach you to think, critically or otherwise. It can offer you a bit
of theory and some techniques, but like any attempt to teach people skills it is really up to
the student to put the techniques into practice.
Learning the techniques that can be covered in a first course in critical thinking is
not nearly as difficult as, say, learning to play the piano, where a lot of work is required
of anyone and only a select few have the talent to do it really well. It's more like learning
to drive a car, where the great majority of people can learn to do it passably well when
they are paying attention to what they are doing. Just as only a few have the natural
ability, the dedication, the opportunity, and the desire to do anything like being a
professional race car driver, there are a few people around (who have the inclination,
have had the opportunities, and, perhaps, who have certain talents) who are truly
outstanding critical thinkers, some of them being the best professional philosophers. But
then, you don't need to be Jacques Villeneuve to drive to the corner and buy groceries,
and you don't need to be a philosophical genius to think critically in those cases that have
an impact on most people's lives.
While the goal of a critical thinking course is to give students an opportunity to learn how
to think critically, there are a number of different strategies for doing so. Here are some
of the choices that must be made by teachers of courses in critical thinking.
2. Teachers need to decide what sorts of examples they will have their students
practice on. Letters to the editor provide a great stock of short, simple arguments,
and the arguments are very often bad ones, so students can easily practice their
skills identifying the arguments and where they go wrong.
But…
The goal is to allow students to evaluate all arguments, not just short and simple
ones. So some teachers prefer to have students work on more complicated
arguments, or those constructed by those who are likely to produce good arguments
because it is what they do for a living (academics, research scientists or columnists,
for instance).
4. Finally, some teachers find that it helps keep students interested if you get them
working on problems that they probably care about deeply.
But…
Other teachers think this is counterproductive in a critical thinking class. People
typically find it very hard to see problems with any argument in support of a view
they believe in strongly, and find it hard to see past the fact that someone else's
view is clearly wrong to ask the question of why it is wrong. So these teachers will
typically have students work on topics that are less likely to be ones they feel
strongly about, with the hope that once they have developed the skills needed to
evaluate these cases they will be more likely to be able to apply them in the cases
that they care deeply about.
In this course we will try to take a middle position on each of these choices.
• We will introduce a few common fallacies, but our focus shall be on characterizing
good arguments in general and the fallacies will be considered only as common
ways for arguments to fail to live up to these standards.
• We will not go out of our way to find controversial examples in hopes of stirring
students up, but we won't shy away from them. It is a very worthwhile skill to be
able to distance yourself from the subject at hand when the time comes to evaluate
how good a particular argument is, so that you give a fair hearing to those you
disagree with, and so that you can reject bad arguments for views you hold. You
might as well get some practice at that skill along with the others.
• We will not focus on a particular sort of subject matter, because I hope the
techniques we will cover can benefit people in any field that requires a clear head.
However, we will spend a couple of lectures focussing on some particular areas
where people in modern industrialized societies are particularly vulnerable to being
misled, and so to exploitation.
• The text we will use contains many examples which began as letters to the editor in
daily newspapers. We will try to consider other, less simple examples in the lectures
and notes.
There are many reasons which might be advanced for taking a course in critical thinking,
from the very lofty to the purely pragmatic. Here are just three.
2. Not just philosophy, but nearly any academic discipline is built around the
production and evaluation of arguments. This is fairly obvious in disciplines like
history, where many students find it a great liberation to get to university and find
out how much of professional history is consumed with debates between well-
informed historians about the same events that had been presented in high school
history as nothing but a dry collection of isolated facts and dates. But it is true even
in the most empirical of sciences, where mere presentation of experimental results
very often gives way to debate about what the data means. So learning the
techniques of good argumentation will be a benefit whatever academic discipline
one pursues.
3. One sometimes sees critical thinking texts with titles like Logical Self-Defence. If
you can guard against being persuaded when you should not, rationally speaking, be
persuaded, then you are much less likely to be exploited by those who would like to
sell you defective products or ideas. So critical thinking skills also have a rather
direct personal payoff.
I have chosen this particular text for a few reasons. It is better written than most such
texts, has a wealth of examples and exercises, and for the most part takes an approach to
teaching critical thinking that I find congenial.
One of the lessons of any course of critical thinking should be, though, that it is
appropriate to apply your critical skills in pretty much any context in which someone is
trying to persuade you of something. The textbook is, in a sense, an attempt to persuade
you that certain approaches are the right ones for evaluating arguments. I have yet to
come across a perfect textbook in any subject, and ours is no exception. I won't hesitate
to point out some cases where I think the author has made mistakes. Indeed, we are not
even going to look at the chapters on formal logic in the text, and will use notes I have
written instead, because I think those chapters of the text are inadequate.
The same point obviously also applies to my notes and lectures. They are not
something you should simply try to absorb, but are something you can consider
rationally. Indeed, thinking about what is said, rather than just trying to commit it to
memory, is something that I suspect will make it much easier to really understand the
material.
Lecture 2
It should be very clear from what was said in Lecture One that we are going to spend a
lot of time talking about arguments, in particular about how to tell good ones from bad
ones. As we'll see, the concepts of premises and conclusions play a crucial part in saying
what an argument is. It's probably a good idea, then, in the interest of knowing what we're
talking about, to spend some time on the questions:
What is an argument?
How can I tell what the premises and conclusion of a particular argument are?
How can I tell in any particular situation whether a person is trying to offer an argument
or is really doing something else (such as trying to explain something)?
What is an Argument?
Like many words in English, there are many different kinds of things that get called
arguments. While there is some relationship between these different uses of the word
"argument", we are interested in one of these kinds in particular, and so we should begin
by distinguishing the kind of thing we are interested in from its cousins.
1. We often talk about people getting into arguments with one another. While they
might employ arguments in the sense we care about when they do so, this is not the
sense of the word of interest to us. "Getting into arguments" means having a fight,
which is not the subject we are interested in.
2. Sometimes people say that someone who has offered particularly poor reasons for
something "didn't give an argument at all." We will regard this as a sort of
hyperbolic statement which is not intended to be literally true. That is, we will take
it to mean that the person offered only a very bad argument (so bad, in fact, that for
all its persuasive power it might as well not have been offered at all). Taken
literally, this sense of "argument" means something like "minimally reasonable
argument." We will use the word "argument" in a way which allows for the
possibility of very bad arguments.
Let's try to say what an argument, in the sense we care about, is, rather than just what it is
not.
An argument is a set of declarative sentences, one of which is identified as the
conclusion, while the rest are premises offered in support of it.

Remarks:
• This definition is more abstract than some definitions of "argument" that you will
see, including the one in the textbook. Some authors define "argument" in such a
way that arguments only exist when there is a person who puts them forward in an
attempt to persuade someone of something. I think there are important
philosophical reasons for not saying this (for instance, it makes it hard to explain
what it means to say things like "Well, she offered a bad argument, but there is a
very good argument for the same conclusion that she could have offered"), but we
won't digress to go into those. The upshot of using the more abstract definition is
that (when being very careful) the author of the textbook should (for instance) ask
whether a particular passage or utterance is an argument, while we should ask
whether the author or speaker intends to give an argument when she or he writes or
says it.
• As will become clear in the next lecture, this definition papers over a lot of the
interesting structure of arguments. When you consider that people often devote
rather long books to presenting a single argument, this should be obvious. In
particular, it is a deliberate simplification to treat all the premises as being on the
same level. Very often some of these will themselves be conclusions of smaller
arguments (we will call them subconclusions of subarguments) that form part of the
argument as a whole.
• Note that an argument is a set of declarative sentences. Often when people express
arguments they will use non-declarative sentences, for instance when they use
rhetorical questions. When they do so, though, the content of what they are
conveying by doing so can also be expressed directly (even if less forcefully, or less
humourously) by a declarative sentence.
While in some courses, for instance courses in formal logic, the topic is arguments in
general, in a critical thinking course we are mostly concerned with arguments that are
presented by particular people at particular times. The first thing we will consider is the
question of how to tell just which argument someone is presenting, assuming that what
she is trying to do is present some argument or other.
Whenever we use language to communicate, the reader or hearer of a bit of
language needs to figure out certain things about the intentions of the writer or speaker if
he's going to understand what is meant by the passage or utterance in question. The
writer/speaker has various tools at her disposal to provide evidence to the reader/hearer of
just what her intentions are.
For our purposes, it is worthwhile to look at some of the evidence that can help us
figure out what argument an author is trying to express by writing a particular passage.
We will postpone to the next section the question of how we can tell if an author intends
to give an argument at all, or if she's doing something different.
• Many words and phrases in English are used to indicate that what follows is
intended as a premise (i.e., as one of the reasons being offered in support of the
conclusion). Familiar examples are "since", "because", and "for the reason that".
Others, such as "therefore", "thus", "hence", "whence", and "so" are used to indicate
a conclusion. These are therefore called indicator words.
• Indicator words are usually a very good clue about what is going on in a passage,
but you need to be careful with them for several reasons. (A) These words are
sometimes used for other purposes. (B) Recall the simplifying assumption we made
earlier about having only one conclusion in an argument. When there are
subarguments, you will often find conclusion indicator words in front of something
other than the ultimate conclusion of the argument the author is presenting. (C)
Very often an author will decide that no indicator word is necessary to indicate,
e.g., the conclusion of her argument if other features of the context in which she is
presenting her argument should make it clear. For instance, if an argument takes
place in an essay entitled "A Reply to Professor X", the author might well take it to
be obvious that the conclusion is that Professor X is wrong, and so won't preface
that claim with "therefore".
• Especially when trying to figure out what the conclusion of a particular argument is,
you very often must rely on what you know (or can find out) about the context in
which the author wrote.
• It is typically easier to figure out what the author's conclusion is first, then to try to
sort out what her premises are.
• Once you have identified the conclusion of an argument, that information can often
help you figure out what the author intends her premises to be. To make use of this
information, you should assume that the author is intelligent and is trying to give a
strong argument, and ask yourself which way of reading the text makes the
author seem as rational as possible. This is called the "principle of charity", and is
simply a case of giving the author the benefit of the doubt.
The principle of charity more often than not helps you uncover an author's real
intentions (consider how often you have trouble expressing yourself as clearly as
you would like). However, even in cases where you later find out the author really
did intend the less reasonable of two possible readings of what they said, you won't
have wasted your time by considering the stronger argument. You will, after all,
have considered a stronger argument than the one the person had to offer, and so
will have had something more substantial to think about.
Is the Author Giving an Argument?
This question really needs to be answered before the one asked in the preceding section
even arises. In most contexts it is an easier question to answer (for instance, it's usually
pretty easy to tell when someone you are talking to is trying to convince you of
something), but there are some cases where it can become difficult to tell whether an
author, in particular, is giving an argument or is doing something else.
When asking whether a passage is being used to give an argument or not, you are asking
a question about the intentions of the author of that passage. Notoriously, judging
people's intentions is something people are better at doing than at theorizing about. The
best short advice about how to answer the question is to tell you to ask yourself: Is the
author trying to establish some claim, stated or unstated, as true? If the answer is yes,
then the author is presenting an argument.
Very often, as we have said, the author will be doing something else besides trying to
establish that some claim is true. In that case, the author is doing something else besides
giving an argument.
Again, you are probably already pretty good at judging people's intentions in everyday
life. There are some things to look for when considering written passages to help you
figure out what the author's intentions are in that sort of case.
• In an explanation, there is some event or fact that is taken for granted, and the
author tries to explain how it came to be or why it is so. The event or fact to be
explained is not called into question. When an argument is presented, the
conclusion is usually regarded as contentious or open to doubt, or for some other
reason as needing defense.
• Whether something can be taken for granted or not often depends on the context. If
I'm writing for an audience of professional philosophers, I can assume that they
know things (e.g., "Descartes was a rationalist") that it would not be appropriate to
assume as known if I was writing for an audience of first-year philosophy students.
Make use of this sort of information, if you have it, when trying to figure out the
motives of an author in any particular case.
The text contains several exercises which will allow you to practice answering the
question of whether a passage contains an argument or not. It is probably worthwhile to
work through several of these until you feel confident answering the question of whether
an author intends to give an argument or not.
Lecture 3
Standardizing Arguments
We noted in Lecture 2 that the official definition of an argument (as a set of declarative
sentences, one of which is identified as a conclusion while the rest are premises), while
correct, does not take into account the fact that there can be many more relationships
between the sentences which make up an argument than just the fact that the premises are
taken to support the conclusion.
We have already mentioned that many arguments contain subarguments, and so that
some of the sentences in an argument play a sort of dual role — they are premises of the
larger argument, but also conclusions of a smaller subset of the sentences which can itself
be considered an argument in its own right. As we will see in Lecture 4, there are also
different ways premises can offer support to a conclusion. Before we can properly
evaluate how good an argument is, we need to identify this extra structure.
The goal of the next three lectures is to introduce some tools for representing this
structure in a way that makes it easy to keep track of it, and so to make the job of
evaluating arguments easier. In particular, we will look at the following questions:
1. How can we identify the conclusion of an argument?
2. How can we identify its premises?
3. How do we put an argument into standardized form?
4. How do we tell whether premises provide linked or convergent support?
5. How do we diagram an argument?
Lectures 3, 4 and 5 go together. Lecture 3 will introduce the theoretical material around
the first three of these questions, while lecture 4 will do the same for the last two. Lecture
5 will be devoted to working through several examples to put this theory into practice.
In everyday communication people often speak non-literally, relying on irony,
hyperbole, rhetorical questions and the like, and trusting their audience to grasp what
they really mean. Similar phenomena of course arise in the case of arguments, and this is one of the
things we must keep in mind when trying to answer the question of just which argument
someone is expressing. We must remember that the correct answer to that question is not
always to be discovered by taking everything she says literally, that what are,
grammatically speaking, questions are not always intended to ask you things, and so on.
When standardizing arguments we will take the approach of translating the
premises and conclusion into declarative sentences which can be interpreted literally.
This often robs the speaker's original formulation of its humour or literary flair, but
humour and literary flair have little to do with evaluating how persuasive an argument
is, rationally speaking, so this is not a significant loss for the limited purposes of critical
thinking.
Identifying Conclusions
• It often turns out that one cannot identify the premises of an argument until after the
conclusion has been identified.
In short, the task of identifying a conclusion arises when you have already determined
that someone is trying to convince you (or someone else) of the truth of some claim. The
conclusion is the answer to the question: What claim is it that this person is trying to
convince me (or whoever) of?
There are several things to take into account when identifying conclusions.
1. Indicator words such as "therefore", "thus" and "hence" often explicitly mark a
conclusion, so look for them first.
2. You need to consider what you know about the context in which the author was
writing or speaking, because this can help you figure out what they intended to
show.
3. You need to try to figure out how definite the claim is that she is trying to establish:
does she contend that something is certainly true, or that it is likely, or perhaps only
that it cannot be ruled out?
4. You need to try to figure out the scope of the conclusion: is she trying to prove that
something is true in all cases, most cases, or only a few cases, or merely that it
applies in at least one case?
Identifying Premises
Typically the question of identifying the premises of an argument arises when you've
already determined both that an argument is being expressed and what the main
conclusion of the argument is. The question you are trying to answer when you try to
identify the premises is: What claims is the arguer making which are supposed to support
her contention that the conclusion is true?
There are several things to take into account when identifying premises.
1. When people intend to express an argument, that's very seldom their only intention.
They might also want to entertain their audience, or to show their erudition (perhaps
by using words like 'erudition'?), to express not merely the view that something is
true but also their feeling that anyone who disagrees is morally repugnant, or any of
an unlimited number of other things. So, when identifying the argument they are
presenting you need to disentangle the premises from
a) claims that are not seriously intended as part of the argument
b) asides, background information, editorial remarks, etc.
c) rhetorical flourishes and stylistic indicators such as "It has long been agreed
that," or "I humbly submit."
d) expressions of emotion or judgment that are not part of the content of the
argument ("Hume defends the repugnant view that ...").
2. As with conclusions, you can't expect all the premises to be expressed by using
declarative sentences. However, the premises are claims, so for our purposes you
should translate what is expressed into an equivalent declarative sentence.
3. Again, indicator words can help identify premises. However, even more than in the
case of conclusions, you need to use caution. People are typically less careful to use
indicator words for premises than they are for conclusions.
5. Often only some of the premises of the argument are expressed. We will devote a
section to the problem of identifying unstated premises below, so will merely
mention the problem here.
Standardizing Arguments
The procedure for standardizing arguments is described in the text, so we will mostly
emphasize some of the key points in these notes.
Once you have identified the premises and conclusion of the argument someone has
presented, it is very useful to lay the argument out in a way that is easy to understand
before proceeding to the next task, namely evaluating the argument. This is what the
process of putting the argument in standard form is for.
It is very important to ensure that each premise and the conclusion of the standardized
version of an argument are presented as a self-contained and complete declarative
sentence. The reason this is important has to do with the reason we are standardizing the
argument in the first place.
We standardize to make the task of evaluating the argument easier. Part of the task
of evaluating the argument is to consider each premise in isolation and ask whether it is
something we should accept, i.e. whether we should believe that it is true. Making sure
the premises are expressed as declarative sentences makes it much easier to see exactly
what is being claimed, and so to evaluate how reasonable the claim is. Making sure that
(as much as is possible) the reference of pronouns is clear from the statement of the
individual premise (and so, in particular, doesn't require that the reader of your
standardization refer to some earlier premise to see what that reference is) makes it easier
to consider the premise individually.
We number each premise for similar reasons ... it is much easier when discussing
the argument later on to be able to refer to premise 3, for instance, rather than having to
rewrite the premise in whole or in part.
You must ensure for the conclusion and each individual premise, considered in isolation,
that it does not itself express an argument. Failing to do this will mean that we cannot
properly identify the flaw in an argument which goes wrong, if it turns out that there is a
problem with the subargument which is hidden in the individual premise or conclusion of
your standardized version of the argument.
But don't confuse this injunction against including whole arguments in premises or
conclusions with an injunction against conditional statements. When someone says, for
instance, "If dogs were birds, they'd be able to fly", they are not making an argument.
Instead, they are making a claim which might be true or false. Conditional statements are
very often important parts of arguments, and trying to represent arguments which use
them without including an entire conditional statement as one of the premises makes
arguments look a lot worse than they really are.
For example, if someone argues that "If dogs were birds, they'd be able to fly. So
they're obviously not birds, since they can't fly." we ought to standardize this as:
1. If dogs were birds, dogs would be able to fly.
2. Dogs cannot fly.
Therefore,
3. Dogs are not birds.
and not by treating the conditional as if it contained a subargument, like this:
1. Dogs are birds.
Therefore,
2. Dogs are able to fly.
3. Dogs cannot fly.
Therefore,
4. Dogs are not birds.
This second version makes no sense at all. Worse, it also says that the arguer claims that
dogs are birds (i.e. that premise 1 is true), but the arguer does nothing of the sort!
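(A remark looking ahead to the formal logic material later in the course: the
dogs-and-birds argument is an instance of the valid form traditionally called modus
tollens. Schematically, in the style of our standardizations:
1. If P, then Q.
2. It is not the case that Q.
Therefore,
3. It is not the case that P.
Whenever an argument has this form, it is impossible for its premises to be true while
its conclusion is false.)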
Missing Premises
The phenomenon of "missing premises" arises for a number of reasons. Sometimes when
people are expressing an argument they regard some of the claims as "common
knowledge", the kind of thing their reader or listener can be safely assumed to already
know, and so as not needing to be stated. Sometimes a passage you are analyzing is
pulled from a larger context, and the author of a book from which the argument is taken
might well have defended some general principle earlier in the book and so not have felt
the need to state it again. And sometimes an arguer might assume that the part of the
argument that is made explicit should suffice to make clear that she is also committed to
some other claim and so that the other claim does not need to be stated.
There might be "logical gaps" in an argument for any of these reasons. However, even
though we're using the "principle of charity" when deciding what argument someone is
offering in any particular case, we still need to recognize the fact that sometimes people
do express bad arguments. Part of the task of deciding whether or not to standardize an
argument by adding a missing premise is deciding whether the arguer intends to express
an argument of which that missing premise is a part, or whether they intend to present an
argument which happens to have a logical gap in it.
Remark: There's no real conflict between our advocating the principle of charity when
deciding what argument is being offered and our suggestion that we sometimes should
represent people as presenting arguments with logical gaps in them. It is quite common
that any premise we might add to the argument which would fill this logical gap is a
particularly implausible one, and so it is more charitable to represent the arguer as not
having spotted the gap in her argument than as accepting such an implausible claim.
We will follow the policy of standardizing arguments with "missing premise" filled in
(indicating that the premise in question is a missing premise by underlining the number,
e.g. '1') only when we have some good reason to think that the arguer accepts that
premise or is committed to it. If a premise really is common knowledge, it might not need
to be stated at all.
However, in the case of arguments presented by historical figures, what they took to
be common knowledge might be something we do not accept at all, or something that is
not nowadays common knowledge. Moreover, when someone is directing her argument
to a particular audience, what she counts as common knowledge might be different than
if she'd directed the argument to someone else.
(Presumably someone presenting an argument to doctors about which medication
ought to be prescribed in a particular situation, for instance, doesn't need to pause to fill
in the audience about how the circulation of the blood would transmit an intravenously
delivered drug throughout the body, even if that were a crucial part of the argument in
support of using it. On the other hand, she might well need to make this explicit if she
were trying to convince a group of non-specialists.) In such cases it is probably legitimate
to add the premises to the standardized version, since it eliminates a gap by adding a
premise the arguer would no doubt accept, and yet it is not the sort of logical overkill one
gets from adding premises like "the sun rises in the east."
Our goal in standardizing is to put the arguer's argument into a form which is readily
evaluated. This requires that we consider the argument that person actually presents,
needless to say. But there are always several ways we could fill a logical gap in an
argument by adding an extra premise. If we've decided that we should add an extra
premise, but there is no evidence which tells us which of these possibilities the author
intends, the principle of charity tells us that we ought to add the most reasonable premise
which fills the gap. This is the most sensible premise to add because it ensures that when
we evaluate the argument any flaw we find in it is one we can attribute to the arguer, and
is not a problem we have introduced into the argument ourselves when we added the
premise.
Lecture 4
Diagraming Arguments
The technique of putting arguments into standardized form which was described in
Lecture 3 is very useful as a tool for understanding arguments as a prelude to evaluating
them. For many purposes it is sufficient. However, especially in the case of longer and
more complicated arguments, the limitations of standardized form can be a problem. In
this lecture we introduce some tools that will allow us to represent more of the structure
of arguments in a way that many people find very useful.
In arguments with two or more premises, we need to consider whether two or more of the
premises are intended to link to provide support for the conclusion (or for a
subconclusion) that they would not support alone, or whether they are independent of one
another, and so provide convergent support. We will also briefly consider the role of
counterconsiderations.
With a little practice the difference between linked and convergent premises will become
clearer, though it is not a trivial matter to distinguish them at first. Let's try to see why.
Premises provide linked support if taken together they support the conclusion, but taken
individually they do not. The premises work together to support the conclusion, so to
speak.
On the other hand, they provide convergent support if each individually provides support
for a conclusion. In such cases the arguer is presumably offering more than one premise
because they think that there is some cumulative effect, which is to say that they think
having more than one premise, each of which provides some support for the conclusion,
means that you end up with more support for the conclusion than you would get by taking
only one of the premises.
Taking these two definitions into account, there are a couple of different reasons that it is
difficult to disentangle linked and convergent support cases when you are first starting
out. Consider the argument:
1. My dog sleeps on my bed.
2. My dog has fleas.
Therefore,
3. There are probably fleas in my bed.
It is not hard to see that the first premise by itself provides no support for the conclusion.
This is a claim which could well be true (because, for instance, my dog sleeps on my bed
all day long) without providing any support for the conclusion. The dog's sleeping habits
only affect the likelihood of fleas in the bed if she in fact has fleas, and premise 1 makes
no commitment one way or another on the question of whether that is true.
What about premise 2 taken alone? "My dog has fleas, so there are probably fleas in my
bed" may well strike you as a perfectly reasonable mini-argument.
The reason the mini-argument of the last paragraph seems reasonable is that it is easy to
imagine that the speaker and his dog live a lifestyle that makes something like premise 1
very likely to be true. Consider, for instance, that a farm dog that lives full-time in a barn
can be flea-ridden without that having any implications at all for the beds of any people
who don't live in the barn with him! Rather than concluding that the abbreviated
argument shows that 2 provides support for 3 independent of 1, we ought to conclude that
it would be reasonable to standardize the abbreviated argument by adding something like
premise 1 as an unstated premise.
It can also be tempting to regard convergent premises as though they were linked. This
can be problematic because it is tempting to confuse the fact that the various convergent
premises in an argument are (often) supposed to provide more support than any one
of them would alone with the quite different idea that they depend on one another.
For example, the argument "Debbie's very bright. She got an A+ in her physics
course, and another one in history" has two premises, namely "Debbie got an A+ in her
physics course" and "Debbie got an A+ in her history course". Taken together these
certainly offer more support for the conclusion that Debbie is very bright than either
would alone. However, each by itself offers support for the conclusion. An A+ in physics
would be some evidence of brightness even if she hadn't taken a course in history at all,
and vice-versa.
Counterconsiderations
There is a pattern of argument which is fairly common but which can be represented in
standardized form only with some unnaturalness. Sometimes a person is in a position
where the "burden of proof" rests with the view they are rejecting. In such a case she
doesn't need to offer any positive reason to support her view. It is enough to show that the
reasons put forward by her opponent are not sufficient to establish the opponent's case.
Consider the following rather outlandish example. Suppose I tried to convince you that
there were miniature, invisible dancing elephants on top of the desk in front of you. It is
hardly reasonable in such a situation for me to insist that you prove that there are no such
elephants. Again, it wouldn't be reasonable for me to say, "Look, I can't prove to you that
they're there, but you can't prove that they're not, either. So I guess it's a toss-up as to
who's right, and we'd better just withhold judgement." If you can establish that I have no
good reason for my claim, that is enough to establish your case.
A familiar situation of this sort is in criminal cases in a court of law, where an accused is
considered innocent until proved guilty. In this case the burden of proof is on the
prosecution, and all the defense needs to do to win the case is rebut the arguments put
forward to establish the guilt of the defendant. Another (rather less clear cut) case is
where one person is defending the "conventional wisdom" on some subject matter. The
burden of proof is often taken to rest with the person who is challenging the conventional
wisdom. However, it is often a contentious matter whether there is a conventional
wisdom on some matter, and on which side it resides, so it's not that uncommon to see
debates about just where the burden of proof resides in a particular case.
An argument in such a case is likely to go like this: "Person X offers reasons A, B and C
for his claim D. But A is a bad reason because ..., while B is bad because ... and C is bad
because… Therefore D is not something we should believe." As we have mentioned, this
is not so easy to represent in standardized form, since there are no claims made which
directly support the conclusion that D is not true.
(Probably the easiest way to standardize would be to add an unstated premise to the
effect that "In the absence of some good reason to believe D, it is something we should
not believe".)
It is useful to be able to represent this sort of argument more directly. Thus we will want
an ability to represent claims which weigh against the conclusion being argued for. We
call such claims counterconsiderations. Of course, counterconsiderations will usually be
accompanied by the author's own reasons to reject those claims, which we might be
tempted to call "countercounterconsiderations". Instead of tripping over this mouthful,
though, we will call these reasons rebuttals.
Combinations
As we will see, counterconsiderations arise in many more circumstances than just those
cases where the burden of proof means that someone only needs to rebut someone else's
arguments. In constructing longer arguments it is good practice to consider not only your
own reasons for holding to your own view, but also to carefully consider the reasons
others offer for their contrary views. So the tools we will develop for dealing with
counterconsiderations are applicable in a wide variety of cases.
Indeed, you can expect to see all sorts of combinations of premises and conclusions in the
arguments we encounter. There will often be, for instance, distinct groups of premises
which provide linked support for various subconclusions, but each of these
subconclusions provides convergent support for the main conclusion. Or, of course, it can
happen the other way around. And in the case of longer arguments, as one might find in a
book where someone defends some controversial claim, the overall argument will
typically include several distinct strands of argument in support of the main conclusion,
together with rebuttals of several counterconsiderations that have been offered in support
of contrary claims by earlier authors. Being able to represent these various relationships
between premises can help us keep better track of what is going on in such long and
complex arguments, and so to better understand them.
Diagraming
The techniques we will use for diagraming arguments are simple. We begin, as when
standardizing arguments, by stating all the relevant claims (premises, conclusion,
counterconsiderations and rebuttals) as complete and self-contained declarative
sentences, and assigning a number to each of them. However, rather than merely listing
them in a linear order, simply using a few indicator words to indicate what the
subarguments are as we do in standardization, we draw a chart which will tell us more
about how these claims are supposed to fit together.
1. For each sentence in our list, a circle with its number should appear somewhere in
the diagram. It's a good idea to put the main conclusion at the bottom, since this
usually results in easier to read diagrams.
2. The basic idea for diagraming is that we will draw an arrow between premises and
what they are intended to directly support. That is, we will continue to represent the
subargument structure of arguments. Instead of using indicator words, we will have
arrows from premises to the subconclusions they support. A subconclusion will
differ from a main conclusion in a diagram because arrows will both start and end at
a subconclusion, while they will only end at the main conclusion.
3. If two or more premises provide linked support, we will indicate this by putting
+ between the circles for these premises, then draw only a single arrow from the set
to the conclusion or subconclusion it supports, rather than one from each premise.
4. Counterconsiderations will be represented by drawing a wavy line from the number
of the counterconsideration to the conclusion it counts against.
5. Finally: rebuttals will be represented by drawing a straight line with a circle at its
head (rather than an arrowhead) from the number of the rebuttal to the wavy line
between the counterconsideration it rebuts and the conclusion.
All of these instructions are the same as those in the textbook except number 5, which is
not discussed in the text at all. We will conclude this lecture with a few examples which
should make these directions clearer.
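Since these notes are plain text, here is a rough sketch of the two basic patterns, with
bare numbers standing in for the circled numbers of a hand-drawn diagram. In the
linked case the premises share a single arrow; in the convergent case each premise
has its own arrow:

    Linked:          Convergent:

    1 + 2            1       2
      |               \     /
      v                v   v
      3                  3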
Simple linked support: Consider our earlier argument, which used the three claims:
1. My dog sleeps on my bed.
2. My dog has fleas.
3. There are probably fleas in my bed.
Since 1 and 2 support 3 only when taken together, the diagram joins the circles for 1 and
2 with a + and draws a single arrow from the pair to the circle for 3.
We have also seen an example of a simple argument with premises which provide
convergent support: the argument about Debbie's grades. There, a separate arrow runs
from the circle for each premise to the circle for the conclusion.
Listen Copper, you've got the wrong guy! You've got one witness who picked me
out of a lineup, but she's blind as a bat! You think my past record makes me a likely
suspect, but purse-snatching ain't like armed robbery at all! And besides, my Ma
will tell you I was practicin' with the church choir at the time of the robbery.
Stated as self-contained declarative sentences, the relevant claims are roughly these:
1. A witness picked the speaker out of a lineup.
2. The witness has extremely poor eyesight.
3. The speaker's past record makes him a likely suspect for the armed robbery.
4. The speaker's past record is for purse-snatching.
5. Purse-snatching is nothing like armed robbery.
6. The speaker's mother will say that he was practicing with the church choir at the
time of the robbery.
7. The police have the wrong person.
Here, the squiggly lines indicate that 1 and 3 are counterconsiderations, while the circles
at the end of the lines between 2 and 4+5 and the squiggly lines indicate that 2 by itself is
intended as a rebuttal of 1, while 4 and 5 link together to rebut 3. Finally, 6 is a premise
in support of the conclusion rather than being either a counterconsideration or part of a
rebuttal.
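In rough plain-text form (with ~ standing for the wavy lines and o for the circle at the
head of a rebuttal line), the diagram just described looks something like this:

         2                 4 + 5
         |                   |
         o                   o
    1 ~~~~~~~~~> 7 <~~~~~~~~~ 3
                 ^
                 |
                 6

Note that the o-headed lines from 2 and from 4 + 5 attach to the wavy lines themselves
rather than to the circles for 1 and 3.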
Lecture 5
This lecture will be devoted to working through several examples. In the past three
lectures we have considered: how one determines whether someone is presenting an
argument or not in a given passage; how to standardize an argument that someone has
presented; how to diagram the argument and thereby reveal more about its internal
structure. For each example, we will do all three parts, if the example is indeed one where
someone is presenting an argument. If it is not such a case, we will answer the question
"What the heck is she doing in that passage, then?"
(Note that in some cases doing all this is more than is asked for by the textbook
instructions for the set of exercises the example is taken from.)
Example 1: "Some people who came to Canada from India for economic reasons are
now returning to India, again for economic reasons."
The author is not presenting an argument here. This is merely a statement of an (alleged)
fact.
(Of course, this sentence could serve as a premise of an argument. But we have no
evidence that it is being used in this way here. And there is certainly no argument given
in this small passage.)
Example 2: "The woman who took the lead role in the film was quietly beautiful, as
everyone noticed right away. She was tall, with red hair and green eyes. The flowing
costumes she wore were elegant and creatively styled."
It seems unlikely that the author intends to be presenting an argument here. She states
that the lead actor was beautiful, and gives some description of her and her costumes.
However, there is no indication in this passage that she is using these facts as evidence to
convince us that the actor was indeed beautiful. It is more likely that this is merely filling
in more detail to allow us to paint a better mental picture of the actor.
Example 3: "Dreams occur while the dreamer is asleep. When a person is asleep, he
cannot control his mind. Therefore, no one can control his own dreams."
The author is obviously presenting an argument here. One tip-off is the use of the
indicator word "therefore", which tells us that the last sentence gives us the conclusion.
The other sentences, which are clearly premises, are both declarative in form, and neither
has a pronoun with an unspecified referent. (The "he" in the second refers back to "a
person". The usage is anaphoric, so we could reword the sentence to say "..., that person
cannot control", but since the referent is specified within that particular sentence this is
not necessary.)
So we can standardize the argument as follows:
1. Dreams occur while the dreamer is asleep.
2. A person who is asleep cannot control his mind.
Therefore,
3. No one can control his own dreams.
As for diagraming the argument, the question we must answer is whether the premises
supply linked or convergent support. The test is, you will recall, that we need to ask
whether the premises provide support for the conclusion in isolation, or if they are only
relevant to the conclusion if considered together. And in this case the premises must be
taken jointly. If it weren't the case that dreams happened when the dreamer is sleeping,
the fact that sleeping people can't control their minds wouldn't support the conclusion at
all. And if the second premise were not true, then the fact that dreamers are asleep
wouldn't mean that they couldn't control their dreams. So we have linked support.
Conveniently, the standardized argument gives us numbers for the premises and
conclusion, so we can diagram the argument by joining the circles for 1 and 2 with a +
and drawing a single arrow from the pair to the circle for 3.
So far these examples have been rather easy. Let's consider some that might be a bit more
difficult.
Example 4: "An ant is crawling on a patch of sand. As it crawls, it traces a line in the
sand. By pure chance, the line that it traces curves and recrosses itself in such a way that
it ends up looking like a recognizable caricature of Winston Churchill. Has the ant traced
a picture of Winston Churchill? A picture that depicts Churchill? Most people would say,
on a little reflection, that it has not." (From Hilary Putnam, "Brains in a Vat.")
Putnam is not presenting an argument in this passage. He raises an interesting question about
which people might well disagree (i.e., the question of whether or not the ant has traced a
picture in this case), but he does not offer any reasons for thinking one or the other answer is
correct. He has stated the opinion that (he says) most people would come to. But he does not
do this to try to convince his reader that this is the correct response. He merely reports it.
Example 5: A writer asserts that Britain is no longer a Christian country and makes no
pretense of being one. The nation's churches, he observes, are closing all over; on
Sundays the big congregations are to be found in the shopping centres, not the churches;
and God, he adds, gets no more attention than He does in Cuba. Britain, he concludes, is
now a pagan country.
While he doesn't use any indicator words, the author of this passage does seem to be
presenting an argument. If you consider the question "Why would the author list this
particular collection of claims in this way?" I think you'll agree that he is offering reasons
in the second through fourth sentences for the claim he makes in the first sentence. So his
conclusion is that "Britain is no longer a Christian country and makes no pretense of
being one."
What about a standardization? I think each of the next three sentences is a separate
premise, and there do not seem to me to be any subarguments. What about the last
sentence? I'm not sure exactly what the author is saying. While "pagan" is sometimes
taken to mean some or other sort of nature worship, that's presumably not what the author
has in mind. He presumably means by "pagan" not subscribing to any major religion, or
more specifically not subscribing to the Christian religion. So I would be inclined to
simply regard the final statement as a restatement of the conclusion, which was stated
more straightforwardly in the first sentence.
When we standardize, we should do a few things to make sure the declarative
statements we put for each premise are self-contained, i.e., are such that they can be
understood without having to read the other sentences. So "the nation" in the second
sentence should be replaced by "Britain". We might reword the third sentence to take
away the editorial content involved in calling shoppers "congregations". So, I would
standardize the argument as follows:
1. Churches all over Britain are closing.
2. On Sundays, people in Britain go to shopping centres rather than to church.
3. God gets no more attention in Britain than in Cuba.
Therefore,
4. Britain is no longer a Christian country and makes no pretense of being one.
For diagraming, the question again is whether these premises provide linked or
convergent support. In this case, each is relevant to the conclusion, even in the absence of
the others. Churches closing, for instance, is evidence both that Britain might once have
been a Christian country and that it is not one now. Even if the Churches were still open,
the fact that people don't go to church indicates a lessening in their commitment to
Christianity, and the fact that they go shopping shows that they are no longer pretending
to be Christian, either. Finally, the fact that God doesn't get much attention (presumably
the author thinks God doesn't get much attention in Cuba) by itself gives us some reason
to accept the conclusion.
Example 6: "The application of the physical and biological sciences alone will not solve
our problems because the solutions lie in another field. Better contraceptives will control
population only if people use them. New weapons may offset new defenses and vice
versa, but a nuclear holocaust can be prevented only if the conditions under which
nations make war can be changed. New methods of agriculture and medicine will not
help if they are not practiced, and housing is a matter not only of buildings and cities but
of how people live. Overcrowding can be corrected only by inducing people not to
crowd, and the environment will continue to deteriorate until polluting practices are
abandoned. In short, we need to make vast changes in human behavior, and we cannot
make them with the help of nothing more than physics or biology, no matter how hard we
try."(B. F. Skinner, Beyond Freedom and Dignity.)
This example is clearly a good deal trickier! An important first step is to read the passage
carefully and try to be clear about what the author is saying. While the details need to be
worked out, I think that after you read the passage carefully you'll agree that Skinner's
argument is intended to show that there are limitations on what we can expect from
physics and biology as tools for solving social and political problems.
Now, let's try to see how his various claims fit together. First, at the start we have a
sentence which includes the indicator word "because". So that suggests that the claim
"The solutions to our problems lie in a field other than physics and biology" is intended
to support the claim "The application of physics and biology alone will not solve our
problems."
I think a good strategic move at this point is to look at the last sentence. In it
Skinner returns to precisely that topic. In that sentence he says two things (often "and" is
used to join together two claims, and that is what it does in this sentence), namely "We
need to make vast changes in human behavior" and "We cannot make vast changes in
human behavior using only physics and biology". It's pretty clear from the passage that
Skinner means by the first claim that we must make these changes if we want to solve our
problems. So, how do these claims fit together? Let's write them out as full statements:
a) If we want to solve our problems, we must make vast changes in human behavior.
b) We cannot make vast changes in human behavior with the help of nothing more
than physics and biology.
It obviously follows from these two claims that if we want to solve our problems, we
cannot do so merely using physics and biology.
This introduces a tricky matter of interpretation. The claim which I have just said
"obviously follows" looks very much like the conclusion of the little subargument we
have already identified. But that raises the question of what the relationship is between
these two sentences and the claim that the solution to our problems lies in another field.
(A) The way the first sentence of the paragraph is presented suggests that Skinner takes what
comes after "because" to be the fundamental reason for his conclusion. One would
expect, then, that the rest of the paragraph would one way or another try to establish that
this "fundamental reason" is something we should agree is the case. So, perhaps these
two sentences a) and b) are intended to provide convergent support for that
subconclusion. The problem with this interpretation is that it introduces an obvious hole
into Skinner's reasoning: it assumes that there is a solution to our problems somewhere
outside physics and biology, and that is not supported at all by those two premises.
(B) We could regard the claim that the solution lies in a different field as a separate
premise which is supposed to offer additional, convergent support to the conclusion that
physics and biology alone cannot solve our problems. The problem with this
interpretation is that it makes that claim into a bit of a sore thumb — Why did Skinner
include it in this paragraph? As we will see, all the rest of the paragraph presents an
argument which runs through a) and b) to establish that physics and biology can't solve
our problems and no mention is made of solutions existing in another field. So while this
approach doesn't mean there is a big hole in the middle of Skinner's argument (it leaves
room for the possibility that he elsewhere provides evidence in support of this
independent premise, for instance), it has a different problem. Why would Skinner, in the
middle of writing a book (a different situation from talking "off the cuff", where losing
the thread of what one is saying would be much more likely), introduce
a clause like that into a passage which contains a rather involved argument that is nicely
tied together but for this one "sore thumb" clause? And why would he put the "because"
in there to flag it as the key reason?
What this judgment call amounts to is a case where it is a tricky matter to apply the
"principle of charity". Is it more charitable to Skinner to attribute to him an argument
with a considerable hole in it, thus suggesting that he didn't see that hole, or is it more
charitable to attribute to him an argument with a separate "sore thumb" premise, thus
sparing him the charge of having an argument with a large hole in it, but instead
indicating that even when he sat down to write a carefully constructed argument he
couldn't avoid introducing a clause which really doesn't belong in the paragraph? I think
either decision could be reasonably supported, and so think that either of two analyses of
this argument would be a perfectly reasonable answer. I will therefore present two
versions of the answer below, one for each decision on this question. (Of course, if you
encounter a case like this on an assignment, you only need to give one answer!)
The rest of the passage is quite a lot more straightforward: it is really a series of
claims, each of which says that for some problem a solution will happen only if
changes can be made in people's behavior. Each of these claims is obviously some reason
to think that if we want to solve some of our problems, we will need to change people's
behavior, and so each on its own would provide support for that claim. So they all will
serve as premises in a convergent subargument with that as its conclusion.
Other things to notice about the rest of the passage include the following: the clause
"New weapons may offset new defenses and vice versa" doesn't really add anything to
the argument. The part of the claim that matters is that "a nuclear holocaust can be
prevented only if ...", and so the "new weapons" clause can be deleted from the
standardized form. Similarly, the claim that housing is not merely a matter of buildings
and cities can be deleted as a mere rhetorical flourish. Several of the sentences can be
stated in a slightly modified way which keeps their meaning intact. I would do this
simply to show that all these premises have the same basic form, but it's not really
essential.
The two standardizations, one for each interpretation, are very similar. The "hence" in the first indicates that premises 6 and 7 are
premises of a subargument in support of 8. The absence of "hence" in the second shows
that we are taking 6, 7 and 8 to all be premises directly supporting 9.
The diagram for the second interpretation will be the same on the top half (i.e., the circles
for 1, 2, 3, 4 and 5 will all have separate arrows directed at the circle for subconclusion 6).
But on this interpretation the bottom part of the diagram would look like this:
PHIL 145 LECTURE NOTES 30
These examples should have illustrated several points about what is involved in figuring
out the structure of arguments. A few of the key points are these.
1. Sometimes there is more than one reasonable analysis of just which argument is
being presented in a given case. On assignments your job is to give a reasonable
analysis. It won't necessarily be exactly the same as the analysis I come up with.
However, it is important to recognize that the fact that there can be more than one
reasonable account does not mean that there is no distinction between reasonable
and unreasonable ones.
2. The reason there can be more than one analysis of some passages is that the analysis
of arguments requires judgment on the part of the person doing the analysis. This
just isn't a job that can be reduced to merely mechanically going through a set
procedure. This has two important implications. The obvious one is that you need to
make sure that you are thinking while you are engaged in this task. It's easy to slide
into trying to do things by rote, but it won't work for this job. The second is that you
need to do something to indicate why you have made the judgments you have made.
Thus if you add a missing premise, you need to explain why you thought it
justifiable to do so, for instance. The instructions for the exercises in the text (and
on the assignment) prompt you to supply explanations where they are needed. Be
sure to supply them.
3. Not every passage contains an argument, but also not every passage which does
contain an expression of an argument includes obvious clues like indicator words.
PHIL 145 LECTURE NOTES 31
Lecture 6
Evaluating Arguments
When do people typically express an argument? Usually it is when they are trying to
(rationally) convince somebody else of the truth of some claim. Rather than simply
asking this somebody else to take their word for it, they give a set of reasons, or premises.
By simply performing the act of giving an argument with the intention to convince
someone that a conclusion is true, an arguer is committing herself to at least two claims:
1. All the premises are true (or, at least, it is reasonable to believe they are true)
2. The conclusion is adequately supported by the premises.
Condition 2 is deliberately worded a bit vaguely because there are different things that
might be meant by "supported". We sometimes say that the conclusion follows from the
premises, which suggests that the truth of the premises would guarantee that the conclusion
must be true, too. However, sometimes we mean only that the truth of the premises would
be enough to make the conclusion likely to be true. We will investigate different sorts of
support premises can give to conclusions quite extensively in this course.
Suppose, now, that it is you that the argument is directed to, and the conclusion is
something you are inclined to think is not true. If you don't simply decide that the
argument is a good one and so change your mind, you seem to have three options:
The purpose of this lecture is to give an introduction to 1 and 2 (and to 1R and 2R at the
same time). While we adopt most of the material from chapter 4 of the text, there are
several respects in which the discussion in the text is so oversimplified that it is
inaccurate. The places where this discussion disagrees with what is in the text will be
clearly indicated.
PHIL 145 LECTURE NOTES 32
Kinds of Support
We begin by considering different amounts of support that a group of premises can give
to a conclusion.
An argument is valid if it is impossible for these two conditions to be true at once: (1) All
the premises are true; (2) The conclusion is false.
There are other ways to say the same thing, for instance: The truth of all the
premises is sufficient to guarantee the truth of the conclusion; If all the premises were
true, then the conclusion would necessarily be true as well. If you find one of these ways
easier to understand, use it instead of the official definition, since all are equivalent.
The notion of validity (sometimes called deductive validity) is the precise technical
version of what we usually mean when we say things like "the conclusion follows from
the premises".
Here are a couple important things to notice about the concept of validity.
• A valid argument can have false premises! What validity means is that if the
premises were true, the conclusion would have to be true, too. But that's quite
compatible with having false premises. So, e.g.
• The argument would be valid if we changed 2 to "All cats speak Spanish" and 3 to
"Dogs speak Spanish," which shows that valid arguments can have false
conclusions, too, if they have at least one false premise. Indeed, if a valid argument
has a false conclusion, we know that at least one premise is false.
Logic is the science which studies valid inferences. (It is sometimes called "formal logic"
because the word "logic" is used in common parlance to refer to good reasoning more
generally. Sometimes critical thinking courses like this one are called "informal logic".)
Logic has seen an amazing growth since the late 19th Century. We will introduce some of
the basics of logic in Lectures 10-12.
Of course, there is no stronger support a conclusion could get from premises than the
connection that is in place in a valid argument. How could you beat a rock solid
guarantee? But there are many sensible inferences that are not valid ones. For instance, if
I were to reason as follows:
This would seem to be a good inference. Those premises do seem to provide good
evidence for the conclusion. However, they don't guarantee its truth. We can imagine
alternative explanations of that evidence, perhaps involving a practical joke by a
neighbour using tape recordings and so on.
It is because there are arguments like this one, where the premises make the
conclusion probable but do not guarantee it, that we have to say that someone giving an
argument claims that his premises "provide adequate support" for the conclusion. It
simply is not always the case that people intend to give valid arguments, and not all good
arguments are valid ones.
We will consider various types of non-valid but nonetheless adequate support
premises might give to conclusions later in this course. In particular, we will look at
inductive, conductive and analogical arguments.
Accepting Premises
An argument is sound if it meets the following two conditions: (1) all its premises are
true; (2) it is valid.
If you examine the definition of validity we have just stated, you will see that a
sound argument must have a true conclusion. Obviously, then, the ideal thing for an
arguer to do is to present sound arguments.
However, if you think about the advice "if you want to persuade someone, use
sound arguments," you will see that it is problematic. A crucial problem with it is that we
often quite simply are in no position to tell for sure which premises are true and which
are not. For instance, many things once thought to be beyond question in science have
later been discovered to be false. Perhaps the same will happen to our current scientific
theories. But the reason they are our current theories is presumably that they are the ones
we have the most reason to think true, and we have no way of predicting which of our
current beliefs will later be shown to be false (if we knew that, we wouldn't now believe
them!).
The upshot of this is that the advice "use sound arguments" is rather like the advice
to only marry someone who you will be happy with. It is the kind of advice we would all
follow if we could, but, alas, it is advice we are in no position to put into practice (as the
number of unhappy marriages attests).
Since we cannot always tell whether a premise is true or not, we look for a more
practical replacement for the question of whether premises are true or not when the time
comes to answer the question of whether we ought to be persuaded by an argument or
not. The replacement is the question: Is this premise something that it would be
reasonable for me to accept?
When someone asks us whether an argument is sound or not, we often cannot say for
sure. The argument may be valid, and the premises may be ones it is reasonable for us to
accept, and yet we can't say for sure that they are true (if they depend on current scientific
theories, for instance, they are as doubtful as current scientific theories). So the best we
could do is say that it is reasonable to think the argument is sound, and so that it is
reasonable to think the conclusion is true.
PHIL 145 LECTURE NOTES 34
We saw in the last section that there are also arguments where the truth of the
premises would make the conclusion probable even though the argument is not valid. If
the premises of such an argument are either known to be true or are ones it would be
reasonable for us to believe to be true, then that argument, too, would be one which
makes it reasonable for us to believe the conclusion to be true.
It will be handy to have a name for an argument which makes it reasonable for us to
believe that the conclusion is true. We will call these arguments cogent arguments, or
sometimes simply good arguments.
In what follows we will be investigating the question of how to tell whether arguments
are cogent, rather than investigating whether they are sound. The reason, quite simply, is
that the first is a question we can answer, while the second is not.
A handy way of keeping straight what questions need to be asked when evaluating
whether an argument is cogent or not is to consider the ARG conditions. It's easy enough
to remember them because 'arg' is just the string consisting of the first three letters of the
word 'argument'. However, it is more sensible to think of them out of order!
Grounds: Here what we are asking is whether the premises, if they were true, would
provide sufficient support (i.e., whether they supply good grounds) for thinking the
conclusion is true. A few things to bear in mind about this condition:
• We need to ask this question about the premises collectively. If, for instance,
we have two linked premises (e.g., "If my dog has fleas, then my bed is
infested with fleas," and "My dog has fleas"), neither by itself provides
sufficient grounds for concluding that my bed is infested with fleas, but taken
together they make a valid argument for that conclusion! So it is a serious
mistake to reason as follows: premise one isn't sufficient, and premise two
isn't sufficient, so the premises do not provide sufficient grounds for the
conclusion. You must consider the premises as a group!
• In the case of arguments which are not valid, the question of grounds becomes
one of degree. How much support do the premises provide for the conclusion?
Since this is a matter of degree, the question "Do the premises provide
sufficient grounds?" will sometimes clearly be yes, sometimes it will clearly
be no, and sometimes it will be neither clearly yes nor clearly no. (This last
possibility will occur when it provides substantial support for the conclusion,
but not enough that it is clearly correct to say the argument is a strong one.)
PHIL 145 LECTURE NOTES 35
• If the premises of an argument provide linked support for the conclusion, then
you must ask the question about them taken together. Considered alone, the
sentence "Bob is your uncle" is irrelevant to the truth of the claim "Bob is
bald." But considered together with the premise "Everyone in your family is
bald" it is clearly relevant.
• Unlike the case of the Grounds condition, we do not consider all the premises
as a group. Instead, we consider each group of linked premises or, if a premise
is not linked to any other, then we consider that premise in isolation. So if we
add to the previous argument the further claim that "Bob has big feet," this is
irrelevant to the conclusion that Bob is bald, but the other two premises
remain relevant.
It is an oversimplification to think that an argument must satisfy all the ARG conditions
to be cogent. THE TEXTBOOK MAKES THIS MISTAKE.
Suppose that 1 and 2 are true, but you measure your uncle Bob's feet and see that they are
only size 6, so that premise 3 is false. Now let's go through the ARG conditions. Both 1
and 2 are, we are supposing, acceptable, but 3 is not, so premise 3 fails on the A condition.
As we mentioned above, 3 also fails on the R condition. However, the argument does
satisfy the G condition, since by virtue of 1 and 2 the argument is valid. Furthermore,
since both 1 and 2 are true, the conclusion is true, too! So the argument demonstrates that
the conclusion is true beyond the shadow of a doubt (remember, we are assuming that we
know that 1 and 2 are true). It is therefore a cogent argument, even though premise 3 fails
both the A and R conditions.
Step two: consider whether the argument is valid. If no, go to step three. If yes, both the R
and G conditions are met. Now ask: is it still valid if we eliminate all the premises which
fail to meet the A condition? If yes, the argument is cogent, and we can stop here. If no,
go to step three.
Step three: The argument is not valid (at least when only acceptable premises are
considered). Consider whether the premises are relevant. Considering only the premises
which pass both the R and the A condition, how much support do they provide for the
conclusion. If it is enough, then the argument passes the G condition, and the argument is
cogent.
So, there are two major differences between what the text says about using the ARG
conditions and the correct use of them. We insert the question "Is the argument valid?"
into the process at step two. This makes sense because in the case of valid arguments the
question of relevance is settled "automatically", so to speak, and so this can be a real
time-saver. The second part is that we are allowing the possibility that arguments can
include irrelevant or unacceptable premises and still be cogent. We are allowing for the
possibility that we ought to be convinced by an argument even though the arguer couldn't
resist throwing in material that didn't do anything to make her case stronger. It's
important to recognize this possibility because when people actually present arguments
they often include irrelevant or unacceptable premises which don't play any essential role
in their argument, though they often haven't figured this out for themselves.
Let's return, finally, to where we started. Suppose you have investigated someone's
argument and reached a conclusion on whether it is cogent or not. What does this
decision tell you?
First, and very importantly, if you decide that someone's argument is not cogent, that
tells you that the arguer has failed to provide you with good reasons for believing the
conclusion. This by itself does not mean that you have reason to think the conclusion is
false. It also obviously doesn't mean that there are no good arguments for the conclusion.
Indeed, the arguer herself might well have another, better argument available.
(As an aside, note that a common problem people have when defending something they
believe strongly is that they offer every argument they can think of without considering
which are good ones and which are stinkers. So it is not uncommon to find someone
presenting a poor argument in defense of something even though she has other cogent
arguments available, too.)
Secondly, suppose you decide the argument is cogent. Then you must conclude that there
are good reasons for accepting the conclusion, even if the conclusion is one you don't
find appealing. This doesn't necessarily mean that you have to accept the conclusion to
remain rational. After all, we have seen that a cogent argument needn't always be a valid
argument when the premises provide strong support without guaranteeing the conclusion.
Furthermore, even if the argument is valid, many cogent arguments have premises we can
see to be acceptable even though we can't be certain they are true. Such an argument
again makes the conclusion likely, but not certain.
PHIL 145 LECTURE NOTES 37
Finally, another important point which has been implicit in everything we have said in
this chapter: When you are evaluating someone's argument, it is never enough to merely
reject her conclusion, nor even to offer reasons of your own for a conclusion contrary to
hers. If the conclusion is genuinely something that is not rationally acceptable, then there
must be something wrong with her argument. Evaluating the argument involves either
finding what that wrong thing is, or accepting that the argument is cogent after all.
PHIL 145 LECTURE NOTES 38
Lecture 7
In this lecture we will consider a few aspects of the use of language which are of
particular concern for the analysis and evaluation of arguments.
Most of the topics we will consider are in fact very complex. Indeed, several are the
subjects of extensive research in their own right, and the correct account of any one of
them is the subject of considerable debate. However, we can present a relatively
straightforward account of the points which are important for our purposes that includes
mostly material that would be agreed to by nearly all theorists.
Ambiguity
Vagueness
Loaded language and Euphemism
Definitions
In each case we will consider these topics only insofar as they affect the analysis and
evaluation of arguments.
Ambiguity
It might be surprising to find out that the answer to the question "What is a word?" is
actually the subject of significant dispute among linguists and philosophers of language.
We typically assume that the answer is the one which is somewhat facetiously put by
saying that a word is a string of letters that is written between two spaces (or the sound
corresponding to such a string).
Now, there are various reasons people have for rejecting such an account. One simple
reason is that it seems rather arbitrary to say that 'red' and 'read' are different words, while
'bug' is one word, whether it is used to refer to annoying someone, or to an ant, or to a
listening device. The fact that there are two distinct spellings in the first case but not the
second seems to be nothing more than a historical accident. And should 'read' itself be
one word, even though it is pronounced in two different ways depending on the meaning?
But since this is not a class in linguistics or philosophy of language, we won't investigate
these questions.
Instead, we will only note that some have suggested that what makes a word is a written
symbol (and the corresponding spoken symbol along with it ... the combination of a
written and a corresponding spoken symbol is sometimes called a phonograph, but I don't
think I'll introduce that term with this meaning in a section about ambiguity!) together
with a meaning.
PHIL 145 LECTURE NOTES 39
We point this out because it means that there are different ways to describe the same
phenomena depending on which theory of words one accepts. One can say either that the
word 'bug' has (at least) three meanings, or that there are three different words, all of
which are written 'bug'. The textbook uses the first manner of speaking. In any case, it is
usually easy enough to see which way someone is speaking in any given case, and so to
keep straight what they mean. But it will be useful to have available the idea that when
the same written sign or spoken sound is used with different meanings, then it's really
two different words.
The fact that the same written or spoken sign can have various meanings gives rise to the
phenomenon of ambiguity. If it is not clear in some context which of two or more
meanings a sign has, then the term is ambiguous.
Examples of ambiguity are very easy to come up with. If I say "Watch out! That food is
really hot!" it's not clear whether the food is very spicy or whether I'm telling you
something about its temperature. Puns and various other sorts of wordplay depend on
ambiguity, too.
In some cases ambiguity is very easy to spot. In others it is more subtle, or even where
it's not subtle it can pass without notice: you've probably seen a comedian stand in silence
and wait for people to get the joke when he uses a double entendre. If she hadn't waited,
the second meaning would have passed unnoticed by the audience. And, of course,
people often inadvertently say things open to more than one interpretation, much to their
own embarrassment.
When we consider arguments, unnoticed ambiguity can be the cause of people accepting
flawed reasoning as cogent. For instance, someone might be persuaded by an argument of
this sort:
There is no such thing as empty space. After all, if there is really nothing between
two objects, then they must be right up against one another. But if the space
between two objects is empty, that means that there is nothing between them.
The problem here is that the phrase "nothing between" gets used with two different
meanings here. Once it means "nothing, not even distance", while in the other it means
"nothing but distance".
Arguments which fail to be cogent because of ambiguity are common enough that they
have acquired a special label. An arguer who present such an argument is guilty of
equivocation, and we say her argument commits the fallacy of equivocation. (We also
sometimes say that the argument trades on an ambiguity.)
The fallacy of equivocation is very interesting when you consider it with respect to the
ARG conditions. Any argument which commits this fallacy fails to meet at least one of
the ARG conditions, but just which one it fails depends on how one analyses the
argument, and either of two analyses is usually reasonable, depending on how one
answers the question "what is a word?"
PHIL 145 LECTURE NOTES 40
Consider the sample argument again. One might put it into standardized form as follows:
1. If the space between two objects is empty, there is nothing between those two
objects.
2. If there is nothing between two objects, they must be right up against one
another.
So
3. No two objects could have empty space between them.
Therefore,
4. There is no such thing as empty space.
If you look at the argument as presented, it's not obvious what is wrong with it, though
the conclusion is implausible enough that it seems a safe bet that there's something wrong
with it. But what? It certainly looks like the premises provide good support for the
conclusion. Furthermore, if we consider the premises individually, each looks reasonable
enough, since in each case we are tempted when considering the premise in isolation to
read the term "nothing between" with the meaning which makes it true.
To say what is going wrong we need to draw attention to the fact that when
describing what it means for premises to support a conclusion (e.g., in the definition of
validity) we referred to "all the premises being true at once". But part of what this "at
once" means must be that the key terms of the premises must mean the same thing in all
the premises. But if we hold the meaning constant then one of these premises must come
out false. If we meant "not even distance" by "nothing", then premise one is
unacceptable. If we mean "nothing but distance", then premise 2 is unacceptable. So
when we recognize the need to hold meanings constant when evaluating premises, we can
see that at least one of the premises must be unacceptable, and so this argument fails the
A condition.
On the other hand, if we regard "nothing" as two different words when it has two
different meanings, it is more natural to analyze this argument differently. To see this we
can either rewrite the standardized argument above putting subscripts to make clear that
these are two different words, or we can replace "nothing" in either case by another
expression which has the relevant meaning for each premise. Let's rewrite the first two
premises using the second approach:
1. If the space between two objects is empty, there is nothing but distance
between them.
2. If there is nothing, not even distance, between two objects, then they are right
up against one another.
...
Looked at in this way, it is clear that the inference from 1 and 2 to 3 is not valid, and in
fact these provide no support for the conclusion at all. So the argument looked at in this
way fails the G condition.
One of the things that makes the fallacy of equivocation so interesting and sometimes so
hard to identify is precisely that it is often difficult to pin down exactly what has gone
wrong in the argument. One reason for this is that it is legitimate to accuse such an
argument of failing either the A or the G condition, depending on how one looks at it.
PHIL 145 LECTURE NOTES 41
Vagueness
People sometimes confuse vagueness with ambiguity, but they are really distinct phenomena
and once you have a handle on the distinction you'll be unlikely to mix them up.
Many of the concepts we use words to express have what we might call borderline cases,
which is to say that there are cases for which it is not clear whether or not the concept
applies. Consider colour words, for instance. It is not hard to imagine an item of clothing
for which it is not clear whether or not it should be called red. Is it red enough, or should it
be called pink instead? There doesn't seem to be a correct answer in such a case. (It is very
important to notice that the fact that in some cases there is no clear answer does not mean
that there are no clear cases. My coffee is clearly not red, and the pen on my desk clearly
is red, even though there might be room for doubt about what to say about my shirt.)
So, ambiguity has to do with a single symbol being used with two or more distinct
meanings, while vagueness is a feature of a single meaning that can attach to a word. We
saw above an example where the word "hot" was used ambiguously because it was
unclear whether spiciness or temperature was in question. But notice that the each of
these meanings is vague, too. For instance, my pizza sauce is certainly zesty and zingy,
but is it hot? It strikes me as a borderline case. Similarly, how warm does it need to be
outside before one says it is hot? Some days are clearly hot, some are clearly not hot, and
some are borderline cases.
Vagueness is a pervasive feature of language, and it is often harmless or even useful. For
instance, if we don't know something in precise detail, a report using vague language
might be the appropriate way to accurately express what we know. (Consider an
eyewitness who says "the thief was a rather large man". That can be a true, if vague,
description, and the eyewitness might be in no position to tell us anything more precise
like "the thief was 6'1" and 205 lbs.")
There are also patterns of argument which depend on the illegitimate use of vague terms.
Here is a famous and much discussed example.
Obviously something has gone wrong here. But the argument looks to be valid: putting
one and 2 together, we can clearly infer that a person with one hair is bald. This seems
fine. But now putting that together with premise 2 we get the claim that someone with
two hairs on his head is bald, which also seems okay. But if we repeat this another 9998
times, we get the obviously unacceptable conclusion. Something must have gone wrong!
It turns out, in fact, that it is a very controversial matter saying just what has gone
wrong in this argument. But all the accounts agree that the problem has to do with
applying a form of argument which is perfectly acceptable when applied to precise terms
to a vague term like 'bald'. One way or another, these accounts usually suggest that
premise 2 is not acceptable because there are borderline cases between being bald and not
bald (and so, while adding one hair doesn't make a person not bald, it brings him that
much closer to non-baldness).
Many terms of language carry an emotional charge in addition to their literal meaning.
That is, by using them we not only express a claim with certain content, but also express
our opinion, whether pro or con, simply by a choice of words. For instance, someone
might refer to me as a professor, or as an ivory tower intellectual. The first is relatively
neutral, I suppose, while the second is usually intended as an insult (even though the
people who use it usually apply it to all and only university professors).
Sometimes the "emotional charge" of what someone says doesn't take the form of
choosing a loaded word when a neutral one is available, but instead is achieved by simply
loading a sentence up with adjectives that convey what we might call "editorial
comment" (see the discussion of euphemism directly below!). So rather than saying
"DeVidi is indecisive", someone might say "That moron DeVidi is a weak-kneed, lily-
livered sot who can't make up his mind about anything." The second expresses pretty
much the same content as the first, though it also lets us know that the speaker thinks this
indecisiveness is a very bad thing.
On the other hand, euphemism is the use of deliberately bland terms to refer to something
where a more direct or blunt manner of referring to it would be alarming, embarrassing or
impolite. For instance, if a beating is described as an "altercation", or a huge deficit is
referred to as a "cash-flow problem", someone is trying to whitewash something.
However, when someone describes someone who has just died as having "gone", they are
typically simply trying to be nice.
A good rule of thumb when analyzing someone's argument is to try to replace either
emotionally charged language or euphemism by a neutral sentence which has the same
literal content (presuming you can figure out what that is).
This will make the job of evaluating whether there is anything of substance to what
someone says much easier, and will help you tell when there is nothing but bombastic
rhetoric. It is unfortunately not uncommon for someone to substitute insults (or flattery!)
for argument. Trying to find a neutral way of expressing premises will make this clear.
PHIL 145 LECTURE NOTES 43
And translating euphemisms by more literal statements will often identify arguments
which fail the G condition in a quite dramatic way. When the company accountant says
"there's no need to worry, we're just experiencing a cash flow problem", translating
"cash-flow problem" by a more accurate description could tell us that we need to be very
worried indeed!
Definitions
We needn't try to add much to what is in the textbook in our discussion of definitions.
But it is worth noting what goes wrong in an argument that uses a persuasive definition.
For the most part we are willing to allow people to stipulate what they will mean by a
term in an argument. However, people sometimes do this without admitting to it, often by
suggesting that a real or genuine instance of a certain sort of thing will have certain
properties. In any case, stipulative definitions of terms that are in common use can give
rise to problems if the person goes on to make an argument where the newly redefined
term occurs, but so does the same term in its everyday sense. For instance,
Here we might allow 1 to pass if it is presented by someone who insists that "that's what I
mean by a man!" Fortunately, though, the second premise is not generally true in that
sense (anymore).
It should be clear enough what the problem is with this sort of argument. It is an example
of the fallacy of equivocation. The only difference between it and the earlier case is that
one of the meanings is introduced into the discussion by a stipulation by the arguer.
Whenever someone stipulates a meaning for a word that is in common use, the danger of
equivocation is present. Persuasive definitions are cases where the stipulation itself is not
overt, and so the equivocation is harder to detect.
PHIL 145 LECTURE NOTES 44
Lecture 8
You will want to be sure to read these notes in conjunction with the discussion of these
matters in Chapter 5 of the textbook.
(A) A famous 20th Century philosopher of science named Carl Hempel has used the
following example of a bad argument. It is a good choice of example because to a
modern reader it seems so outrageously bad that it is funny. The argument was made by
the astronomer Francesco Sizi as an attempt to refute his contemporary Galileo's claim to
have seen satellites circling around Jupiter. (We quote the argument as it is presented in
Chapter 5 of Hempel's Philosophy of Natural Science (Englewood Cliffs, N.J.: Prentice
Hall, 1966).)
There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in
the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury
alone undecided and indifferent. From which and many other similar phenomena of nature
such as the seven metals, etc., which it were tedious to enumerate, we gather that the
number of planets in necessarily seven. ... Moreover, the satellites are invisible to the naked
eye and therefore can have no influence on the earth and therefore would be useless and
therefore do not exist.
We will be able to use this argument again in the next lecture when we discuss the
question of relevance, because there are certainly some problems with the relationship
between the evidence Sizi offers and the conclusions he wants to draw from it.
PHIL 145 LECTURE NOTES 45
More important for present purposes, many of the premises of the argument are
things we now have no difficulty seeing to be plainly false (e.g., that there are seven
metals, or the unstated premise that anything that is invisible to the naked eye can have
no influence on the earth). I don't think anyone would have qualms about admitting that
the falsity of these premises is one of their reasons for not being convinced by Sizi's
argument that there can be no moons orbiting Jupiter.
However, some authors thought that Hempel's use of this example was a bit unfair
to poor old Sizi. Sizi's argument seems laughable now, these critics suggest, but the man
wasn't, they contend, a buffoon. With respect to those premises in particular, while they
seem ludicrous to us, they were perfectly reasonable beliefs for someone who accepted
the best scientific ideas of the day (in the part of the world where Sizi lived), say these
critics.
We're not interested in trying to answer the question of whether these critics are
right to suggest that Sizi's argument was not ludicrous, given the circumstances in which
he presented it. For our purposes it is enough to note that when we ask about the cogency
of his argument (and it particular about the acceptability of its premises), there are at least
two questions we might be asking
There is a general point which lies behind this example. When we inquire into the
acceptability of a premise, we are always asking about its acceptability for someone, and
whether a premise is acceptable or not might depend on who that someone is.
However, even if an argument is addressed to someone else, we can always ask
whether we should be convinced by it. And in typical cases that's the question we're most
interested in answering anyway. We will adopt the general rule that, unless otherwise
specified, when we're talking about whether a premise is acceptable or not what we're
talking about is whether we should accept it.
(B) The next two parts of this lecture will discuss two lists of conditions, one for
acceptability and one for unacceptability of premises. We've already mentioned that these
will need to be stated in a pretty general form. Another important limitation of these lists
is that we cannot hope to make them exhaustive. That is, there are things that can make a
premise unacceptable or acceptable which will not be on our lists. We have a reasonably
thorough list, but even if we made the list a good deal longer (and so harder to work
with), that would be no guarantee of completeness.
Before proceeding, a remark about what we should do with premises to which none of the
conditions we will discuss below happen to apply. Well, the first possibility is that we
will see some other reason, beyond those conditions, for accepting or rejecting that
premise. But suppose that's not the case, either. Where does that leave us?
It seems to leave us unable to say whether the premise is acceptable or not
acceptable. So perhaps we should assign some intermediate status to it. But notice what
the effect of this is on the question of whether we should accept the conclusion of the
argument (on the basis of the argument). Supposing that the conclusion doesn't follow
from other, acceptable premises in the argument, it would seem that we can at least say
that we should not accept the conclusion on the basis of the argument.
PHIL 145 LECTURE NOTES 46
It might seem like we should add to this statement that we should not reject the
conclusion, either, since we haven't rejected the premise. But we don't need to add this
phrase, you will recall, since we do not reject a conclusion because an argument fails to
support it even when the argument is transparently a very bad one. So in those cases
where a premise is one where we simply can't make up our minds on whether it is
acceptable or not, the implications of that fact for what our attitude should be toward the
conclusion will be the same as when we have reason to declare the premise unacceptable.
Acceptability Conditions
We will begin with a list of the acceptability conditions. We will then discuss and give
some examples of those conditions where I have something to say which is an addition to
or which differs from what is in the textbook.
Remarks:
• Once again, these conditions are only guidelines, and applying them might require a
good deal of thinking.
• (Except for condition 2, of course...) The fact that a premise satisfies one of the
conditions doesn't show that it is true, only that it is acceptable.
1. There's not much to say about condition 1, except that it is supposed to cover both
cases where a cogent subargument establishes a premise and cases where there is a
cogent argument, presented either by the arguer or someone else, which has been
presented somewhere else.
3. Example: "Every living creature has some kind of reproductive system." Note here
that:
a) being common knowledge doesn't require that everyone knows it. Many
children don't know this to be true, for instance.
PHIL 145 LECTURE NOTES 48
b) once again, arguments are typically expressed with an audience in mind, and
what is common knowledge to one audience (say, fans of the books of Saul
Bellow) might not be to another audience (say, fans of Judith Krantz), so a
premise that would count as common knowledge in some context (e.g.,
"Augie March grew up in Chicago", in a lecture to a group of Bellow fans),
would not count as common knowledge at another.
c) we are discussing conditions for the acceptability of premises, not for their
truth. Merely being a matter of common knowledge, in the sense we are using
that term, doesn't guarantee that something is true. Indeed, an interesting sort
of argument (and quite a common one in philosophy) is one which argues that
a particular bit of "common knowledge" is actually false!
4. Example: "As the impoverished father of three children, I can say that it is hard not to
buy expensive sports equipment for children whose school virtually requires that they
have such equipment." The father is not an "authority" in the sense in which that term
is used here (see below: what he talks about isn't a matter requiring expertise), hence
5 doesn't apply. However, in suitable cases we operate by a version of the principle of
charity: unless we have some reason to doubt the testimony, we should accept it. Note
that there are three different sorts of preconditions on this acceptance:
(A) We have no reason for doubting the person: we don't know him to have been
unreliable in the past, or to be biased in some way;
(B) We have no special reason to doubt the claim; It is not wildly implausible, for
instance.
(C) The case is suitable, which is to say that the claim is the sort of thing that
personal testimony is suitable to establish, e.g., it's a matter of personal
experience.
So, before accepting the claim, we need to consider the subject matter (condition
(c)), the claim (condition (b)), and the person (condition (a)).
Note that the same conditions apply when the argument is not expressed by
the authority, but rather by someone who cites the authority. Indeed, this is a more
typical case. Here we need to consider the same sorts of questions about the
authority. In practice we often end up relying on the testimony of the arguer about
what the expert says in, for instance, some written source. Here we end up applying
both condition 4 and 5, since we need to judge how reliable the arguer is in her
report about what she has read elsewhere. However, if necessary we can usually
check on the reliability of this testimony fairly easily by, for instance, looking up
the cited material at the library in the case of written sources.
6. Consider this argument: Suppose wishes were horses. Then, as everyone knows,
beggars would ride. But since we also know that beggars do not ride, it follows that
wishes are not horses after all.
Now, the premise of concern to us here is the first one, that wishes are horses.
It would be a serious misunderstanding to try to reject this argument on the grounds
that this premise is false (after all, the conclusion of the argument is precisely that
the premise is false!). The arguer is not really asking you to accept the premise, in
the sense of accepting that it is probably true. Instead, as the fact that she starts the
argument with the word "suppose" indicates, she is asking you to grant that the
premise is true, for the sake of seeing what would be the case if the premise were
true. Sometimes in such cases we speak of granting the premise "for the sake of
argument".
There are two especially important types of arguments where this kind of
provisional acceptance of a premise is required. The first, of which the "beggars"
argument is an example, is called a reductio (or, sometimes, a reductio ad
absurdum). Here the goal is to prove that some statement is false. The argument
works by assuming that the statement is true, then showing that it follows from that
assumption that something we know to be false (ideally, something which is not
merely false, but which is absurd, in the sense of being self-contradictory) follows
from that assumption. We then know that the assumption leads to unacceptable
consequences, and so the assumption itself is one we should not make.
The second sort of argument is sometimes called a conditional proof. Once
again, in this sort of argument the arguer is not asking you to accept that the
premise in question is true. Instead, she wants to show that certain other things
follow. Once again, it is essentially harmless to grant the premise "for the sake of
argument", provided we remember that what is shown by such an argument, if it is
cogent, is only that some conditional claim (that is, some claim which would
naturally be expressed by saying "If ..., then ...") is true.
Both these sorts of arguments are rather hard to represent using the
diagraming techniques from the earlier lectures, precisely because these techniques
don't really make room for "suppositions" rather than standard premises. We will
see better ways of handling them when we consider formal logic in lectures 10-12.
PHIL 145 LECTURE NOTES 50
Conditions of Unacceptability
Let's look at some of the conditions for unacceptability of premises. You will notice that
unlike the acceptability conditions, not all of these conditions can be applied to the
premises considered one at a time.
3. Presumably not much needs to be said here: If you can't understand what a sentence
says, (and this is the fault of the sentence, and not simply a matter of some use of
some very technical language by a specialist in some field you don't know anything
about) it's not going to be rational for you to believe it to be true. Of course, if the
arguer is right there with you, you can ask her to clarify what she meant, in which
case she might supply a different, and more comprehensible sentence.
This argument, at premise 1, presupposes that the only reason we might have for
going to university is to get job training. This assumption is controversial and, as
such, unacceptable in the absence of supporting argument. But if it is not accepted
then we must regard premise 1 as unacceptable.
Another thing to notice about question-begging arguments is that in some cases you
will need to consider the context in which an argument is presented to determine
whether an argument is question begging or not. Consider the following argument:
Nothing in physics and chemistry will explain the fact of human imagination.
Yet human imagination exists. So there are things outside the scope of physics
and chemistry.
Considered in isolation, this argument seems to beg the question. It looks as though
the arguer is trying to convince someone who believes that minds are just brains,
and so that all mental phenomena are ultimately just rather interesting sorts of
physico-chemical phenomena, that she is mistaken. But if that is the context in
which this argument is being presented, no participant in the debate would find the
first premise acceptable unless she already found the conclusion acceptable. The
arguer begs the question.
However, the context of this argument may well be more complicated than
this single paragraph shows. That first premise might well be defended by a
(possibly long and complex) subargument. And this subargument might well be one
whose premises do not presume the truth of the conclusion. (Perhaps the arguer
bases his argument on some sort of detailed scrutiny of the kinds of things physics
and chemistry have been found capable of explaining in the past, and noted that
there is nothing like imagination in the list.) In this case we would need to check
whether that subargument is cogent before deciding whether the premise is
acceptable or not, but anyway in that case the argument would not be question
begging.
Thus we can see that there are two contrasting things to bear in mind about begging
the question. The first is that it is alarmingly common, and so you need to be on
guard against it. The second is that one mustn't get carried away by the first point
and start accusing people of begging the question when a little more context would
get them off the hook.
Remarks:
• Notice that we have not discussed "premises not being more certain than the
conclusion". Insofar as this is a problem with arguments, it is of the same sort as
begging the question, in that this is a flaw which makes the argument unsuitable if
one's goal is to convince someone as something. However, this strikes me as
something which doesn't deserve to be listed as a separate unacceptability
condition. If the premise is very controversial, then it runs afoul of condition 4. If it
is not controversial, and yet it is less certain than the conclusion, then it is very
unlikely that the arguer was really trying to convince her audience of the
conclusion, anyway. Which brings us to another point.
PHIL 145 LECTURE NOTES 53
• People typically use arguments to convince people of things, but that's not the only
thing people use arguments for! As it happens, for certain purposes arguments are
quite useful, even though they are question begging or have premises less certain
than the conclusion and so would not be useful if the goal was to convince people
of the conclusion.
An example of this is that the famous philosopher and mathematician
Gottfried Leibniz offers in one of his writings a proof that 1+1=2! Now, it's most
unlikely that Leibniz thought anyone who could read the rather complicated
material that came before this little argument needed convincing that 1+1=2. And
it's very unlikely that he thought the principles he appealed to when doing so were
more certain than this elementary mathematical fact. In fact, his goals in presenting
this argument were of a completely different sort, and so it's wrongheaded to object
to the argument because it has this "flaw". Similarly, it's unlikely that anyone would
try to convince someone that Socrates is mortal by using the old warhorse of an
example of an argument, "Socrates is a man, and all men are mortal, so Socrates is
mortal." The argument "begs the question" (who would accept the second premise
unless she already accepted the conclusion?), but that is only a problem if someone
is trying to convince someone else to accept the conclusion.
PHIL 145 LECTURE NOTES 54
Lecture 9
Relevance
In this lecture we will deal, in a preliminary way, with the relevance condition for the
cogency of arguments. In Lecture 6 we rejected some parts of the way the textbook handles
the ARG conditions, in particular the ideas that irrelevance of one or more premises in an
argument was always sufficient to make the argument non-cogent. We also revised the order
the textbook recommends for the application of the ARG test, suggesting that it usually made
more sense to ask whether the argument is valid before going on to consider whether the
premises are relevant or not. It would therefore make some sense to cover deductive logic
next, and then move on to consider the relevance condition. We do not do so primarily
because we are following the order in which topics are covered in the textbook.
• First, the issue of relevance really arises when we are dealing with non-valid
arguments. If an argument is one you can see to be valid, we need not worry about
the R condition.
• Secondly, as we will see, the fallacies of relevance we will discuss in this lecture are
not common in situations where people can be expected to present carefully thought
out, serious attempts to present a rational case in support of some conclusion. The
fallacies we will look at here are the sorts of things that are likely to occur when
someone is deliberately trying to bamboozle somebody (e.g. in advertising, or when a
politician tosses out a red herring to get people talking about something else besides
an issue with which he is uncomfortable), or where people are engaging in informal
and heated debate, and so cannot carefully scrutinize the arguments being tossed
around. This doesn't mean these fallacies are not worth knowing about, but it is a
good idea to remember where you are likely to encounter them. They are really part
of the "self-defense" part of critical thinking, rather than the part of critical thinking
which could be expected to be much help in academic pursuits, for instance.
For the most part in what follows we shall ignore the idea of negative relevance, and
often we'll say "relevant" when we mean "positively relevant." This is particularly likely
when we are paying attention to the R condition, since; after all, what is required for
cogency of an argument is positive relevance.
1. We noted something in the introduction that is worth repeating again: the question
of relevance doesn't need to be raised if it is clear that an argument is valid. This
should be borne in mind when reading the following remarks.
2. Often when evaluating a premise you will have some reason to question its
acceptability, but will also be certain that it is irrelevant. In such cases the question
of acceptability can be simply forgotten. If the premise is irrelevant (look again at
our definition of relevance!), then it does not support the conclusion in the slightest.
Whether it is acceptable or not, you might say, is irrelevant.
3. Suppose that all of the premises are irrelevant. In that case the argument is dead in
the water. It doesn't matter what one thinks about the acceptability of the premises.
4. One might put this by saying, as the textbook does that irrelevance is a fatal flaw in
an argument, but a little care is needed. The fact that one or more premises are
irrelevant isn't necessarily fatal, since there may be other premises that do all the
work. Irrelevance is a fatal flaw when all of the argument's premises are irrelevant.
5. Recall here what was said in Lecture 6 about linked premises: when premises are
linked, we must consider them as a group when asking whether they are relevant,
rather than considering them one at a time.
A somewhat more subtle point is that by adding suitably outlandish unstated premises
one can make even the most transparently irrelevant point into a relevant one.
PHIL 145 LECTURE NOTES 56
Fallacies of Irrelevance
The fallacies we will consider here are, as will be clear from their descriptions, rather
transparent flaws. Looked at out of context, it is rather amazing that anyone would ever
commit such an egregious blunder, and even more amazing that anyone would ever be
taken in by an argument which is so obviously bad. So I'd like to begin with a couple of
remarks about the red herring fallacy.
In fox hunting, the dogs are sometimes diverted from the hunt by dragging a dried,
smoked, and salted herring (a "red" herring) across the trail. At that point the dogs lose
their interest in the fox's scent and pursue the herring. This happens in arguments too.
One who is in a debate of some sort with another makes a remark that diverts the other
from the issue at hand and causes her to begin debating an irrelevant issue, is said to
commit the red herring fallacy.
PHIL 145 LECTURE NOTES 57
This is a favorite tactic of politicians. It's likely you've seen scenes from Question
Period in the House of Commons where someone commits this fallacy. Suppose, for
example, an opposition politician asks the Prime Minister about some unkept promise from
a recent election campaign (say, for a tax cut "to help working families"). As often as not,
the answer will not have anything to do with the question asked, but will instead raise some
policy of the opposition party which the PM thinks is unpopular ("If the Third Party is so
concerned about working families, why do they oppose a national daycare program?"). The
hoped for effect is that any ensuing discussion will be about the questionable policy of the
opposition party, rather than the government decision not to implement at tax cut.
Notice that in this case the fallacy is a deliberate attempt to divert the debate away from
the issue at hand. Indeed, all of the fallacies we will discuss in any detail in this lecture
are ones which are sometimes called "diversionary fallacies", and so they are all different
versions of the red herring fallacy. However, some of them are common enough types
that they have more specific names of their own.
Often it is not easy to distinguish red herrings from other fallacies of the sort discussed
here; the boundaries are indistinct. Getting the right name is not the most important thing. If
in doubt, call it a red herring. What is more important is to notice that a diversionary tactic
has been introduced and that owing to the tactic irrelevance has entered the argument.
The case of politicians, and of course also of advertisers who make use of diversionary
fallacies, makes clear part of the answer to the question of how it could be that people
could ever offer arguments which commit such gross logical blunders. Sometimes people
do it on purpose to divert people's attention from the issue at hand. But that's only part of
the answer, and it doesn't answer the question of why it ever works!
A second important point is that all the fallacies considered here typically arise in
contexts where one person is challenging the position of another. The power of each of
these fallacies can be grasped when we think of the situation of a third party who is
addressed by the one who commits the fallacy. If you consider again the example from
the House of Commons, the reason it might well work is that while the opposition party
might care about the tax cut, many of the people watching the debate might have very
strong feelings about daycare. Indeed, it really doesn't matter too much to the PM
whether someone reacts by jumping to the defense of the opposition party's daycare
policy. Whether they attack it or defend it, they are talking about something else besides
the original issue.
Very often this seems to be the key to whether an instance of one of these fallacies
is spotted or not. If the topic raised by the diversion is one people care about more than
the original issue, then they will very often not be able to resist the urge to begin
discussing the topic they care about more.
1. Straw Man Fallacy: A few things need to be in place before it is legitimate to say
someone has committed this fallacy. First, they must be trying to refute someone
else's position on some issue. Secondly, wittingly or unwittingly, they must
misinterpret that other position and attribute a view other than the one held by that
person or group to her or them. Finally, they must "refute" the position by attacking
that view which is not the one the person or group holds. This view which is not the
view held by the person or group in question is called the "straw man."
If you argue in this way, you obviously make a mistake. The premises and
argument you employ to refute the other person's view are irrelevant, since they
bear on the acceptability or soundness of the straw man, not the person's actual
position.
This is quite a common problem with arguments in debates over issues which
are very emotional. When someone disagrees with something which is one of your
deeply held convictions, it is often very hard to see that they could possibly have
any grounds for such an outlandish belief! In such cases it is very common to see
people attributing things that nobody believes to whole masses of people, often
because they do not listen carefully to what another person has to say in support of
her view.
A very common form this fallacy takes is to take an argument in support of a
subtle and qualified conclusion, then to argue against it as though the person
offering it were defending an unqualified claim. For example, consider someone
who is pro-choice on the question of abortion responding to someone, call him B,
who argued for the conclusion that "Abortion should not be legal in cases where it
is simply a matter of convenience for a woman to not have a child." If the pro-
choice person responds by saying "You can't ban abortion because that would result
in enormous misery for victims of rape and incest, not to mention their children,
and what about women whose health is at risk?", he has committed the straw man
fallacy. There are indeed people who advocate a blanket ban on abortion, but the
person he is arguing against here is not one of them. So while there may indeed be
good reasons for rejecting the position defended by B, he has not presented one
with his response.
2. Ad Hominem Fallacy: This is well described in the textbook, and again is easily
seen to be a mistake. It is sometimes called "poisoning the well," where the "well"
is the source, the person who holds a position one wants to reject. Usually, this
attack on the person who holds the position one wants to refute takes the form of
impugning her or his character. "I wouldn't believe what he says about free trade.
He's a convicted wife-beater."
"Don't accept his claim that he saw his mother (the accused) up on the roof
when the murder occurred; there is plenty of evidence that he is more
concerned about the well-being of his mother than about justice."
3. Guilt by Association Fallacy: In the ad hominem fallacy, the person is attacked and that
attack is used to discredit her beliefs. In the fallacy of guilt by association, a position
(not the person who holds it) is linked with something else of which we may disapprove,
and that linkage is used to discredit the position.
4. Other fallacies of irrelevance: There are several other fallacies of irrelevance which
have traditional names. However, I think they are too obviously bad to need much
discussion here, so we will merely mention them. You might want to know the Latin
names out of curiosity, though you won't see them on the exam for this course!
To begin with, the fact that most people are going to vote for the Liberal Party (if
they are) obviously doesn't at all suggest that you should. Earlier logicians used a Latin
expression to label the fallacy embedded in this sort of "appeal to the populace":
argumentum ad populum. Another and similar mistake consists in appealing to force.
One says, in effect, you should believe what I say, or else I'll knock your teeth out; the
fallacy here, in Latin, was called argumentum ad baculum. If instead one appeals to
pity, as in "You should believe me, or else I'll be very unhappy," the fallacious
reasoning, again in Latin, was called argumentum ad misericordiam.
Lecture 10
You should recall from our earlier discussions that validity, sometimes called deductive
validity, is as good as support between premises and conclusions could be. If an argument
is valid, then we can be assured that the G condition is satisfied, and the R question
doesn't even arise.
Formal logic is the science which studies valid patterns of inference. If an argument
presented by someone in some particular situation follows one of these patterns, the only
question which remains when evaluating the argument is whether or not the premises are
acceptable. During the twentieth century the field of formal logic grew enormously, and
developments in formal logic have played a fundamental role in the evolution of modern
philosophy of science, linguistics, computer science and mathematics. We are obviously
not going to be able to do more than scratch the surface of these investigations in this
course. But even this much formal logic can be a real benefit for understanding why
arguments are structured the way they are and the sorts of things that can go wrong with
the relationship between premises and conclusions in bad arguments.
• First, we will introduce (or, in several cases, reintroduce) some of the fundamental
concepts of logic and, to get a deeper understanding of them, we will consider how
these concepts are related to one another.
We will begin with a pair of definitions that you have already seen in earlier lectures.
Validity: An argument is (deductively) valid if, and only if, there is no possible case in
which (1) and (2) are both true:
(1) All of the premises of the argument are true.
(2) The conclusion of the argument is false.
Logical Truth: A statement is logically true if, and only if, it is not possible for that
statement to be false. (We sometimes call logically true statements tautologies.)
Logical Falsity: A statement is logically false if, and only if, it is not possible for that
statement to be true.
Logical Equivalence: Two statements are logically equivalent if, and only if, it is not
possible for them to have different truth values.
Consistency: A group of statements is consistent if, and only if, it is possible for all of
them to be true at once. A group of statements which is not consistent is inconsistent.
A simple list of definitions can be intimidating when you first run into it. Let's look at
these definitions a bit more closely.
• When we want to talk about a group of sentences, it is sometimes useful to put them
inside braces (i.e., inside { and }), with a comma between each member of the
group. Here is an inconsistent set of sentences: {Bob has lots of hair, Bob is bald}
These sentences can't be true at the same time. Note that either of them might be
true, and if Bob shaves his head from one day to the next which one is true might
change. But since they couldn't both be true in the same situation, the set is
inconsistent.
• Notice that, of course, we are assuming that 'Bob' refers to the same person in both
sentences. Some authors carefully distinguish between sentences and statements, so
that a statement is something asserted on a particular occasion (so we are talking
about a particular Bob when we make a statement), while a sentence can be used on
different occasions to say different things. Rather than wrestling with this
complication, we will use 'sentence' and 'statement' more or less interchangeably in
these notes, and count on the good sense of the reader to keep the meanings of
words fixed for the duration of examples like this one.
• Notice that a consistent set of sentences doesn't have to be a set of true sentences. In
fact, all the sentences in the set might actually be false. What is required is that they
could all be true at once. So, for instance, here is a consistent set of false sentences:
{Dave is bald, Dave is divorced}. These sentences are both false (I'm happy to say),
but it is possible that they both be true.
• Finally, when we use the word 'group' in the definition of consistency, we have to
allow for groups with only one member.
Notice that logical truth and logical falsity are quite different from ordinary truth and
falsity. Of course, a logical truth is true, but it isn't like "Dave is married". A logical truth is
not merely true, but it also could not be false. As we've just noted, "Dave is married" could
be false (and indeed it once was). A logical truth would be something like "Bachelors are
unmarried males." That sentence is true, come what may. If all the males in the world were
married, that would mean that there are no bachelors, but it would not mean that there were
married bachelors. Similarly, "Dave is married" is not logically false. A sentence which is
neither logically true nor logically false is sometimes called contingent.
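If you happen to know a little programming, the following toy sketch may help make these
definitions vivid. It is only a sketch: the choice of Python, and the pretence that a
"possible case" is just an assignment of True and False to two basic sentences, are my
own illustrative assumptions, not part of the course.

    # Toy sketch: pretend a "possible case" is an assignment of
    # True/False to two basic sentences, P and Q (an assumption made
    # purely for illustration).
    from itertools import product

    cases = list(product([True, False], repeat=2))  # the four cases

    def logically_true(sentence):
        # true in every possible case (a tautology)
        return all(sentence(p, q) for p, q in cases)

    def logically_false(sentence):
        # true in no possible case
        return not any(sentence(p, q) for p, q in cases)

    def consistent(*sentences):
        # some single case makes all of them true at once
        return any(all(s(p, q) for s in sentences) for p, q in cases)

    hairy = lambda p, q: p       # "Bob has lots of hair"
    bald = lambda p, q: not p    # "Bob is bald"
    print(consistent(hairy, bald))                    # False
    print(logically_true(lambda p, q: p or not p))    # True
    print(logically_false(lambda p, q: p and not p))  # True

Notice that hairy and bald are each true in some case, but no single case makes both
true, which is exactly the point made about Bob above.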
As an exercise that should help you get a handle on these concepts, decide whether each
of the following statements is true or false, and write down your reasons for saying so,
before you go on to read the explanations below. If it helps, in cases where the claim is
false you might try to come up with a simple example.
True or false?
1. If an argument is valid, then all of its premises are true.
2. If an argument is valid, then its conclusion is true.
3. If an argument has true premises and a true conclusion, then it is valid.
4. If the conclusion of an argument is logically true, then the argument is valid.

1. False. Consider this argument:
Dave is an aardvark,
Therefore,
Dave is an aardvark.
This argument is valid since if the premises are all true (all one of them!), then the
conclusion is true, too. But the premise is false (take my word for it). The definition
of validity tells us that in a valid argument truth of all premises guarantees truth of
the conclusion, but in cases where at least one premise is false we can't tell anything
about the conclusion one way or the other. (This should seem familiar ... it's
essentially why we still need to consider the A condition even in the case of valid
arguments. Valid, you will recall from earlier lectures, is not the same as cogent).
2. False. This should be clear from the previous example, which has a false
conclusion. What do we know if an argument is valid and it has a false conclusion?
Well, since condition (2) from the definition of validity would be true in that case,
condition (1) cannot be true. That is, at least one premise must be false.
3. False again! People sometimes find this one tricky. Once again, an example might
be helpful.
Dave is married.
Therefore,
Dave teaches philosophy.
Here both the conclusion and the premise are true. However, if, say, the University
of Waterloo was closed tomorrow by the provincial government and I was out of a
job then the premise would be true and the conclusion false. And the definition of
validity requires that it be impossible (and not just false) that all the premises be
true while the conclusion is false.
4. True. Note that the question says the conclusion is logically true, and not just that it
is true. To be valid, it must be impossible that (1) and (2) listed in the definition of
validity be true at the same time. But if the conclusion is logically true, the
conclusion can never be false (look at the definition of logical truth if you don't see
why), and so (2) can never be true. If (2) can never be true, then (2) and (1) can
never both be true. So the argument is valid.
Finally, to make sure you see what is going on in question 4 it is probably a good idea to
see why essentially the same idea can be used to show:
if the premises of an argument form an inconsistent set, then the argument is valid
no matter what the conclusion happens to be.
We mentioned in an earlier lecture that arguments which beg the question are valid, and
they might even have true premises, but they are never of any use if our goal is to
persuade someone of something. We can say something similar about the fact that an
argument with inconsistent premises is automatically valid, i.e. that "anything follows
from a contradiction." These arguments are valid, but they cannot possibly satisfy the A
condition, since it is impossible for all the premises to be true at once. So such an
argument could never be cogent.
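Continuing the toy Python sketch from above (with the same caveats), it is easy to see the
point mechanically: if the premises are inconsistent, there is no case in which all of them
are true, and so certainly no case in which all of them are true and the conclusion false.

    # Toy sketch continued: validity checked by brute force over our
    # pretend "possible cases" (an illustrative assumption, as before).
    from itertools import product

    cases = list(product([True, False], repeat=2))

    def valid(premises, conclusion):
        # no case where all premises are true and the conclusion false
        return not any(all(pr(p, q) for pr in premises) and not conclusion(p, q)
                       for p, q in cases)

    A = lambda p, q: p
    not_A = lambda p, q: not p
    unrelated = lambda p, q: q   # an arbitrary, unrelated conclusion
    print(valid([A, not_A], unrelated))  # True: anything follows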
There is an important moral to be drawn from these examples. Logic is a study of notions
like validity, consistency, and so on. It can tell us a lot about whether arguments satisfy
the G condition. However, it cannot tell us much about whether premises are true, except
in the special case where they're either logically true or logically false. That is, whether
an argument satisfies the A condition is not a matter of logic, in most cases, because in
most cases the premises of an argument will be contingent.
Another way of putting this is by saying that logic can tell us whether an argument is
valid, but it cannot tell us whether it is cogent.
As exercises, try to figure out why each of the following claims is false, and give a clear
explanation. If it helps, give a counterexample.
So far in our discussions of validity we have used examples where it is very easy to see
whether or not a given argument is valid simply by reflecting on it a bit. In practice,
though, things can be considerably more difficult.
To take an extreme example, start with a formula you have probably seen.
xⁿ + yⁿ = zⁿ, with n = 2, is an equation describing the relationship between the lengths of
the two sides of a right triangle and its hypotenuse, a fact known as the Pythagorean
Theorem. It is well known that there are integers which can be plugged in for x, y, and z
which make this equation true (e.g., 3, 4 and 5). But suppose you consider putting n
greater than 2. It was conjectured long ago (by a mathematician named Fermat) that there
are no integers which make this equation true for any n greater than 2. (In fact, Fermat
claimed to have a proof of this fact, so the claim became known as Fermat's Last
Theorem.) For a long time nobody was sure whether this claim was true or false. After
some centuries of mathematicians trying to solve this problem, a mathematician from
Princeton University, named Wiles, finally came up with an extremely complicated and
lengthy proof that it was true just a few years back.
Now, the reason this is relevant to us is just this: It's pretty well known among
mathematicians what needs to be done to set out a few postulates and definitions to
define the system of natural numbers, and Fermat's Theorem is a statement of number
theory. What Wiles showed, in effect, by constructing his proof, is that the argument with
these postulates and definitions as premises and Fermat's Theorem as conclusion is valid.
And the simple lesson for us is that there can be arguments which even some of the best
minds on the planet can puzzle over for centuries without being able to determine
whether or not they are valid.
The fact that we can't simply tell whether an argument is valid or not by looking, or
by carefully thinking about it, is a big part of the reason that we need a science of
validity.
One thing which makes it very difficult to tell whether or not an argument is valid is the
huge variety of at least superficially different kinds of arguments. Formal logics are,
among other things, methods for considering the class of valid arguments systematically.
Notice that we are using the plural formal logics here. There is no one formal logic
which gives a satisfactory account of all valid arguments. And even if one did, it
would have to be extremely complex. What we want from a formal account of validity is
something that is both reasonably powerful and reasonably easy to work with. The price
we pay for something not too hard to work with is that some validities end up being left
out of the picture.
The first system we will consider in this course is called sentential logic. It is called this
because it considers arguments whose validity depends on the way complex sentences are
composed out of simpler, declarative sentences by using grammatical devices known as
sentential connectives.
Consider, for example, this argument:

He was rich, but he was handsome.
Therefore,
He was rich.

This argument is clearly valid. Notice that it would also be valid if the conclusion was
"He was handsome". But what is more important to notice here is that it doesn't matter
what the particular sentences are in the argument. As long as we put declarative sentences
on either side of 'but', then repeat one of the sentences from one of the sides as the
conclusion, the argument will be valid. We don't need to know anything about, for
instance, Boolean algebras, to know that

Every Boolean algebra is distributive, but no Boolean algebra is empty.
Therefore,
No Boolean algebra is empty.

is valid.
To represent this fact we will use capital letters as sentential variables. This just means that
we will use them as place holders for simple declarative sentences. If we use the same letter
twice, that means that the same sentence is in both places. If we use different letters that
means that these might be different sentences (though they might be the same, too).
Now, it is a fact of the grammar of English that if you have two declarative
sentences, you can get another declarative sentence by putting them on either side of ",
but". And, as we have just seen, any argument of the form

A, but B
Therefore,
B

is valid.
Think back to the first lectures of this course, when we worked on standardizing
arguments. One important thing to do when understanding the argument someone is
presenting to you is to make sure that you reword things so that if an arguer says
essentially the same thing in two different ways, you use a consistent wording to express
that point on all occasions in the standardized version. We need to do this to be able to
not overlook important relationships between premises when we get to the stage of
evaluating the argument. For instance, if part of a complicated premise means the same
thing as something which is expressed in different words in another premise, failing to
see this could lead you to say that a valid argument is invalid. When the time comes to
"translate" arguments into a notation which uses sentential variables, this requirement
will show up again as the requirement that we should use the same sentence letter each
time a sentence with the same meaning shows up somewhere in the argument.
To make a similar but slightly different point, suppose you are hired by some 18th
Century army to calculate trajectories on a battlefield so the cannons can be aimed
accurately. If you are asked "where is this cannonball going to land" and you ask "what
color is that cannonball?", you're going to be out of a job pretty quickly. Why? Because
the color of the cannonball is irrelevant to where it is going to land. Notice that this
doesn't mean that it is irrelevant to everything you might do with the cannonball (a green
one would match the decor of my living room, for instance, while a blue one would not).
Nor does it mean that there is no difference between blue and green cannonballs. It only
means that the differences between blue and green cannonballs are irrelevant to the
purposes at hand.
Similar phenomena arise for sentential connectives. There are connectives which
are importantly different for some purposes, but which for the purposes of argument do
not differ at all. For instance, the argument patterns described above for 'but' are also
valid for 'and'. For instance,
A and B
Therefore,
B
is always valid. Indeed, all the valid inference patterns for 'but' are also valid patterns for
'and'.
That is not the same thing as saying that 'and' and 'but' mean the same thing. People
say "A, but B", very roughly speaking, to indicate that B is surprising in light of A, while
'and' doesn't usually carry this editorial content. This editorial content, though, is like the
color of the cannonballs, from a logical point of view. What we care about are the logical
properties of these words, which is only part of their meaning. Just as it would be very
inefficient to take into account such properties as color when calculating trajectories,
because they are not relevant, so it would be inefficient to take into account things like
the difference between 'but' and 'and', since they are not relevant when doing logic.
We therefore introduce a single symbol into our logical system, '&', which will play
the role of 'and' and of 'but', when those words are used as sentential connectives. So, for
any sentence letters A and B, "A & B" is a grammatical sentence of our formal system.
That sentence can, of course, be read either as "A and B" or as "A, but B".
What does "and" mean? Well, if I say "Dave is tall and Dave has brown hair", that
amounts to pretty much the same thing as if I had said "Dave is tall," then later had
asserted "Dave has brown hair". To say that the "and" sentence is true is just to say that
the sentences on either side of "and" are true. What we have just seen is that though "but"
means a bit more than this, as far as logic is concerned we can treat "but" as though it
means the same thing.
We are going to call our symbolic language 'SL' for "sentential logic" or "sentential
language." It will include sentence letters and connectives like '&'. An important skill we
need to have if we're going to use this formal system to learn things about arguments that
people present in English is the translation of English into SL.
First of all, remember that we can only use & to join together two statements. So before
we can decide that "A & B" is a reasonable translation of some sentence in English, we
need to decide just which statement each of A and B is standing for in the present case. If
our example is something straightforward like "Joe is a farmer and Freda is a truck driver,"
it's pretty obvious how we should proceed. So, in a case like this we will begin by listing
what each sentence letter stands for, thus
J: Joe is a farmer.
F: Freda is a truck driver.
J & F.
Sometimes, though, the grammar of English sentences isn't so easily mapped onto the
artificially simplified grammar of SL. In English we might well say "Joe and Freda
passed the exam." Here we don't have "and" between two declarative sentences. But that
sentence obviously means the same thing as "Joe passed the exam and Freda passed the
exam". Once we see that, it's clear that we can handle that sentence much like the last
one. Similarly, if I joke about my sports team by saying "We're small but slow", that
could be translated as "Dave's team is small but Dave's team is slow", and so translated
into a sentence of SL using &.
However, you need to be rather careful about this. Don't let the fact that two
sentences of English look like they have the same structure fool you into thinking that
they must be translatable into SL in the same way. For instance, "Joe and Freda are good
friends" does not mean the same thing as "Joe is a good friend and Freda is a good
friend". So when the time comes to translate, it's is important to consider the meanings of
English sentences, and not just their apparent grammatical form.
We have mentioned that for the purposes of logic "and" and "but" can both be translated by
"&". There are many other English language sentential connectives which allow us to do
the same. For instance, the following words and phrases can typically be translated by &:
although
even though
however
in spite of the fact that
despite the fact that
yet
notwithstanding the fact that
Translating Using v:
Unfortunately, while '&' can take the place of many of the sentential connectives of
English, it won't do for all of them! We need more connectives than that in SL!
Another connective which is very useful is one which allows us to say of two sentences
not that both are true, but instead that at least one of the two is true. We will use "A v B"
to translate sentences of English which mean that at least one of A and B is true.
What sorts of sentences of English say this? Very often such sentences use the word "or",
or the combination of words "either ... or ...". Think, for instance, what it means to say
"either you will study for the exam or you'll get a bad mark." The speaker here is insisting
that at least one of those two sentences is true, that is she is saying that we can rule out
the possibility that you don't study and you still get a decent mark.
Note that when we say something like this we don't usually mean to suggest that
studying will make it a sure thing that you don't get a bad mark. The claim is just that if
you fail to do the one, you are sure to do the other.
Another English word which is often used with the same intention is "unless". We
could have said much the same thing as our example sentence by saying "unless
you study, you will fail" or "you will fail unless you study". It would, for instance, be
quite honest for a professor teaching a course in calculus to say this to most of his
students. Once again, though, you can't accuse him of dishonesty for saying this just
because he knows from observing past classes that even some of the students who do
study will fail. When he says this he doesn't claim that studying ensures passing, only
that not studying ensures failure!
We should note here that there are occasions where "or" and "unless" can be translated as
a connective between two statements even though that is not what it is doing
grammatically in English. This is very similar to a point made about "and". "Joe or Sue
will help" means the same thing as "Joe will help or Sue will help", and if we say "Joe's
hyper unless sedated" we mean that Joe is hyper unless Joe is sedated.
A final point. Some authors on logic contend that in special cases we use "A or B" and
"A unless B" not to mean that at least one of A and B is true, but instead that exactly one
of A and B is true. When we do this we are usually relying on some special feature of the
context to let our hearers know that this is what we mean. The usual example is a line on
a menu in a restaurant which says "For dessert you can have pie or cake." It's just not
going to work to try to convince your waiter that since "or" means at least one he should
be willing to give you both pie and cake. It is the fact that the word is being used on a
menu which clues in people who are used to eating in restaurants that the word has a
special "exclusive" sense, in effect meaning one or the other and, by the way, not both!
With "unless", it is usually something about the relationship between the options
that rules out both of them holding at once. "I'll make it to my daughter's concert unless
I'm dead!" Clearly, if you're dead you're not going to be at the concert, so in saying this I
might well be taken to be insisting that exactly one of these options holds. It seems just as
natural in this case to say that I'm just relying on the background knowledge of my
listener, that I just expect her to know that dead people can't go to concerts, and so if I
give an argument which depends on my not being able to do both then "not both" is an
additional unstated premise. (Indeed, in a longer discussion of these points I would argue
that this is always so.)
In any case, you should notice that the usual meaning of "or" and "unless" is that at
least one option is true. So if you are translating what someone says into SL and you are
tempted to attribute an "exclusive" or rather than a standard "or" to them, you should be
able to state some clear reasons for wanting to do so in that case.
However, we can't translate an "exclusive or" into SL yet, even if we want to for good
reasons, because we can't handle sentences which deny things yet. Let's remedy that now.
Translating using ¬:
We will use the symbol '¬' in front of a sentence of SL to mean "it is not the case that ...".
It is often called a "negation symbol", and it is typically read "not" for brevity, but it is
useful to remember that it means "it is not the case that" to avoid confusion. It should
become clearer why that might avoid confusion as we go along.
The first thing to notice about symbolizing with ¬ is that it is a good idea, if a particular
sentence is stated in negative terms, to translate it by first of all picking a letter to
symbolize the positive statement, then prefacing it with the negation symbol. So, for
"Bob is not smart" you would first of all notice that the sentence means the same thing as
"it is not the case that Bob is smart", and so you would symbolize it:
S: Bob is smart.
¬S.
It is important to remember that the negation symbol means only that what follows it is
being denied. There are a couple of ways that it is easy to get mixed up about this.
1. The denial of a sentence is not the same thing as the assertion of a contrary of that
sentence. For instance, saying that Bob is not smart is not the same thing as saying
that Bob is stupid. There's lots of open terrain between smartness and stupidity, and
most of us live there! So if you are translating a passage from English into SL and it
is claimed at one point that Bob is not smart, then later that he is stupid, you would
need to introduce two different sentence letters, one for "Bob is smart" (which you
would have to preface with a negation symbol) and one for "Bob is stupid".
2. Once again, you can't be mechanical about translating into SL. You can't, for
instance, go from the observation that "Bob is not smart" (and many sentences like
it) can be handled this way to the claim that whenever "not" qualifies the predicate
in that way you can translate by putting a letter for the sentence with the "not" deleted,
then putting a negation sign in front. For instance, the negation of "All dogs bark" is not
"All dogs do not bark". The second sentence is actually ambiguous, but it usually
means that each and every dog is a non-barker. But that's not the denial of "All dogs
bark". The denial of that sentence is "It is not the case that all dogs bark", and that is
true if there is even one non-barker among the dogs. So, especially when quantity
terms like "all", "some", "every", "each" and their kin are involved, you need to be
careful about how you symbolize things. (A small illustration of the "All dogs bark"
point appears just after this list.)
We will consider some more sophisticated tools for handling this sort of case
in Lecture 12. For now, we will simply note that you need to be rather careful about
these cases. If a passage includes the claims that "All philosophers have bad breath"
and "No philosophers have bad breath", you are going to need two different
sentence letters, because neither of these sentences is a denial of the other. If you
want to symbolize the second as a negation, you will want to do the following: Put
A: All philosophers have bad breath. S: Some philosopher has bad breath. Then the
two sentences could be symbolized as A and ¬S.
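Here is the promised small illustration of the "All dogs bark" point, again as a Python
side illustration with made-up data rather than anything official:

    # "It is not the case that all dogs bark" versus "all dogs do not
    # bark". The dogs and their habits are invented for the example.
    barks = {'Rex': True, 'Fido': False, 'Spot': True}

    print(not all(barks.values()))             # True: not all dogs bark
    print(all(not b for b in barks.values()))  # False: not every dog
                                               # is a non-barker

One non-barker (Fido) is enough to make the denial true, while "all dogs do not bark"
requires every dog to be a non-barker.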
Symbolizing with ⇒:
We have one more symbol to introduce to our formal language SL. It, too, is one we can
use to represent many different kinds of English language statement. It is often called an
implication symbol, and we will use that terminology even though we will actually
symbolize a lot of things which sticklers for correctness might insist are not really
implications.
The basic idea we express when we say that A implies B is that if A is true, then B must
be true, too. Notice that this does not commit us one way or the other on the question of
whether A really is true or not. And it doesn't say that B implies A, too. If I say that
snowy weather implies cold weather, I am telling you that if you look outside and see
snow then you can infer that it is cold outside. But that certainly doesn't mean that I'm
claiming that it is snowy. And it certainly doesn't commit me to the obvious falsehood
that if you hear on the radio that it's cold outside then you can infer that it's snowing!
We will write 'A ⇒ B' for "A implies B" or "B is implied by A" and like expressions.
It is useful to have the following terminology: The sentence which comes before the
arrow is called the antecedent, while the sentence which comes after the arrow is called
the consequent. Don't be misled by this terminology ... in learning foreign languages you
will be warned about "false friends", which are words of the new language which sound a
lot like some word of the language you already know, but which have (sometimes
embarrassingly) different meanings. (A common example: the French word actualité
means "current events", or "news". It doesn't mean, as one might first suspect, "actuality",
and to think it does can lead you to serious confusion!)
The word 'consequent' is a potential false friend if it makes you think of
"consequence". In the example above, "it is cold" would be the consequent, because we
would symbolize the sentence as S ⇒ C, with C standing for "it is cold." However, if you
mistake consequents for consequences, you're likely to get things all mixed up because
you'll think "well, the cold isn't a consequence of the snow ... other way around, if
anything." So, don't make this mistake!
Notice that the arrow is not to be read as saying, for instance, that there is some sort of
causal connection between the antecedent and the consequent. It just means that the
antecedent and consequent are related in such a way that it is legitimate to infer the
consequent if we find out that the antecedent is true. And there are a large number of
different ways we can express the idea that two claims are related in this way in English.
It is important for the purposes of this discussion to remember what we said above
about "but" and "and". They mean different things, but for present purposes we can
ignore the differences and concentrate on what they have in common. We are going to do
the same for the various sorts of sentence we can symbolize using the arrow. We're not
saying that they all mean the same thing. We're only saying that they all share the feature
of being assertions that two sentences are related so that if you figure out one is true, you
can legitimately infer the other to be true, too.
A very common form of sentence in English which can be symbolized with the arrow is
an "if ..., then ..." sentence. What comes after the "if" is the antecedent, while what comes
after the "then" is the consequent. For instance, we could perfectly sensibly say that
snowy weather implies cold weather by saying "if it's snowing, then it's cold." There are
various simple modifications we can make to this sentence without changing its meaning.
Sometimes the "then" just gets dropped: "If it's snowing, it's cold". Sometimes the order
is reversed: "It's cold if it's snowing." The trick, simply, is to look for the "if". It indicates
an antecedent – but watch out for an "only" before the "if"!
Why? Because there is another connective which you need to be careful to distinguish
from "if ... then ...". The connective "...only if ..." is the converse of "if ... then ...". That
is, to assert that A implies B we can say "if A, then B", but we could equally well say "A
only if B", or "Only if B, A". It is true that it will snow only if it is cold, but it is not true
that it is cold only if it snows. You can infer coldness from snowiness, but not the other
way around.
The relationship between "if" and "only if" can be confusing. My trick for
remembering that "only if" comes before a consequent is this: Many logical systems have
one more connective than ours does, namely a double arrow ⇔. The sentence A ⇔ B
means exactly the same thing as (A ⇒ B) & (B ⇒ A) (which makes a lot of sense when
you think about it). When you read a sentence with a double arrow, you typically say "A
if and only if B". Since the "if" comes right before the B, the B ⇒ A half is the "A if B"
part. So "A only if B" must be the other half, i.e. "A only if B" can be rewritten by
replacing the "only if" by an arrow. So, "if" is an antecedent indicator, while "only if" is
just the arrow written in English!
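If it helps, the snow/cold point can be checked in the same sketchy style as before.
(Reading the arrow as "not-antecedent or consequent" is a simplifying assumption which
these notes return to a little later.)

    # "It snows only if it is cold" should survive a case where it is
    # cold but not snowing. The material reading of the arrow is a
    # simplifying assumption.
    def implies(x, y):
        return (not x) or y

    S, C = False, True     # a cold day with no snow
    print(implies(S, C))   # True: S => C, the correct translation
    print(implies(C, S))   # False: C => S wrongly rules this case out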
We also speak in English about "necessary and sufficient conditions". When we talk
about these we often mean much more than just that one thing implies another, but we do
mean at least that much, so for the purposes of symbolizing in SL we can use an arrow to
represent claims involving necessary and sufficient conditions.
So, if I say that striking a match in a room filled with gasoline fumes is a sufficient
condition for causing an explosion, what am I claiming? Well, at least that if you strike a
match in a room full of gasoline fumes, then there will be an explosion. The sufficient condition
is something which allows you to infer the thing it is a sufficient condition for. So the
sufficient condition can be placed in the antecedent position.
As usual, there are various modifications of the wording of English language
sentences which assert that one thing is a sufficient condition for another. We might say,
for instance, "Giving Dave chocolate is sufficient to get him to like you." What this says
is that being a person who gives Dave chocolate is a sufficient condition for being a
person who is liked by Dave. Which, again, can be symbolized in the same way as we
would symbolize "If you give Dave chocolate, then Dave likes you".
Necessary conditions are quite a different kettle of fish. It's certainly not necessary to
light a match in a room filled with gasoline fumes to cause an explosion. We could use
dynamite instead, for instance. Sufficient conditions are not always necessary conditions.
Necessary conditions don't need to be sufficient conditions, either. To have a
dream, it is necessary to be asleep. But being asleep isn't sufficient, because sometimes
we sleep without dreaming. However, we do know that if someone is dreaming, then they
are asleep. Which is just to say that if S is a necessary condition for D, then S can
sensibly be put as the consequent of a sentence with D as antecedent. In other words, we
can symbolize the claim that S is a necessary condition for D as D ⇒ S.
There are other English language phrases that indicate implicational relationships, and so
which we can symbolize using the arrow, such as "provided that ...". However, you
probably have the general idea in hand by now.
So far our language SL has a few different sorts of things in it. We have, first, the
sentence letters which are to be used to stand for simple declarative sentences. But we
also have the symbols &, v, ⇒ and ¬. We will add one more symbol below. These
symbols are called connectives.
In the case of &, v and ⇒, this is a very natural name because they connect together
pairs of sentences, two at a time. If we have only two sentences which need to be
connected by one of these connectives, then we can just write A & B because there can't
be any confusion about what is linked with what. But the rules of SL are that these
connectives can only connect sentences two at a time, with the result being another
sentence. But sometimes we will want to be able to symbolize sentences which include
more than two sentence letters. When we do so, we will need to include parentheses or
brackets to show how these sentences are grouped together into sets of two. So it is not
right to write A & B & C, because the rules of SL say that & can only group things
together two at a time. So you will need to write either (A & B) & C or A & (B & C).
There won't always be something in the sentence itself that tells you which of these to
choose (e.g., "He's overbearing and loud and stupid" doesn't really suggest that we should
group one way or another), but choose you must.
In other cases, how you group things can make a big difference. For instance, (A &
B) ⇒ C means something very different from A & (B ⇒ C). The first might translate "If
Bob is Swiss and male, then he will have to serve a hitch in the Swiss army", while the
second would say, giving the same meanings to A, B and C, "Bob is Swiss, and if Bob is
male then Bob will have to serve a hitch in the Swiss army." The second actually claims
that Bob is Swiss, while the first makes no commitment on that matter at all.
So, the lesson is that whenever you have an occurrence of &, v or ⇒, you have to ensure
that it is unambiguous which pair of sentences is connected together by that occurrence.
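To see concretely how the two groupings of the Swiss army example come apart, here is
another sketch (same caveats about Python and the simplifying reading of the arrow):

    # With A false, the groupings differ: the first conditional holds
    # vacuously, while the second claims A outright.
    def implies(x, y):
        return (not x) or y   # simplifying reading of =>

    A, B, C = False, True, False   # suppose Bob is not Swiss
    print(implies(A and B, C))     # True:  (A & B) => C
    print(A and implies(B, C))     # False: A & (B => C)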
Matters are slightly different with the negation symbol. We have already seen that ¬A &
B means something quite different from ¬(A & B). The rule we are implicitly working
with when we consider cases like this is that we will read the negation symbol as
negating the shortest sentence it can intelligibly be thought to be negating in the context
in which it occurs. In the first case it makes perfectly good sense to read the sentence as
one where A is negated, then joined to B by &. In the second case we can't read the
sentence that way, because '¬(A' doesn't make any sense at all. We don't find a
grammatical sentence for that '¬' to negate until we reach the second parenthesis. So we
can tell that in this case what is negated is (A & B). As a slogan we can say that
"negations take the shortest possible scope". What this means is that if a negation symbol
is followed by a left parenthesis or bracket, the sentence negated is the one which ends
with the corresponding right bracket. (And, of course, we need to be sure when we write
sentences of SL that the parentheses and brackets are introduced in matched pairs!)
We have seen that the language SL gains some efficiencies compared to English by
ignoring many subtleties of meaning, and so treating various different connectives of
English as though they were just the same. But there are more efficiencies to be gained,
because we can use our four connectives in various combinations to express things that
can't be expressed with just one connective, and for which English has special purpose
connectives.
An important example of this is the English connective "neither ... nor ...". What does it
mean to say that Bob is neither smart nor stupid? There are actually two natural ways to
look at this:
We might take it as denying that Bob is either smart or stupid. That is, we might notice
that the original sentence means the same thing as "it is not the case that either Bob is
smart or Bob is stupid." On the other hand, many people find that it is more natural to
think of the first sentence as claiming that Bob isn't smart, and Bob isn't stupid (either).
These two different ways of looking at the "neither ... nor ..." sentence would result
in two different symbolizations.
S: Bob is smart.
D: Bob is stupid.
The first reading gives the symbolization ¬(S v D), while the second gives ¬S & ¬D.
In fact, these interpretations are logically equivalent. We not only can capture "neither ... nor"
without introducing a new connective into SL, we can do it in more than one way.
Some would regard this ability to capture these connectives in more than one way as a sign
that we have an overabundance of resources in SL.
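You can confirm the equivalence by checking all four cases, in the same sketchy style as
before (this anticipates the case-by-case methods of the next lecture):

    # Check all four cases: not-(S v D) and (not-S) & (not-D) agree.
    from itertools import product

    for S, D in product([True, False], repeat=2):
        assert (not (S or D)) == ((not S) and (not D))
    print("the two symbolizations never disagree")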
You might have noticed a certain similarity in some of the examples we used in our
discussion of sentences we symbolized with the arrow to some of the sentences we
symbolized using v. After all, isn't the claim that "Bob will fail unless he studies" just the
claim that, for Bob, studying is a necessary condition for passing the course? So, perhaps we
could write the sentences having to do with necessary conditions using v, instead?
Consider our example of sleep being a necessary condition for dreaming. This could be
stated more or less equivalently in English by saying that you are not dreaming unless you are
asleep. And so, given our rules for translating "unless", we would get as a symbolization
¬D v S instead of D ⇒ S.
As a matter of fact, given certain simplifying assumptions we will mention in the next
part of these notes, these two sentences of SL are equivalent, just as are the two versions
of "neither ... nor ...", and so either would be a reasonable way to translate the claim that
sleep is a necessary condition for dreaming.
This sort of overabundance of options for translating English into SL has to do in part
with the fact that (given those simplifying assumptions just mentioned), we could have
gotten by with fewer than the four connectives we have introduced. Indeed, anything that
can be said using these four symbols can be expressed using negation and any one of the
others. So, if our goal was to develop a really efficient language that could express lots of
valid arguments with the bare minimum of resources, we could have used a version of SL
with fewer connectives. We have not done so because some of the translations in that
case seem very artificial. But it is worth pointing out that this is a judgment call, and
there is a tradeoff between efficiency and "naturalness". We could have added
"naturalness" by adding another connective which means "neither ... nor...", at the cost of
some efficiency.
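Here, in the same sketchy style, is one instance of the claim that negation plus one other
connective would have sufficed:

    # "A v B" agrees with "not((not A) & (not B))" in every case.
    from itertools import product

    for A, B in product([True, False], repeat=2):
        assert (A or B) == (not ((not A) and (not B)))
    print("v is definable from negation and &")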
You can get plenty of practice translating English into SL by doing the exercises on pp.
221-22, 234-36, and 249-50 of the text (in the sixth edition, on pp. 257-59, 266-69, 276-78).
NOTE: The textbook uses a horseshoe symbol ⊃ instead of ⇒, an annoying and hard-to-read
dot instead of &, and a hyphen - instead of ¬. You might run into those symbols in
other books, too, though they are by now becoming rather outdated. However, knowing
that the symbols just mean the same thing as ours is useful for checking your translations
against the ones for the answered exercises in the textbook. NOTE ALSO that we haven't
talked about truth-tables or "shortened truth tables", so you can't do those parts of the
exercises which ask you to make use of them.
Example: If physics is easier than math, then if John can do math he can do physics.
If John has no sense for the fit between abstract principles and the physical world,
he won't be able to do physics. John can do math, but he has no sense for the fit
between abstract principles and the physical world. Physics, therefore, is not easier
than math.
Reasoning ... i.e., the next paragraph is not yet part of the answer.
We need, first of all, to figure out what sentence letters we need. The conclusion is that
physics is not easier than math, so we'd better be able to symbolize that! But recall that if
a statement is negative we want to symbolize it using ¬, so we'll want a letter to represent
"Physics is easier than math". Conveniently, that is the antecedent of the first sentence.
Hmm, the consequent of that sentence seems to be "if John can do math, he can do
physics", but that's again a complex sentence. So we will need sentence letters for both
parts of that. Okay, in the next sentence we have yet another arrow sentence. The
antecedent runs from "John" to "world", because the comma tells us the consequent is
coming. What about that "and" in the antecedent? Does it indicate two sentences joined
by &? NO, because to say that John has no sense for the fit between X and Y does not
mean the same thing as "John has no sense for the fit between X" and "John has no sense
for the fit between Y". Indeed, that makes no sense at all! Also, that sentence is a
negation, so we'd better be sure to put a non-negated version of it as the interpretation of
the sentence letter. So we need one sentence letter for the antecedent of that sentence. Now
what about the consequent? "He won't be able to do physics" is certainly close enough in
meaning to "it is not the case that John can do physics" that we can use the same sentence
letters in both, and we already need one for "John can do physics". And the next sentence
doesn't need any more sentence letters, either.
So, our answer here is going to need to be (listing first what each letter stands for):

E: Physics is easier than math.
M: John can do math.
P: John can do physics.
S: John has a sense for the fit between abstract principles and the physical world.
E ⇒ (M ⇒ P)
¬S ⇒ ¬P
M & ¬S
Therefore,
¬E
Remark: If it isn't clear already, note that it doesn't matter what letter you use to stand
for any given statement, as long as you make clear what letter stands for what statement.
However, it can be convenient to choose a letter which helps you keep straight in your
mind what you're using the letter for. Of course, remember when doing this that different
statements must have different letters, so don't use S for two sentences just because S is
the first letter of the key word in each!
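As a preview of the next lecture (and only as a sketch: we have not yet introduced any
official method for checking validity, and the material reading of the arrow is a
simplifying assumption), the symbolized argument can be checked by brute force against
the definition of validity:

    # Hunt for a case where all premises are true and the conclusion
    # false. The material reading of => is a simplifying assumption.
    from itertools import product

    def implies(x, y):
        return (not x) or y

    found_counterexample = False
    for E, M, P, S in product([True, False], repeat=4):
        premises_true = (implies(E, implies(M, P)) and
                         implies(not S, not P) and
                         (M and not S))
        if premises_true and not (not E):   # conclusion ¬E false means E true
            found_counterexample = True
    print(found_counterexample)  # False: no counterexample, so valid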
Reasoning: First sentence has "provided that" in it, so it'll be some sort of implication,
and what comes after "provided that" will be the antecedent. But that has "neither ... nor
...", so the antecedent will be complex, involving one letter for "swimming stresses the
heart" and one for "Swimming injures joints." No problem for the next sentence ... it's
just a long conjunction. As for the remainder, the only remotely tricky bit is seeing that
"Swimming regularly will improve a person's general health" and "swimming regularly
will improve health" are just two ways of expressing the same idea, and so we can use
one sentence letter for it in each of the last two sentences.
(¬J & ¬H) ⇒ E (Or, if you prefer the other reading of "neither...nor...", ¬(J v H) ⇒ E)
¬J & ¬H
E⇒I
Therefore,
I
Lecture 11
In this lecture we will carry on with our discussion of SL. So far we have considered
ways to translate certain sorts of sentences of English into SL. In this lecture we will
investigate a couple of different ways of checking the validity of arguments which
have been translated into SL.
Proofs in SL
Suppose someone presents an argument to you, and you say "Wait a minute, I don't see
why that follows." What is the likely response? Sometimes the arguer will think that she
has simply left "too big of a gap" between her premises and her conclusion, so she will go
step by step: "Well, do you see why this follows from the premises? But then from that it
follows that ... ." Eventually she will try to make sure that there are a series of small steps
leading from the premises to the conclusion, where at each stage it is clear that the step
taken follows validly from what has come before. In short, the arguer will try to fill in the
gap in her argument by giving you a proof, or a derivation, of the conclusion from the
premises.
What we are going to introduce here is a procedure for giving proofs in SL. In the sort of
case described above, the arguer relied on it being obvious to both of you that each step
was a valid one, but otherwise what sorts of steps could be made was left entirely
unspecified. However, just as one of our goals in setting out the language of SL was to
use only a small number of basic connectives, in the name of economy, we are also going
to have only a few basic sorts of rules that can be used in constructing proofs. As it turns
out, even though we have only a few rules it can be proved that for every valid argument
of SL it is possible to prove its conclusion from its premises using our set of rules, so in a
sense having only a few rules is no restriction at all!
First a couple of conventions about how to write down proofs will help us keep straight
what is going on. Each sentence in a proof will appear on a line, and each line will be
numbered. Moreover, off to the right we will write a "justification" which tells us why
what appears on that line is legitimately in the proof. So we will usually begin by listing
all the premises, with numbers, and will write "premise" as the justification.
Example: If you are asked to give a proof for the argument with premises A & B and C
and conclusion A & C, you would begin
1. A&B Premise
2. C Premise
Then we would try to use our rules to get from there to a line which has A & C on it.
Now, when we present rules of inference, you should understand them like this: the
bottom line of the rule as it is presented is what you can add to a proof provided you have
all the things listed above that line already in your proof.
Now, think back to what we said A & B means. It just means that both A and B are true.
What can you infer from that? Well, obviously you can infer that A is true, and you can
infer that B is true. So our first two rules of inference will be these:
n. A & B
A n, & elimination

n. A & B
B n, & elimination
We write the letters in bold here to indicate that there is nothing special about our choice
of A and B to represent the rule. What these rules tell us is that if you have any &
sentence on some line of your proof (it's line n in the statement of the rule), you can write
the sentence from either side of the & on a later line of the proof, and your justification
will say that you used the rule "& elimination", and you applied it to line n.
That's what we can infer from A & B. But what do we need to know before we can
conclude A & B? Well, if A & B means that A is true and B is true, then what we need to
know is that A is true and that B is true. In other words, the following inference rule is
obviously legitimate.
m. A
n. B
A&B m, n & introduction
What this tells us is that if we have one line of the proof on which one sentence appears
and another line on which a second occurs, we can put them together with & between
them. Notice that our justification for this rule requires that we mention two line numbers.
We can now complete the proof from our earlier example, where we were asked to prove
A & C.
example, redux:
1. A&B Premise
2. C Premise
3. A 1, & elimination
4. A&C 3, 2 & introduction
The rules so far are pretty straightforward. However, they are also a bit weak! We need to
be able to prove things which include other connectives besides &!
Recall next what the ⇒ means. We said in the last lecture that A ⇒B means that, if we
find out that A is true, then we can infer B. Well, that means that the following rule of
inference is obviously acceptable:
m. A⇒B
n. A
B m, n modus ponens
This is the most famous and most basic inference rule in all of logic. It is so well known
by its Latin name, modus ponens, that you might as well learn it by that name, too. Notice
that, once again, the justification for the sentence we are allowed to add requires appeal
to two line numbers.
Example:
1. A ⇒ (B & C) Premise
2. D&A Premise
3. A 2, & elimination
4. B&C 1, 3 modus ponens
5. C 4, & elimination
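If you find it helpful to think of the rules mechanically, here is a sketch of the three
rules so far as little checking functions. This is purely illustrative: the representation
of sentences as nested tuples is my own device, not anything official.

    # Sentences are letters like 'A' or tuples such as ('&', X, Y) and
    # ('=>', X, Y). Each function refuses to apply unless its rule
    # genuinely licenses the step.
    def and_elim(line):
        op, x, y = line
        assert op == '&'               # need an &-sentence
        return [x, y]                  # either side may be written down

    def and_intro(line_m, line_n):
        return ('&', line_m, line_n)   # put two proved lines together

    def modus_ponens(line_m, line_n):
        op, x, y = line_m
        assert op == '=>' and line_n == x   # need X => Y together with X
        return y

    # Re-checking the example proof above:
    line1 = ('=>', 'A', ('&', 'B', 'C'))  # premise
    line2 = ('&', 'D', 'A')               # premise
    line3 = and_elim(line2)[1]            # A,     by & elimination
    line4 = modus_ponens(line1, line3)    # B & C, by modus ponens
    line5 = and_elim(line4)[1]            # C,     by & elimination
    print(line5)                          # prints: C
    print(and_intro(line3, line5))        # ('&', 'A', 'C'), as in the
                                          # earlier example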
The next two rules are ones we discussed earlier in the course when we talked about the
acceptability of premises, and we noted that sometimes we accepted premises conditionally.
We will now look at the formal versions of the two sorts of reasoning we described in that
discussion.
The first sort of case we described was one where we were willing to grant someone a
premise as a supposition, provided the conclusion was conditional in form. For instance,
consider someone who argues as follows: "Suppose there is no God. Then the burning bush
didn't speak to Moses. So the Bible contains falsehoods. Therefore, if there's no God, then the
Bible contains falsehoods." The important thing to notice about this argument is that the
conclusion is a conditional statement. The argument does not purport to establish that the
Bible does contain falsehoods, only that if there's no God, then the Bible contains falsehoods,
which is a rather weaker claim. And, of course, the ⇒ was added to SL precisely so we could
symbolize sentences like the conclusion of this little argument.
To represent an argument of this sort, we need some way to indicate that something is being
supposed. We will do it by indenting what is supposed, and all of the lines which depend on
that supposition. If we wanted to represent the above argument using this convention, we
would need to indent the "suppose there is no God" line, of course. But then we also would
need to indent the line about the bush not speaking to Moses (which is inferred from the
supposition), and the line about the Bible containing falsehoods (which is inferred from the
line just mentioned). However, when we get to the conditional claim, we could stop indenting.
If the argument is a good one, it will have succeeded in showing that (whether there is a God
or not) if there were no God, then ... . That conditional claim does not depend on the
supposition. So our rule in SL goes like this:
m. A assumption
...
n. B
A⇒B m-n, conditional proof
The first line, A, is our supposition. We need to justify every line on a proof, so we
justify it by simply noting that we are assuming it! The three dots just indicate that some
number of steps of the proof intervene between A and B, but anyway one way or another
we correctly prove B, using (or not) the assumption that A. If we can "legally" get to B
after beginning with the assumption that A, then we have shown that if A turns out to be
true, B must be true, and so we can infer that A ⇒ B is true. Notice that in the
justification we list a whole range of lines from the proof, running from the assumption
to the sentence that becomes the consequent of the proved conditional.
I think it is useful to think about assumptions as though you are taking out a loan. You
have a very generous bank which will allow you to borrow whatever assumption you
want. But you do have to pay the loans back before you're through. Each time you make a
new assumption, you need to indent a step. The only way you can pay the loan back, that
is, the only way you can move back one step to the left (i.e. un-indent) is by applying a
rule like conditional proof which allows you to move that way. Note that it is not
uncommon in proofs to see more than one assumption.
Let's do an example that will illustrate this and a few other useful points about proofs.
Suppose you are asked to show that from the single premise (A & B) ⇒ C you can prove
A ⇒ (B ⇒ C).
1. (A & B) ⇒ C Premise
2. A Assumption
3. B Assumption
4. A&B 2, 3 & introduction
5. C 1, 4 modus ponens
6. B⇒C 3-5 conditional proof
7. A ⇒ (B ⇒ C) 2-6 conditional proof
Things to notice about this proof: a) Each time we made an assumption, we indented a
step further. b) We only moved one step to the left each time we applied conditional
proof. c) To finish the proof, we had to find our way back to the original, leftmost
column. It's no use proving that the conclusion you want holds under some assumption ...
or else you could just assume that very conclusion and prove it under that assumption,
and there's not a lot of use in doing that! It's an instance of begging the question, in effect.
MORE SIGNIFICANT POINT: Notice that at lines 4 and 5 we used information from
lines 1 and 2 in our justification. You can always appeal to lines where what you are
currently assuming includes everything that was assumed when that earlier line was
added to the proof. So, at lines 4 and 5 we have made two assumptions, namely the ones
at lines 2 and 3. So anything proved under the assumptions at lines 2 and 3 can be used
under assumption 3. BUT WE COULD NOT at line 7, instead of applying conditional
proof, have put
The problem with this is that line 3 is only proved if you assume B. But that assumption
has been "payed back" by using conditional proof at line 6, so we can no longer use the
things which were acquired while you had B available. (If it helps, think of the things
which occur in the column with B as things you rented with the borrowed money. You
had to give them back, but by working with B you managed to acquire something of
value, namely B ⇒ C at line 6.)
Finally, what sort of strategic thinking would enable you to come up with a proof like this
one? Well, the thing that you want to prove is an arrow sentence with A as antecedent, so
a conditional proof starting with A as assumption is a good bet for a way to prove it. But
to get A ⇒ (B ⇒ C) using conditional proof, you need not only to start by assuming A,
but will eventually need to prove B ⇒ C under that assumption. That's a conditional, too,
so another conditional proof is an obvious thing to try. So you need to assume B. From
there, it's just a matter of noticing that you have to get C from somewhere and doing what
you can to get it.
You will recall that the other sort of proof that required a supposition was something we
called a reductio argument. You will recall that the idea of a reductio argument is that
you are trying to show that something is false. To do so, you say something like "well,
suppose it were true ...", and go on to show that something you already know to be false
follows. It should be clear what the supposition is going to look like in our rule, because
we've just done the same thing in the case of conditional proof. In formal logic, we are
very strict about what counts as something "already known to be false." We insist that
you be able to prove an absurdity, in particular a contradiction, which is to say that for
some sentence A you need to be able to prove both A and ¬A. So, the logical version of
reductio is sometimes called by the longer Latin name reductio ad absurdum. We will
simply call it "reductio," though.
m. A assumption
...
B
...
o. ¬B
¬A m-o, reductio
The form of this argument is very much like conditional proof, and all the same
qualifications apply to its use.
There is one more rule that makes use of assumptions. In natural language arguments you
will often enough run up against people who will argue by presenting you with a
dilemma. For instance, someone might say "Look, fella, you're never going to be rich.
Either you're going to be a philosopher, or you'll quit and try something else.
Philosophers don't make any money, God knows, so if you stay a philosopher you'll
never get rich. But if you do something else, you'll be doing something you're no good at,
so even if it's something that pays other people well you're not going to make any money
doing it. So, either way, you won't get rich."
PHIL 145 LECTURE NOTES 81
Once again, we're not interested in whether the argument is any good. We are
interested in its form. The arguer here starts by asserting an "either A or B" premise. She
then constructs two subarguments. In effect, she does two conditional proofs, showing that
the conclusion ("you'll never get rich", call it C) follows from the assumption that A, and
that C follows from B. Then she concludes that C. So the pattern of the argument is this:
m. AvB
n. A assumption
...
o. C
p. B assumption
...
q. C
Things to notice about this rule: Notice that the premise A v B is not an assumption. The
rule is a very complicated looking one, but it's not so hard to use in practice. You have an
A v B sentence, and you need to prove C. Well, if C follows from A and it follows from
B, then C can be proved. Notice that the rule allows you to get C where you need it,
against the left margin. It is, in fact, another rule that lets you "pay back" the loans you
take out by making assumptions. But notice that the loans you take out, if you want to use
constructive dilemma, must be exactly the sentences which show up on either side of the
v-sentence. Notice, too, that there are three things, in addition to the name of the rule that
you need in your justification: the number of the line on which the v-sentence occurs, and
two ranges of numbers, one for each subargument.
Here is an argument which puts together the constructive dilemma rule and the reductio
rule. Suppose you are asked to show that ¬C can be proved from the premises
A ⇒ ¬B, D ⇒ ¬B, C ⇒ B, A v D. We do this by showing that we can get ¬C both from
A and from D as assumptions. See if you can figure out how to do this before you look at
how I do it in the example:
1. A ⇒ ¬B Premise
2. D ⇒ ¬B Premise
3. C⇒B Premise
4. AvD Premise
5. A Assumption
6. C Assumption
7. B 3, 6 modus ponens
8. ¬B 1, 5 modus ponens
9. ¬C 6-8 reductio
10. D Assumption
11. C Assumption
12. B 3, 11 modus ponens
13. ¬B 2, 10 modus ponens
14. ¬C 11-13 reductio
15. ¬C 4, 5-9, 10-14 constructive dilemma
PHIL 145 LECTURE NOTES 82
This proof is rather long, and at first glance it looks a lot more complicated than it really
is. We used the strategy of employing a constructive dilemma to get our conclusion, and
so we needed to do two subarguments, one assuming A and one assuming D. We had to
get the conclusion, ¬C, from each assumption. Well, how do you prove that something is
not the case? One way is to use reductio, and that's what we decided to do in each case.
And, conveniently, we could get a contradiction pretty readily on each assumption.
Notice that in the second subargument (10-14), we had to assume C, then prove B
and ¬B all over again. In particular, when we wanted to justify line 13, we had to appeal
to lines 10 and 2, not lines 1 and 5 as we did above. The reason is that at line 13 we are
no longer assuming A. That loan has been "paid back", and we are now working with a
different group of "loans", namely D and C.
It is probably not a good idea to become overly reliant on "ordinary language"
versions of these arguments. The benefit of formalization is that the correctness of these
arguments can be seen to be independent of the particular subject matter at hand. But
perhaps one more ordinary language argument will help some students to see what is
going on in this argument. So let 1 be the sentence "If it's snowing, it isn't hot", 2 be "If
it's sleeting, it isn't hot", 3 be "If I have a sunburn, then it's hot." Then 4 is "either it is
snowing or it is sleeting." From 5 to 9 we reason as follows: If it is snowing, then I can't
have a sunburn ... for suppose it is snowing and suppose I have a sunburn: then it is not
hot (since it is snowing and 1 is true) and it is hot (since I have a sunburn and 3 is true). It
can't both be hot and not hot, so I can't have a sunburn. We then reason similarly under
the supposition that it is sleeting and show I can't have a sunburn. Since it's either
snowing or sleeting, I can't have a sunburn.
There are three more rules in our system. The first is just a convenience. Sometimes to
get a reductio or a conditional proof to work, it is useful to be able to rewrite something
that we already know. Obviously, this rule is valid ... if you know A is true, it's harmless
to infer that A is true! So the rule is this:
m. A
A m, repetition
And here's an example of where it is useful. Suppose we want to prove that from the
premise C, A ⇒ C can be proved. A conditional proof is in order.
1. C Premise
2. A Assumption
3. C 1, repetition
4. A⇒C 2-3, conditional proof
(If it seems strange that we should be able to prove A ⇒ C from C, recall what the arrow
means. It just means that if you find out that A is true, you can safely infer that C is true.
Well, if C is something you already know to be true, then you can infer that it is true in
any case ... including the case where you find out that A is true.)
The second one is a rule which at first seems odd, until you remember what v means. A v
B just means that at least one of A and B is true. Well, suppose you already know that A
is true. Then no matter what other sentence you choose, you know that at least one of A
and that other sentence is true (namely, A). So we have the following rule.
PHIL 145 LECTURE NOTES 83
m. A m. A
AvB m, v-introduction BvA m, v-introduction
All of the rules we have considered so far are rules which show up, in one form or
another, in nearly every modern logical system. * The next rule is one which is
characteristic of something called classical logic, and it is justified in classical logic
because of the
Classical logic is sometimes called two-valued logic for this reason, i.e., it assumes that
there are exactly two truth-values, and that each sentence has exactly one of them.
Now, suppose that every sentence is either true or false, and not both. Now suppose that
¬¬A is true. That means that it is not the case that ¬A is true. This means that ¬A is false,
since it has to be either true or false. But what about A, then? Well, if A was false, then it
would not be the case that A, so ¬A would have been true, so we know that A isn't false.
But since it has to be one or the other, that means A has to be true. So if ¬¬A is true,
given the fundamental assumption, A must be true, too. So our final rule is this one:
n. ¬¬A
o. A n, ¬¬-elimination.
We often call this rule "double negation elimination", too, rather than using the symbols.
We briefly mentioned, when discussing vagueness in earlier lectures, that there are some
reasons for doubting that the fundamental assumption of classical logic applies in the
case of ordinary languages. There are important systems of formal logic which do not
make this assumption, notably what is called constructive logic. But while proofs in
systems which don't make this assumption are no more difficult that the proofs in
classical logic, generally speaking, if we don't make this simplifying assumption the
discussion in the next section (where we talk about truth values) would need to be much
more difficult. We will return to consider this assumption and its relationship to
vagueness in Lecture 12.
We have introduced only nine rules here (if you count repetition as a rule), but once you
make the simplifying assumption you can prove that every valid argument that can be
written in the language of SL has a proof of its conclusion from its premises using only
these nine rules. That is, this system of rules is complete, given the simplifying
assumption of classical logic. So, since one of our goals was to have a formal system
which was very efficiently organized, we have to say that this has been achieved.
Now, with a bit of practice these rules are quite easy to work with. But there are lots of
other patterns of argument that are used very often in ordinary arguments. Rather than
adding them as separate rules to our official list, though, you can think of them as what
are sometimes called derived rules of inference. That is, if you see that an argument has
*
Some systems, called relevance logics, try to capture a more restricted notion of validity than the standard
one. They will not have v-introduction or repetition in quite the forms in which they appear here.
PHIL 145 LECTURE NOTES 84
the form of something you've already proved to be valid, you don't need to go through the
trouble of proving it again. You know that the result is provable in SL, even though you
do not actually prove it in SL. Here are a couple patterns of argument familiar enough to
have their own names, and proofs which show them to be valid in SL.
1. AvB Premise
2. ¬B Premise
3. A Assumption
4. A 3, repetition
5. B Assumption
6. ¬A Assumption
7. B 5, repetition
8. ¬B 2, repetition
9. ¬¬A 6-8 reductio
10. A 9, ¬¬-elimination
This is a somewhat lengthy proof. Strictly speaking, every time we run into a case of this
sort, we would need to run through the whole proof. But, for practical purposes, it is
enough to know that we could give a proof of this sort if we needed to and can simply
appeal to the derived rule of disjunctive syllogism when giving "proofs".
Here's another, simpler example. See if you can find a proof of your own before you look
at mine: (Hint: to prove a negation, reductio is often the best way to go.)
1. P⇒Q Premise
2. ¬Q Premise
3. P Assumption
4. Q 1, 3 modus ponens
5. ¬Q 2, repetition
6. ¬P 3-5, reductio
m. A
A m, repetition
___________________________________
n. A&B A&B
A n, & elimination B n, & elimination
___________________________________
PHIL 145 LECTURE NOTES 85
m. A
n. B
A&B m, n & introduction
___________________________________
m. A⇒B
n. A
B m, n modus ponens
___________________________________
m. A assumption
...
n. B
A⇒B m-n, conditional proof
___________________________________
m. A assumption
...
B
...
o. ¬B
¬A m-o, reductio
___________________________________
n. ¬¬A
o. A n, ¬¬-elimination.
___________________________________
m. AvB
n. A assumption
...
o. C
p. B assumption
...
q. C
m. A m. A
AvB m, v-introduction BvA m, v-introduction
___________________________________
PHIL 145 LECTURE NOTES 86
For exercises, look at the list of valid argument forms listed on p. 244 (p. 279 in the sixth
edition) of the textbook. All of these are either rules of SL or are provable as derived
inference rules in SL. Bearing in mind that the textbook uses a dot for &, ⊃ for ⇒, and -
for ¬ (and so needs to add superfluous brackets when two negation signs appear one after
another), give proofs in SL of several of these rules.
Examples:
1. PvP Premise
2. P Assumption
3. P 2, repetition
4. P Assumption
5. P 4, repetition
6. P 1, 2-3, 4-5 constructive dilemma
1. ¬(P v Q) Premise
2. P Assumption
3. PvQ 2, v Introduction
4. ¬(P v Q) 1, Repetition
5. ¬P 2-4, Reductio
6. Q Assumption
7. PvQ 6, v Introduction
8. ¬(P v Q) 1, repetition
9. ¬Q 6-8, Reductio
10. ¬ P & ¬ Q 5, 9 & Introduction
Proving Invalidity
We have just looked at a system of rules for SL which gives us the ability to show that
certain arguments are valid. Every valid argument which can be represented in SL has a
proof using only the nine official rules we listed. Of course, the fact that such a proof exists
doesn't necessarily mean that we're going to be able to find it! Sometimes figuring out what
the proof is can be rather difficult! But it is our limitation, and not a limitation of the rules,
which prevents us from filling in the gaps in an SL argument if the argument is valid.
But what about showing that an invalid argument is invalid? It's not enough to simply
say, "hey, I couldn't prove the conclusion from the premises", because there are two
possible explanations for that: (1) the conclusion really isn't provable from the premises;
(2) I'm not clever enough to find the proof, but there is one. So what can we do?
The way to answer this question is to begin by considering again what it means to say an
argument is valid. An argument is valid if there is no possible case in which both (1) all
its premises are true; and (2) its conclusion is false. We can put the point equivalently as
follows. First, we define a counterexample for an argument to be a case in which all the
premises of the argument are true and the conclusion is false. We can then say that an
argument is valid if, and only if, it has no counterexamples.
PHIL 145 LECTURE NOTES 87
It follows that to show that an argument is invalid, what we need to do is show that there
is a situation in which all the premises are true and the conclusion false – we need to find
a counterexample. What we are about to describe is a systematic way of discovering a
counterexample for an argument of SL, if one exists.
The method we are about to describe relies on the fundamental assumption of classical
logic, which, you will recall, is that every sentence is either true or false. And, you will
recall, SL is a system of logic which is designed only to capture those logically valid
patterns of argument which depend on the properties of sentential connectives like 'and,'
'but,' 'implies,' 'or,' 'unless,' 'neither ... nor ...', and so on. We haven't yet said in much
detail what we mean by saying we are considering connectives "like" these. We have said
so far is that the things which are "connected" by these connectives are declarative
sentences which can be either true or false. But there are other sentential connectives
which are not of the same sort as 'and' and its kin.
What is special about these connectives is that they are, in a special technical sense,
functions. This is a term borrowed from mathematics, where a function is something like,
say, +, where when you feed in values (in the case of +, you feed in two numbers), you
get output (another number, the sum of the input numbers). What makes it a function is
that the value of the output is completely determined by the value of the input (and that
every relevant sort of input generates some output). We can think of &, v, ⇒, and ¬ as
functions. What you feed in are truth values (i.e., either true or false), two each, in a
particular order, as input for the first three, and one for the last. You get another truth
value as output, and this output is entirely determined by the input. So SL is often called
truth functional logic and valid arguments of SL are called truth functionally valid. (In a
more thorough introduction to logic, we would go on to prove that every two-valued truth
function can be expressed in terms of our small stock of connectives, so the small size of
our formal vocabulary doesn't limit our ability to express truth functional arguments.)
The cases (or situations) we must consider determining whether an argument of SL has a
counterexample just possible combinations of truth values for the sentence letters that show
up in the argument. If one of these possible combinations makes all the premises true while
the conclusion is false, that shows that the argument is not truth functionally valid.
First, notice that every sentence of SL has either no sentential connectives (in which case it
is just a capital letter), or it has one connective which is the "main operator" of that
sentence. That is, there is always one connective which needs to be given highest priority
when figuring out what the sentence means. That is, if what we are dealing with is a
grammatical sentence of SL, that sentence must be either the joining together of two other
sentences by one of &, v, or ⇒ (though each of these sentences could be very complicated
itself), or it is a single sentence which has been negated, i.e. it has had ¬ put in front of it. If
all the connectives are two-place connectives, it will be obvious which are the two
sentences that are joined by the main operator – the parentheses tell us all we need to know.
PHIL 145 LECTURE NOTES 88
In
((P v Q) v R) & S
for instance, the (P v Q) v R forms one sentence, and that sentence is connected to S by
&, which is therefore the main operator.
It can be trickier if negations appear in a sentence, until you get the hang of things.
Compare
¬(A v B) ¬A v B
Here the first sentence is a negation of (A v B), and so the main operator is ¬. In the
second it is only A that is negated, and ¬A is linked to B by the v, so the main operator of
this sentence is v. Remember that we are reading negation signs so they have the shortest
possible scope. So a sentence is a negation, i.e. has a negation sign as its main operator,
only if the shortest possible scope for that sign is the whole sentence.
Now, we need to consider when sentences with each sort of operator are true and when
they are false. But that's not hard to do, given our assumption that every sentence is either
true or false.
&: What does A & B mean? You will recall that it means that both A and B are true.
So we will want that sentence to be true if and only if both of A and B are true. If
one of them is false, then the sentence joining them together with & must be false.
So A & B is true if both A and B are true, and it is false otherwise.
v: This one is easy, too. You will recall that an v-sentence means that at least one of
the two sentences joined together must be true. So A v B will be true if at least one
of A and B is true, and it will be false otherwise, i.e. it will be false if both A and B
are false.
⇒: This is a bit trickier. A ⇒ B says that A implies B, i.e. that if you were to discover
that A is true, then you could infer that B is true, too. Clearly this tells us that one
way to discover that the arrow sentence is false is to find out that the antecedent is
true and the conclusion is false, since that is exactly what the arrow sentence is
ruling out. And really, if all we were doing is trying to capture the meanings of
"implication", that's pretty much all we could say.
However, we can say more here because we are making the fundamental
assumption of classical logic. We know, by that assumption, that A ⇒ B has to be
either true or false, whatever truth values the antecedent and consequent get. It also
seems natural, given that we have to assign one value or the other to the arrow
sentence, to say that the arrow sentence is true if both the antecedent and
consequent are true. But it is hard to decide what to say about the truth value of
implication statements when the antecedent is false. The meaning of implication by
itself doesn't seem to give us any reason to systematically say one thing or another
about the truth values of such sentences. (Another way to say this is that
implication, as a concept in ordinary language, just doesn't seem to be a truth
function.)
PHIL 145 LECTURE NOTES 89
¬: This one is easy. Since ¬A says that it is not the case that A, it will be true if A is
false, and it will be false if A is true.
We can summarize these definitions in the form of the following characteristic truth
tables, which tell us everything we need to know about the behaviour of the connectives.
T T F T T T
T F F F F T
F T T F T T
F F T F T F
The two columns on the right tell list all possible combinations of truth values for A and B.
Under the operator/connective in each of the other columns, the truth value for the
complex sentence is listed for each of the possible combinations of truth values for the
two simpler sentences. Be sure you understand why this table corresponds to the
definitions above. Pay special attention to the column under the arrow.
With these definitions, and given the grammatical rules for constructing statements in SL,
the truth value of any complex statement is determined by the truth values of the
statement letters which occur in it. For example, consider the statement (A & B) v (D v
C) if A, B, and C are all false, but D is true. (Perhaps it would be a good idea to try to
calculate this for yourself before reading on ...)
*
Of course, this also makes it easier for conditional conclusions to be true. So the downside of this
decisions is that while we won't get "false positives" when implications show up in the premises, we do get
them when the conclusion is an implication sentence. So if an argument with an arrow sentence as its
conclusion is truth functionally valid, we need to remember that what is proved is only a sentence which
says, in effect, either the conclusion is true or the antecedent is false.
PHIL 145 LECTURE NOTES 90
First, since A is false, A&B is false. However, D is true while C is false: now, D is in the
position of A in the table, while C is in the B position, so the second column is the
relevant one. Under v in the second column, we find the value T. So D v C is true. A & B
is in the A position of the whole sentence, while D v C is in the B position. The first is
false while the second is true, so it is the third column which is relevant. Under the v in
the third column, we find an T, so the entire statement is true in this case.
It is worthwhile to practice this a bit. Be sure you can calculate the truth values of the
following statements in the case where A is false, B is true, and C is false.
1. A ⇒ (B ⇒ ¬ A)
2. (A ⇒ C) ⇒ (B ⇒ C)
3. (A ⇒ A) ⇒ (¬A ⇒ ¬ A)
You should get the values T, F, T for these three statements, given those values for the
statement letters.
Our process for figuring out whether or not there is a counterexample which shows that a
given argument is truth functionally invalid is pretty straightforward. It is sometimes
called "dead reckoning" (though explaining exactly why would require more expertise in
navigating ships than I have). It is also sometimes called "fell swoop," since it involves
answering the question "Is there a counterexample for this argument" in one fell swoop.
Let's work through an example while we describe the procedure in general. Consider the
argument with the premises P ⇒ Q and Q ⇒ R, and the conclusion P ⇒ R.
Step 1: We begin by listing all the premises and the conclusion across the page. What
would count as a counterexample? A counterexample would have to be a case in which
all the premises are true while the conclusion is false. So we shall try to construct such a
case. To indicate that the premises are all true and the conclusion false in the case we are
trying to construct, we will write a T (for true) under the main operator of each premise
(if there is one, otherwise we write it under the sentence letter), and we write an F under
the main operator of the conclusion (if there is one ...). (So we can later check our work,
it is useful to write a 1 above each such sentence of these operators, to indicate that we
put these Ts and Fs in place as the first step in our process. Once you have enough
practice to be confident in your ability to do dead reckoning, you don't need to write in
these numbers.)
Example of step 1:
1 1 1
P⇒Q Q⇒R P⇒R
T T F
Step 2: Now we can use the rules summarized in the characteristic truth tables to find out
certain things which must be the case if the premises and conclusion have the truth-values
that they do. If we find out something which must be the case, we fill in the appropriate
truth values, and number the results as having occurred in the second step.
PHIL 145 LECTURE NOTES 91
Example continued:
In this case we obviously have to look at the rule for the arrow, since all three
sentences have that as its main operator. Notice that if an arrow sentence is false,
that tells us what the truth values of the antecedent and consequent must be. (If the
arrow sentence is true there is more than one possibility, so it is going to be least
complicated to do a false arrow before a true one.) We fill in the values for the
conclusion – in this case P is true and R is false. Since we are attempting to
construct a single case, those sentences letters must have the same values where
they occur earlier in the argument, too.
2 1 1 2 21 2
P⇒Q Q⇒R P⇒R
T T T F TF F
Step 3: Repeat this process as often as necessary. Eventually you will either entirely fill
in truth values for all the sentence letters, or you will reach a point where some sentence
must be both true and false.
Example Continued:
In this case, we can reason as follows: for the first premise to be true, since the
antecedent is true, the consequent must be true (check the truth table under the
arrow if you don't see why).
2 1 3 3 1 2 2 1 2
P⇒Q Q⇒R P⇒R
T T T T T F T F F
But the antecedent of the second premise must also be false, since the premise has a
false consequent, and yet is supposed to be true!
2 1 3 3/4 1 2 2 1 2
P⇒Q Q⇒R P⇒R valid!
T T T T/F T F T F F
If we get to a situation where some sentence is forced to have two different truth values at
once, we know it is impossible for all the premises to be true and yet the conclusion false.
(Remember, the fundamental assumption is that every statement is either true or false, but
not both.) The argument is therefore valid. If instead we manage to fill in values for all
the letters without running into such a conflict, the result is that we have discovered a
case in which all the premises are true and the conclusion false. So the argument is (truth-
functionally) invalid.
Example: Another example, this time of an invalidity. Suppose our argument has
premises Q v R and P ⇒ Q, and the conclusion is P v R. (You should be able to figure
out how I worked through this example by looking at the numbers at the top, which show
what happened at each stage):
PHIL 145 LECTURE NOTES 92
312 2 1 3 212
QvR P⇒Q PvR invalid!
TTF F T T FFF
Since we are trying to construct a counterexample, we try to make all the premises true and
the conclusion false. But a false v sentence such as the conclusion must join together two
false sentences, thus step 2 makes both P and R false. But then Q has to be true, or else the
first premise would be false. But in this case it is just fine for Q to be true, since the second
premise is true anyway, because P, its antecedent, is false. We have thus successfully
constructed a counterexample, so we have shown this to be an invalid argument.
Finally: Sometimes you will reach a point where you are not forced to assign any
particular value to any particular sentence in an argument. This is unfortunate when it
happens because it makes things more complicated, so you should be sure that you really
are in this situation before you take the step about to be described. In such a situation you
will need to do two different columns of truth values. In other words, you will need to
pick some letter or connective, and assign it two different values, and see what happens
in each case! Here is an example:
2 1 1 212
P⇒Q Q⇒S PvR
F T T FFF
The problem in this case is that we can't determine what value either of Q or S must have
at this stage. The first premise is true, whatever value Q gets, and thanks to the fact that
its antecedent is false. Similarly, all we know about the sentence letters in the second
premise is that they need to combine to make the whole thing true. So, we must choose
either S or Q and assign both values to it, on separate lines. Let's use Q for this example.
2 1 3 3 1 212
P⇒Q Q⇒S PvR
F T T T T FFF
F F
3 3
Now we must proceed as before, though we are now really doing two separate
"reckonings" at once. Notice that I have introduced a second numbering at the bottom. In
the present example there is only one more step to make, but in longer cases it might be
necessary for us to do things in different orders on different rows. This can make it useful
to have two sets of numbers, so our readers, including our own later proofreading selves,
can tell what we are up to!
Anyway, to complete this example, if Q is true, S has to be true (since Q ⇒ S is
true), while if Q is false, S can have either value and still make the premise true. So we
can complete the example as follows:
2 1 3 3 1 4 212
P⇒Q Q⇒S PvR
F T T T TT FFF invalid
F F T
3 3 4
(or we could have put an F as the value for S in the final row).
PHIL 145 LECTURE NOTES 93
In very complicated arguments, it can happen that you need to do more than two rows for
a single argument. In this case it can become rather messy keeping track of numbers and
so on, but it is a good idea to do so if you can because it will make it much easier for you
to check your work. In general, remember that it is a very good idea to find values which
are forced to be the case by what is already written in, as this will keep the number of
distinct rows you need to keep track of to a minimum.
To practice using dead reckoning, I would recommend using the technique on the
arguments listed in part A of exercise 4 on pp. 248-49 of the text (p. 284 of the sixth
edition). Remember that the textbook uses different symbols from the ones used in these
notes. If you find out that an argument is valid, see if you can construct a proof of the
conclusion from the premises using the rules of SL. You should find that dead reckoning
shows that three of the ten arguments are invalid.
PHIL 145 LECTURE NOTES 94
Lecture 12
In Lectures 10 and 11, we have looked at the many arguments you might run across
which are valid simply because of the involvement of sentential connective such as '...
and ...', 'either ... or ...' 'it is not the case that ...', 'neither ... nor ...' and so on. However, we
emphasized in Lecture 10 that any particular system of logic will only capture some of
the valid patterns of argument. And SL, while very powerful for such a simple system,
leaves out valid arguments like this one:
B, therefore S
which you should have no difficulty convincing yourself to be invalid. That SL misses
such valid arguments is part of the price we pay for its simplicity.
However, suppose we have translated some argument into SL, and then determined that it
is truth functionally valid. Then, assuming the original argument is one we can
legitimately symbolize in SL (we will talk more about what this qualification amounts to
at the end of this part), the original argument is not just truth functionally valid, but is
also logically valid.
So, the fact that an argument turns out not to be truth functionally valid doesn't guarantee
that it is not logically valid. However, if it is truth functionally valid, then it is logically
valid.
In this part, we will briefly consider some of the basics of predicate logic. The price we
pay by doing predicate logic instead of sentential logic is a little extra complexity. The
benefit is that we gain an enormous increase in the number of valid inferences we can
represent in the system.
Predicate logic as a whole is in fact a very powerful system. Since this isn't a course in
formal logic, we can't take the time to go into predicate logic in detail, though. We are
going to stick with a rather weak fragment of predicate logic, which nonetheless allows
us to reason about properties or categories or classes of objects. However, the tools we
introduce to do this are essentially the ones which we need to do all of predicate logic.
The main restriction is that we are not going to have special symbols for relations
between objects — so we won't be able to represent all the reasoning you might do about
'x loves y' and so on. But by knowing how to reason about properties of objects using the
tools we will use here, you will be well prepared if you ever go on to take a formal logic
course such as Philosophy 240 or 341.
PHIL 145 LECTURE NOTES 95
A predicate in the sense we will be using here is what results when you begin with a
declarative sentence, e.g. 'Dave is a male', and replace one or more occurrences of one
singular term by blanks. So '... is a male' is a predicate. So is '... is tall', which we can get
by removing the singular term 'the CN Tower' from 'The CN Tower is tall.' The thing
about predicates which makes them interesting is that we can take singular terms other
than the one removed in constructing the predicate and drop it into the space to get a
result which is a sentence which can be either true or false. "The CN Tower is a male" is
false (I guess), and 'Dave is tall' is true. If a particular singular term makes a predicate
true, we say that term satisfies that predicate.
This simple distinction is behind the fundamental difference between SL and predicate
logic. In SL, the smallest grammatical unit we could deal with was a declarative sentence,
and the only part of the meaning of sentences we cared about was their truth value. In
predicate logic we will be able to consider more of the structure of sentences, and so will
be able to consider more than just whether they are true or false — we will be able to
consider some of what goes into making them true or false.
We will use the name PL for both our language and our system of rules in this part, much
as we used SL for both in the previous two parts.
Instead of using ' ...' to indicate a blank in a predicate, we will use variables. As variables
we will use small italic letters from the end of the alphabet, usually x, y, z, or w. So, we
write 'x is tall' instead of '... is tall'.
Next, instead of writing out the predicates in full, we will use a capital italic letter to
stand for the predicate, and we will follow it by the variable indicating the blank. So we
write Px, Ty, Az, and so on, for predicates.
Finally, instead of writing out singular terms in full, we will use small letters other than x,
y, z and w to replace them. So if we are using d for 'Dave' and Tx for 'x is tall', Td would
be 'Dave is tall.'
We will use all the logical connectives from SL, and we will give them the same
meanings they had in SL (so predicate logic is, in a sense, built on top of sentential logic).
So, to translate the sentence "If Barb is a cop, then Barb is tall" into PL, we need to
specify meanings for the formal predicates and singular terms we will use, much as we
needed to specify which sentence was represented by each sentence letter in doing
translations into SL in Lecture 10.
PHIL 145 LECTURE NOTES 96
b: Barb
Tx: x is tall
Cx: x is a cop
Cb ⇒ Tb.
But the real benefits of PL are due to the fact that in PL we can introduce quantifiers. For
one example of why quantifiers are useful, notice that in the course of our reasoning
about logic, we have been using arguments which cannot be represented in SL but which
are clearly logically valid. For instance, in introducing dead reckoning, we noted that to
falsify the claim that something is valid, which is just the claim that there do not exist any
circumstances in which ..., we needed to show that there is at least one circumstance
which .... To represent the reasoning involved in seeing that these two claims contradict
one another, we introduce two symbols with particular meanings:
To use quantifiers when we are translating from English into PL, we need to answer the
questions, "For any what?," and "At least one what?" That is, we need to specify what
group of things we are talking about. We will call the group of things we are talking
about the universe of discourse, which we often abbreviate as UD. From now on,
whenever we translate from English into PL (or when we offer an interpretation of some
formal sentence of PL), we must specify a UD so we know what the quantifiers mean.
So, using our earlier interpretations of Tx and Cx, we could specify that
UD = people
and then (x)Tx would say that 'For any person x, x is tall', i.e., it says that all people are
tall. Similarly, (x)Tx says that there is at least one tall person.
Bob is blonde,
Therefore,
Somebody is blonde.
We first put
UD = people
b: Bob
Bx: x is blonde.
Then we can symbolize the argument as Bb, therefore, (x)Bx. As we will see, this is a
valid inference in PL.
*
The upside-down A and backward E are traditional. They go back to the days when typesetters had a
limited stock of characters. They chose A for 'all', E for 'exists,' and flipped them around so that there was
no chance of confusing the quantifier x with the predicate Ax.
PHIL 145 LECTURE NOTES 97
The scope of an expression like '(x)' or '(x)', which we call a quantifier expression, is
simply the part of the formula in which it occurs that the quantifier expression applies to.
Just as with negation signs, we interpret quantifiers to have the shortest scope they can
have that is compatible with the formula as a whole making sense. In practice, what this
amounts to is that if a quantifier expression is followed immediately by a bracket or
parenthesis which is not itself part of another quantifier expression, the scope of the
quantifier expression ends at the partner of that parenthesis. If it is immediately followed
by another quantifier expression, then its scope ends at the same place the scope of that
other quantifier ends. If it is followed by a predicate letter, the scope ends at the end of
the string of variables that follows that predicate letter. If it is followed by a negation
sign, its scope ends at the same place the scope of the negation ends. And it can't be
immediately followed by anything else.
So, the scope of (x) in (x)Px is the whole formula. In (x)Px v Pa it is again (x)Px.
However, in (x)(Px v Pa) it is the whole formula. And in [(Pb v (x)Gx)⇒((x)Px v
Da)] it is, one more time, (x)Px.
The reason this matters is that the you will want to pay attention to scope of quantifiers
when figuring out what certain sentences of PL mean. For instance, (x)Px & (x)Qx
means something very different from (x)(Px & Qx). The first says that there is
something which has P and there is something which has Q. It is a conjunction. The
second statement says that there is something which has both P and Q, and so is an
existential claim. So, if Px means "x is odd", and "Qx" means "x is even", and we are
talking about the natural numbers, the first is true while the second is false. The point
about scopes is that since all three occurrences of 'x' in the second formula are in the
scope of the same quantifier, that existential quantifier is saying two things about a single
individual. In the first, there are two distinct quantifiers, and so what is said is compatible
with two distinct individuals having the mentioned properties.
One important thing to notice about quantifiers is that they will allow us to represent
relationships between negation and claims about "all" or "at least one". These, as we
pointed out in earlier lectures, SL is helpless to deal with.
If (x)Px means that there is at least one object that has P, whatever P happens to be,
what is the denial of that claim? Well, one way to express it is to say that the denial is
that "it is not the case that at least one thing has P". But that means just the same thing as
"everything fails to have P". That is, we can represent the denial of this sentence in two
equivalent way, namely as
¬(x)Px (x)¬Px
Notice that in the first version, we have a negation sentence. The sentence just says that it
if false that ... . In the second, the sentence says something about every x, namely that
each and every x fails to have P.
PHIL 145 LECTURE NOTES 98
It is important to notice that each of these claims is different from the one which is made
by a sentence like this one: (x)¬Px. That sentence says that there is at least one thing
which fails to have P. So, if at least one thing doesn't have P, then it is not the case that
everything has P, i.e.,
¬(x)Px is true. Indeed, these two formulas are also equivalent ... if it is not true that
everything has P, then there must be something which does not have P.
It is also important to see the differences between these arrangements of quantifiers and
negation signs. The first set says something about every object, namely that it fails to
have P, so it could represent the claim that nobody is perfect. The second would then be
the much weaker claim that there is somebody who is not perfect. So it is possible for
both these claims to be true at once!
Notice, also, that we can, so to speak, flip a negation sign over to the other side of a
quantifier, provided we change the quantifier from 'all' to 'at least one', or from 'at least
one' to 'all'.
Categorical Reasoning:
We now have in place all the tools we will need. In the present section, we will look at
the relationship between four very common sorts of claims that can be made about
properties or classes or categories of objects, using the notions of 'all' and 'at least one.'
Universal affirmations:
We very often make claims which say that all things of some sort have some second
property, too. For instance, we might say that 'All roses smell sweet.' There are really two
predicates here, namely being a rose and being sweet-smelling. So, to symbolize this sort
of claim, we could do this:
UD = everything
Rx: x is a rose
Sx: x smells sweet
(x)(Rx ⇒ Sx)
This sentence say, "for anything at all, if that thing is a rose, then that thing smells
sweet," which is a bit stilted as a sentence of English, but which also clearly means just
the same thing as "all roses smell sweet."
Universal affirmations, that is, statements which can naturally be represented in PL using
the universal quantifier and the arrow, are very common in English. All of the following
grammatical constructions indicate a universal affirmation:
PHIL 145 LECTURE NOTES 99
Any X is a Y.
Every single X is a Y.
The Xs are all Ys.
X is a subset of Y.
Whatever X you consider (or "take any X"), you'll see that it is Y.
Xs are Ys.
Each X is a Y.
(x)(Rx ⇒ ¬Bx)
That is, we can write "no rose smells bad" as "for anything you consider, if it's a rose it
doesn't smell bad." But if you think again about what the arrow means, it is ruling out that
the antecedent is true and the consequent is false, so this sentence is also telling us that
So, recalling what we just said above about the relationships between negations and
quantifiers, we see that this is the same as:
Many people find this a more natural way to read the claim that no rose smells bad. That
is, they find it natural to translate that as "it is not the case that there is something which
is a rose and which smells bad". The first and the third of the displayed formulas in this
section are both commonly accepted as standard ways to present universal negatives.
It is important to notice that universal affirmations and universal negatives are not
contradictories of one another!! Notice, for instance, if we let UD be the natural
numbers (i.e., 0, 1, 2, ...) interpret Rx by 'x is a number', and interpret Sx by 'x is odd',
both are false. That is, under this interpretation the first says that every number is odd,
which is false, while the second says that every number is not odd, which is also false.
Since half the natural numbers are not odd (that is, all the even ones are not odd), but the
other half are odd, there are a lot of numbers making each of these claims false.
Finally, there are somewhat fewer common grammatical constructions which indicate a
universal negative than there were which indicate a universal affirmation. But be on the
lookout for constructions like
No Xs are Ys.
There is not a single X that's Y.
Xs don't (or can't be) Y.
PHIL 145 LECTURE NOTES 100
Particular affirmations:
Obviously enough, while universal affirmations and universal negatives are not
contradictories of one another, since they are very common sorts of claims in everyday
life we should expect their contradictories to be common as well. After all, there are not
many sorts of assertions a person can make without somebody coming along to disagree!
It's not hard to see what this will be in the case of universal negatives. We've already
noticed that there are two equivalent ways of expressing a universal negative, one of
which is in the form of a statement like ¬(x)(Rx & Bx). Now, the denial of that will be
the same sentence with an extra negation sign in front of it, of course. But the
fundamental assumption of classical logic is still in force, so by the rule of double
negation elimination, the denial of this sentence is (equivalent to) (x)(Rx & Bx).
Given our earlier interpretation, (x)(Rx & Bx) says that there is an object which is a rose
and which smells bad. That is, it says "there is a bad smelling rose". And if you think
about it, this is obviously the contradictory of "no rose smells bad".
What sorts of English language constructions are used to express particular affirmatives?
A couple that you might expect.
There is an X that is Y.
Some Xs are Ys.
(Notice that in some contexts, 'some' can be used to mean 'more than one', while the
existential quantifier means 'one or more'. We will ignore that complication. In almost all
cases it is sufficient to simply translate 'some' by the existential quantifier.)
There is one other sort of grammatical construction that is used to express particular
affirmations that you should be aware of. Sometimes we express one (or more) predicates
by using adjectives. So, if I say "there is a white Cougar", or "there are white Cougars"
(perhaps I'm responding to someone who denied it talking about large cats, but I thought
he was talking about cars!). My claim is something we could translate as
UD = cars
Cx: x is a cougar
Wx: x is white
Warning: You must be cautious about when it is legitimate to do this and when not.
Sometimes a qualifying adjective is not legitimately regarded as a separate predicate, but
instead marks out a new kind. For instance, a big mouse isn't the same thing as an animal
that is big and is a mouse. It's something that's big compared to other mice. So we would
need to use a single predicate for "x is a big mouse". Or, to stick to our earlier adjective,
white noise isn't something that is both white and a noise!
PHIL 145 LECTURE NOTES 101
Particular negatives:
What will the negation of a universal affirmative be, then? Let's consider the obviously
false sentence "All dogs are brown". We can translate it into PL by
UD = animals
Dx: x is a dog
Bx: x is brown.
(x)(Dx ⇒ Bx)
How do we know it's false? Well, there are white dogs and grey dogs and so on. In other
words, there are dogs that are not brown. Before reading on, try to symbolize that sentence.
Now, the most obvious way to get the contradictory of the universal affirmation we are
consider is to simply negate it, thus ¬(x)(Dx ⇒ Bx). But it's not so clear what that really
asserts. Recall the discussion of negations of quantified sentences ... we can move the
negation sign inside the quantifier, as long as we also change the universal quantifier to an
existential one. So the sentence becomes (x)¬(Dx ⇒ Bx). Finally, when is an arrow
sentence false? You will recall that it is only false if the antecedent is true and the
conclusion is false. So the sentence can be written equivalently as (x)(Dx & ¬Bx). And
that, clearly, is what we are after, since it clearly says that there are dogs that are not brown!
If you look at the form of this sentence you will see that it is very similar to the form of a
particular affirmation. The difference, not surprisingly, is one carefully placed negation
sign.
What grammatical constructions are used to express particular negatives? Here are a
couple to watch out for:
The four types of statement we have just considered have other names, which go back to
the days when people studied categorical logic in its traditional "Aristotelian" form *. Each
is named with one of the first four vowels of the alphabet, and they are often presented by
giving the "square of opposition". We give the standard method of expressing each sort of
each type. Notice that the E and O types each have two distinct standard forms.
*
This sort of study was typically preoccupied with "syllogisms". In it all statements were regarded as
having "subject/predicate" form, but the same grammatical objects had to be able to serve as subjects in one
sentence, but as a predicate in the next. Not having anything corresponding to singular terms, it had to
resort to desperate measures to handle arguments about, for instance, individual people. Syllogisms all had
three sentences, two premises and a conclusion, so a great deal of ingenuity was sometimes called for to
force the arguments people actually give in real life into something approximating syllogistic form. Worst
of all, though, it turns out to be far too weak a logic to capture a lot of interesting and important valid forms
of argument. On the other hand, anything that can be done in traditional syllogistic logic can be easily done
in predicate logic.
PHIL 145 LECTURE NOTES 102
This little box is called the "square of opposition" because in it each sentence is on the
diagonal from its contradictory.
When two statements cannot both be true at the same time, but they can be false at the
same time, we say that they are contraries. If they can both be true, but cannot both be
false, they are called subcontraries. In some dated presentations of categorical logic you
will still find it asserted that the top two are contraries and the bottom two are
subcontraries. This seemed plausible give some old (pre-quantifier) formulations of logic
because people used to think that in order to reason about categories of objects you had to
assume that all categories were non-empty. And if we assume that there is at least one X,
then it is true that A and E are contraries and I and O are subcontraries.
However, one of the benefits of predicate logic is that it allows us to reason
correctly about empty classes of objects. And it is clear that I and O can both be false
under a particular interpretation, namely if there are no Xs! The first says there is
something that is both an X and a Y, but if nothing is X, that's false. Similarly, if nothing
is an X, then nothing is both an X and not a Y. So if we put
UD = living animals
Xx: x is a unicorn
Yx: x is pink
Then it simply is false that there is a pink unicorn, and it is false that there is unicorn that
is not pink, because, to put it bluntly, there ain't no unicorns!
Similarly, we very often make claims in the A and E form where the antecedent
predicate turns out not to apply to anyone. If I say "Anyone who does not study for the
final exam will get a poor mark", that statement can be perfectly true even if it scares
everybody enough that they study and so the antecedent predicate doesn't apply to anyone
who takes the course. In predicate logic, in a universal conditional statement if the
antecedent doesn't apply to anything in the UD, the statement is true. The upshot is that
an A and the corresponding E can both be true at once, so A and E statements are not
genuine contraries. The above example is one of them ... since there don't happen to be
any unicorns, it turns out to be true that all unicorns are pink and that they are not pink.
In any case, it is very important to avoid confusing mere contraries with genuinely
contradictory pairs of statements. In classical logic, exactly one of any pair of
contradictory statements must be true, so they obviously can't both be false at once,
either.
It is important, by the way, to make sure that if two predicates are only contrary to one
another (that is, if a single object cannot have both predicates applied to it, but it can fail
to have both), that one gives a separate predicate letter to each when symbolizing. For
instance, to symbolize 'Bob is not beautiful' and 'Bob is not ugly', we must use separate
letters for 'x is ugly' and 'x is beautiful', since it is quite possible to fail to be beautiful at
the same time one fails to be ugly (and most of us manage that particular feat, it seems to
PHIL 145 LECTURE NOTES 103
me.) Beautiful doesn't just mean "not ugly". You might have noticed that we gave distinct
predicates to "x smells bad" and "x smells sweet" for similar reasons above. An
unscented rose would not smell bad, and it wouldn't smell sweet, either.
Exercises: You can get practice translating sentences of English into PL by using the
exercises on p. 187 of the text (pp. 215-16 of the sixth edition). Note that your answers
will be different from the ones provided in the textbook, but the text does suggest
whether they should be symbolized as A, E, I or O, so you can compare your answer to
the textbook answer using the square of opposition above. Make sure you specify a UD.
Remember that any sentence in A, E, I or O form needs to have two predicates in it, so
you'll need to specify interpretations of two predicates.
UD: machines
Ax: x is an airplane
Gx: x comes equipped with a computerized guidance system.
In the case of SL we could give a simple set of rules. This set was complete in the sense
that, once we made the assumption that every sentence was either true or false but not
both, any valid argument in SL had a proof of its conclusion from its premises using only
those nine simple rules. To give a complete set of rules for PL would be more
complicated than is appropriate in a first course in critical thinking, so we will instead
simply list some simple but important valid rules of reasoning about predicates and
quantifiers.
First of all, all the rules of inference from SL remain valid in PL. (Notice, however, that
these rules apply to sentences whose main operator is ¬, ⇒, & or v. So you can't apply
modus ponens, for instance, to a sentence in A form, since the main operator in an A-
sentence is the universal quantifier, not the arrow. However, if we have the sentence
(x)Px ⇒ Pd, we can indeed apply the rule modus ponens if we can also get (x)Px,
because the main operator in the former is the arrow and not the universal quantifier.
When trying to determine whether a quantifier is the main operator of a sentence or not,
you should reason about it in the same way as you would about a negation sign.)
We have already seen some equivalences for dealing with combinations of quantifiers
and negation signs. For each of the following pairs, you can change from one to the
other, simply justifying it by saying "equivalent". (In these rules we are only concerned
with the very start of the formula: the main operator of the formula must be either the
negation sign or the quantifier, whichever is first. So we simply put [ ... ] to indicate that
we can use these equivalences whatever happens later in the formula, provided of course
that it is the same thing in each of the two equivalent formulas!)
¬(x)[ ... ] and (∃x)¬[ ... ]
¬(∃x)[ ... ] and (x)¬[ ... ]
Here are a couple of inference rules whose validity is pretty obvious, when you consider
the meaning of the quantifiers.
n. Pd
(∃x)Px n, existential generalization
and
n. (x)Px
Pd n, universal instantiation
Comments:
• There is nothing special about the use of d here, of course. It is meant to be any
singular term of the language. This is why we write the rules in bold, you will
recall.
• Of course, similar remarks apply for universal instantiation. Moreover, in this case
we need to be sure to replace x by d at every spot where x occurs, or else we will
have an x without a quantifier to tell us what it means. (In a more rigorous treatment
of logic, we would set out detailed rules and definitions to make clear that such a
beast would not be a sentence of PL, and so doesn't belong in proofs.) So, from an
A-sentence like (x)(Ax ⇒ Gx) we can infer Ad ⇒ Gd, for instance from "All
philosophers work late at night" we can infer that "if Dave is a philosopher then
Dave works late at night".
We are not going to spend much time on proofs in PL in this course. But even with these
few simple rules we can prove that many arguments are valid, including ones like this
one which, notoriously, old-style treatments of categorical logic needed to do back flips to
explain:
Example: All humans are mortal. Socrates is human. Therefore, Socrates is mortal.
(x)(Hx ⇒ Mx)
Hs
Therefore,
Ms
We can prove that this conclusion follows from these premises as follows:
1. (x)(Hx ⇒ Mx) premise
2. Hs premise
3. Hs ⇒ Ms 1, universal instantiation
4. Ms 2, 3, modus ponens
(Traditional, i.e. old-style categorical logic needed to turn the second premise into
something like "everything identical to Socrates is mortal", and so treat it like an A-
sentence. This is, to say the least, somewhat unnatural.)
Showing Invalidity
In the case of SL, we showed invalidity by dead reckoning. The goal was to find a way to
assign the values T and F to the sentence letters of all the premises and the conclusion in
such a way that all the premises are true and the conclusion false. This was sufficient
since in SL the only part of the meaning of a basic sentence (i.e. one we symbolized with
a single letter) was its truth or falsity. If we turned up such an assignment of truth values,
that was sufficient to provide a counterexample to the claim that the argument is (truth-
functionally) valid.
The same idea of supplying counterexamples applies in the case of arguments formalized
in PL. However, an interpretation now is not just an assignment of truth values, as should
be clear from what has been said so far. Instead, we need to supply a UD, plus
interpretations of all the predicates and all the singular terms that occur in the argument.
For the UD we can pick any collection of "things" (these can be abstract objects such as
numbers, or whatever you like, as long as they are discrete entities of some sort). Each
predicate must be assigned some predicate of English which the members of UD each
either satisfy or fail to satisfy. And each singular term must be assigned to some object
in UD. (So if UD is numbers, d can't get the value Julius Caesar, since he's not a
number!)
Since we are intending to provide counterexamples to the claim that an argument is valid,
we want to pick all these things so that all the premises are true and the conclusion false.
However, while in the case of SL we had a systematic procedure guaranteed to find such
a counterexample if one exists, we don't have one in PL (and if you take more logic
classes, you will see how it can be proved that there cannot be such a systematic
procedure for PL). So we have to rely on ingenuity and insight to find these examples,
though it gets easier as you get more familiar with PL.
Here is an example. Suppose you are asked to show that the following argument is not
valid.
1. (x)(Mx ⇒ Bx)
2. (x)(Kx ⇒ Bx)
Therefore,
3. (∃x)(Kx & Mx)
We can reason as follows: To make the conclusion false, we need to make sure that
whatever we decide to do with M and K, nothing satisfies both predicates. But we want the
premises to both be true. The first says that all Ms are B, and the second says that all Ks
are B. Can you think of any examples?
One example that occurs to me has to do with numbers, and we've used it before. Odd
numbers are numbers, even numbers are numbers, but there are no numbers which are
both odd and even. So we can give a counterexample by putting
UD = integers
Kx: x is odd
Mx: x is even
Bx: x is a number
Of course, there are many (indeed, infinitely many!) counterexamples that could be given
for any invalid argument pattern, so if yours is not the same as mine that's not necessarily
a problem. Just make sure that the conclusion is false and both premises are true under
your interpretation.
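When the UD you pick is finite, you can even check an interpretation mechanically. Here
is a minimal sketch in Python (my own illustration, certainly not required for this course)
which evaluates the premises and the conclusion of the argument above over a small set
of integers:

    UD = [1, 2, 3, 4, 5, 6]      # a small, finite universe of discourse
    K = lambda x: x % 2 == 1     # Kx: x is odd
    M = lambda x: x % 2 == 0     # Mx: x is even
    B = lambda x: True           # Bx: x is a number (every integer is)

    premise1 = all((not M(x)) or B(x) for x in UD)    # (x)(Mx ⇒ Bx)
    premise2 = all((not K(x)) or B(x) for x in UD)    # (x)(Kx ⇒ Bx)
    conclusion = any(K(x) and M(x) for x in UD)       # (∃x)(Kx & Mx)

    print(premise1, premise2, conclusion)   # True True False

Since the premises come out true and the conclusion false, the interpretation is a genuine
counterexample.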
By the way, the PL argument just shown invalid is a symbolization of the argument
Some monkeys are mammals because all mammals give birth to their young alive
and all monkeys give birth to their young alive.
So we have shown that this argument is (as we say, "quantificationally") invalid by (1)
correctly symbolizing it in PL, then (2) providing a different interpretation of the PL
sentences on which the premises are true and the conclusion is false. Obviously, in step
(2) you are typically going to use an interpretation with a very different subject matter
from the one involved in the English language argument you begin with.
This process is a nice, crisp version of what the textbook calls "refutation by logical
analogy" on pp. 325-327 (pp. 352-64 of the sixth edition). It is better thought of as a part of
logic rather than a special kind of argument by analogy.
Exercises:
You can get some good practice by working on the exercises in exercise set 3 on pp. 203-
204 (pp. 232-33 of the sixth edition). You should ignore the instructions for the exercises
given in the text, though, since they apply to the different system of formalizing arguments
and showing validity and invalidity used in the text.
Instead, you should: (1) for questions 1-14, symbolize the argument in PL, being sure to
provide a UD, plus appropriate interpretations for any predicates and singular terms you
use. (2) For questions 2, 7, 11 give an interpretation which shows that the argument is
invalid.
Limitations of PL
With the move to predicate logic, we have captured a much larger class of valid
arguments. * However, we by no means have captured all of them. Because of this,
logicians have developed a variety of systems which are stronger still than PL to capture
some of the sorts of reasoning that cannot be handled effectively by PL. So you might run
across references to modal logic (which deals with arguments having to do with necessity
and possibility), temporal logic, deontic logic (for reasoning about obligation and
permissibility), logics of causation, intentional logics (for dealing with the content of
beliefs, desires, and other mental states), and so on.
So, if we symbolize an argument, are confident we have done so correctly, and show that
it is not quantificationally valid, we still cannot say with complete confidence that the
argument is invalid. However, we can be somewhat more confident in this case than if we
had merely shown the argument not to be truth-functionally valid. Indeed, it is probably
safe enough to say as a general policy about such cases that it is up to the person
presenting the argument to point out why the argument should be regarded as valid even
though it is quantificationally invalid. (For example, the person might be able to explain
that the argument relies on principles of modal logic, rather than just PL.)
But there is another limitation we should consider, one of quite a different sort which
applies to both our discussion of SL and our discussion of PL. It has to do with the
fundamental assumption of classical logic. That assumption, you will recall, is that every
sentence is true or false, and not both. And this means that we need to pay a bit of
attention to the clause "if an argument can be correctly symbolized" in SL or PL that we
have been putting into our remarks about what can be shown by using SL and PL.
The problem now under consideration arises because there are declarative sentences of
English which it is not obviously legitimate to assume are either true or false. We have
already encountered some of these when considering vagueness in an earlier lecture. Here
is an example of where vague predicates can lead to problems if we uncritically adopt the
fundamental assumption of classical logic.
* Even with the rather restricted version of PL we looked at in this lecture, we capture many more
validities. If you go on to take a course in logic, you will find that with some seemingly minor extensions
of the basic ideas covered in these notes you get a large increase in logical power.
Consider an argument along these lines:
1. A person with (say) 100,000 hairs on his head is bushy-headed.
2. If you pluck one hair from the head of a bushy-headed person, he is still bushy-headed.
Therefore,
3. A person with 99,999 hairs on his head is bushy-headed.
This certainly seems to be a cogent argument. But perhaps you can already see trouble
coming. For now we can have an argument with 3 and 2 as premises and the conclusion
4. A person with 99,998 hairs on his head is bushy-headed.
Repeating the argument often enough, we eventually reach the conclusion that a person
with no hairs on his head at all is bushy-headed, which is plainly false.
Well, premise 1 certainly looks hard to dispute, and it's pretty hard to argue with the
validity of the first argument. So it looks like the culprit has to be premise 2. But what
could be wrong with that?
There are various ways of diagnosing what has gone wrong which have been proposed by
logicians. (In fact, what the correct method is for dealing with vagueness in logic is
currently a very significant area of research among professional logicians.)
Many logicians agree, though, that one way or another the problem comes with
trying to maintain the fundamental assumption of classical logic at the same time we are
dealing with vague predicates. If we are forced to say that 2 is either true or false, we
seem to be stuck with saying that it is true. It certainly isn't easy to see on what grounds
we could say it is false. But, these logicians agree, what we really should say about 2 is
that it is almost true. Since 'bushy headed' is vague, it has 'borderline cases', which is to
say (among other things having to do with the distribution of hairs around the person's
head, I suppose) that there are some numbers of hairs for which it isn't clear whether
having that number makes you bushy-headed or not. And in the case of someone with
a borderline number of hairs, we might be rather dubious about whether 2 holds or not.
So, since 2 is a general claim, these logicians would say that it is close to true, but
it's not really true and it's not really false. It's somewhere between those two. Maybe what
is true is that plucking one hair from the head of someone who is clearly bushy headed
won't make him clearly not bushy headed. But if we try to get the same series of
arguments going using that premise in place of 2, it won't work, since we can conclude
from 1 and that claim only that the person in question is not clearly not bushy headed.
And then we can't re-apply the same argument pattern.
So, according to this diagnosis, the problem is that we have treated a vague concept,
namely 'bushy-headed', as though it were a non-vague concept. If these logicians are
right, vague concepts can result in declarative sentences which are neither true nor false,
but which are somewhere in between those two values. And so we need to be rather
cautious about uncritically applying the fundamental assumption of classical logic, or the
patterns of inference which depend on that fundamental assumption, when we are dealing
with vague concepts. *
The textbook has a section in which it discusses what it calls the fallacy of "slippery
assimilation" on pp. 342-343 (pp. 382–84 of the sixth edition). What it describes is just
this sort of case in which people reason about vague concepts as though they were
precise. Either way of thinking about these questions is fine for the purposes of the final
exam. I confess to being puzzled about why the author of the textbook thinks that fallacy
is a fallacy due to analogies, though.
* I should briefly mention one other approach which tries to preserve the fundamental assumption of
classical logic. Some logicians argue that it is just a mistake to think that premise 2 is "almost true".
Instead, they say, what has gone wrong is that premise 2 is, despite our first intuitions, completely false. To
defend this view they typically end up saying that there is no problem with classical logic; the problem is
our ignorance about the concept of bushy-headedness. Typically they will say that there is, even though we
don't know it, some particular number of hairs required to be bushy-headed. So, if you've got exactly that
number, and you have one plucked out, that moves you from bushy-headed to not bushy-headed. Most
people find this a rather startling conclusion. The defenders of this view suggest that borderline cases are
just cases where we are not sure about the correct answer, but that is a different matter from saying that
there is no correct answer.
Lecture 13
Of course, this also means we will need to look at what sorts of things can go wrong
in the relationship between premises and conclusions for these sorts of arguments, since it
won't be enough to simply say that the conclusion "doesn't follow". Saying that is simply
noting that the argument is non-valid and we're supposing that in usual cases the arguer
will be well aware of that when she gives an argument of one of the sorts under
consideration here. When evaluating these arguments we need to ask whether the G
condition is satisfied, but doing so is a matter of distinguishing better and worse
arguments, in particular with respect to just how much support the premises provide for
the conclusion. Unlike validity, the question doesn't always have a yes or no answer.
Instead it gets an answer that the premises support the conclusion to some degree or
other. It then becomes a judgment call whether that degree of support is enough that we
should say the argument is cogent.
The next three lectures will be devoted to a discussion of scientific reasoning. Of course,
we are not going to be able to discuss this material in the sort of depth that would be
required if you wanted to be a specialist in some particular scientific discipline. But some
understanding of some key concepts from scientific practice is very useful.
Consider the large number of reports about scientific studies that you run across
from day to day. It is not uncommon for people to think, for instance, that "everything
gives you cancer" (Joe Jackson wrote a song with this title, in fact), because so many
studies are reported linking various substances to various sorts of cancer. The unfortunate
result is that some people decide that there is nothing they can do to avoid cancer-causing
agents, so they might as well, say, eat a high fat diet since "no doubt" whatever low fat
foods they ate instead would cause cancer, too. This bad bit of reasoning could end up
cutting years off somebody's life. Or again, one might see a report of a study which
showed that some drug dramatically reduced, say, breast cancer in high risk women, but
also that the substance caused a marked increase in uterine cancer. If you are a woman
with relevant risk factors for breast cancer, what should your attitude be? There is often
enough information in even the typically bad reports of these findings one finds in the
mass press to lead you to a better informed decision about what to believe than you
would get merely by reading the headlines.
Of course, scientific information is prone to the same sorts of misuse as any other
sort. People with a particular agenda (whether to sell you particular products or to sell
you a particular political platform) will sometimes deliberately mislead you with
"scientific" findings. Moreover, there are particular features of some sciences, especially
the social sciences, which need to be borne in mind when evaluating reports of the
findings of research in these fields. A critical thinker needs to be aware of both of these
sorts of considerations.
The next three lectures will proceed as follows. In Lecture 13 we will discuss correlations,
and the important concepts that are involved in studies which purport to establish that
certain properties are correlated in a population. In Lecture 14 we will go on to discuss the
notion of causal relations between properties, and will consider what needs to be done to
establish that such a relation exists. Finally, in Lecture 15 we will consider some things you
need to know when it comes to reading reports of scientific findings in the press, in
particular with respect to understanding reports of studies in the social sciences.
You will notice that the Lectures and the Lecture Notes go much deeper than the
discussion in the textbook. The material in the textbook is accurate enough, but it is not
enough to get you through this section of the course.
Correlations
Suppose someone were to say to you that "Married men are more likely than unmarried
men to live past age 70, but unmarried women are more likely than married ones to do
so." In doing so, the person has made two claims about correlations. In particular, she has
claimed that among men being married is positively correlated with living past 70, while
among women it is negatively correlated with it. What we want to do in this section is get
straight what such a claim means. To do so will require that we introduce a bit of
terminology along the way.
Some things you can notice by considering the two correlation claims made here:
1. Each claim is made about some definite population: in the one case men, in the
other women.
2. Each claim involves two variables. A variable is a respect in which members of
the population can differ, and the values of a variable are the different ways a
member can be with respect to it. Here the variables are marital status (married,
unmarried) and living past age 70 (yes, no).
3. Of course, a variable can have more than two values. One variable that applies
among people is hair colour. There are many more than two colours of hair,
not to mention that some people have no hair.
4. However, for purposes of statistical study, and for careful claims of correlation,
the values of variables should be such that (i) no member of the population has more
than one value of a given variable (i.e., the values are exclusive) and (ii) every member of
the population has one value for each variable (i.e., the values are exhaustive).
We will focus on the simplest sorts of correlations, namely those where each variable has
two values. Once you have the hang of these, it's not hard to see how you can generalize
to consider correlations in cases where variables have more than two values.
We need only one more concept before we can state what a claim of correlation amounts
to. Whatever population we are considering, there is some number of members in that
population. Furthermore, for each value of a variable, there is some number of members
of the population which has the property corresponding to that value of the variable. If
you divide the second number by the first, the result is called the proportion of the
population which has the given property. It is often expressed as a percentage, though it
is not uncommon to see it expressed as either a fraction or a decimal.
Now, it is not difficult to extend the notion of a proportion from a whole population
to some subpopulation. A subpopulation is just that part of the population which takes
some particular value of one of the variables. Let's consider how this works with our
above example.
Among the population of men, some percentage lives past the age of 70. This
percentage is just the proportion of men who do so. But we can also consider the
subpopulation of married men. There will also be a percentage of married men who live
past age 70 and it might well be different from the percentage for the first group.
Similarly, there will be a proportion of unmarried men who live past age 70.
Finally, it should be clear enough what a correlation is. We say that two values X and A
of two different variables are positively correlated when the proportion of the
subpopulation with X that has A is greater than the proportion of the non-X
subpopulation that has A. In our first case, if the proportion of married men who live past
70 is, say, 55%, while the proportion of unmarried men who do so is 48%, then there is a
correlation and the claim is true.
We say two values X and A are negatively correlated if the proportion of Xs that
are A is smaller than the proportion of non-Xs that are A. Notice that if being X is
positively correlated with being A, then being non-X is negatively correlated with being
A. We say the two values are uncorrelated if the proportion of Xs who are A is the same
as the proportion of non-Xs who are A.
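For the curious, here is a minimal sketch in Python (mine, with made-up data chosen to
match the 55% and 48% figures above) of how these definitions apply:

    def classify(population):
        # population is a list of (is_X, is_A) pairs
        xs = [a for (x, a) in population if x]
        non_xs = [a for (x, a) in population if not x]
        p_x = sum(xs) / len(xs)              # proportion of Xs that are A
        p_non_x = sum(non_xs) / len(non_xs)  # proportion of non-Xs that are A
        if p_x > p_non_x:
            return "positively correlated"
        if p_x < p_non_x:
            return "negatively correlated"
        return "uncorrelated"

    # (married?, lived past 70?) pairs for a made-up population of men
    men = ([(True, True)] * 55 + [(True, False)] * 45 +
           [(False, True)] * 48 + [(False, False)] * 52)
    print(classify(men))    # positively correlated (55% vs. 48%)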
Section summary: To understand a claimed correlation, you will need to know a few
things. First, what population is the correlation being claimed for? Secondly, what are the
two variables in question, and how is the person claiming the correlation dividing these
variables into values? Third, is the claim of a positive or a negative correlation?
(Typically when people use the word "correlation" without any qualification it is a
positive correlation they have in mind.)
So, some examples. For the following claims, identify the likely population, as well as
the two variables and the values of each of those variables relevant to the claim made.
(Note that I'm making these up ... I don't have any idea whether any of them are true!)
(1) In Canada, Protestants are more likely than Catholics to regularly eat fish on Friday,
as surprising as that might seem.
(2) Dogs are likelier to kill their owners than are other pets.
Population: Pet owners. Variables (values): Kind of pet (Dog, other); Killed by pet
(yes, no). Positive correlation between dog and yes.
Samples
Examining every member of a large population is usually impractical, and sometimes
downright impossible. Investigators get around these practical problems by investigating, not the population of
interest in its entirety, but some sample of the population. On the basis of what is
observed when the sample is examined conclusions are drawn about the population as a
whole. This is a fundamental practice in many areas of science, so it is important to have
some idea of how it works.
Randomness:
You no doubt have heard people use the expression "random sample". This is actually a
rather misleading phrase, since it suggests that randomness is a property of the sample
itself. However, randomness is really a property which may or may not be had by a
process used to select a sample. Each time we select a member of the population and add
it to our sample, it is called a trial. When we are selecting just which members of the
population are to be included in a sample, a process is random if every member of the
population has an equal chance of being selected on each trial. Strictly speaking, then,
we should talk about randomly selected samples, since that is what people mean when
they talk about random samples. There is no way to tell, merely by considering a sample,
whether it was randomly selected or not.
Suppose, for simplicity, that what we want to investigate is the incidence of some
horrible disease in a colony of ants. The healthy ants are red, and the disease makes them
blue. Suppose also, though we don't know this, that exactly one half of the ants are blue.
We are going to conduct some number of trials, and after each one, we will have
observed some number of blue ants. If we divide the number of observed blue ants by the
total number of trials, we get the frequency of blue ants in our sample to that point.
Now, if we do one trial, and the selection procedure is random, we will get the
answer blue or the answer not blue. So the frequency will either be one or zero, but in any
case it is impossible that we get anything close to 0.5, which is the actual proportion of
blue ants in the population. So a sample of one isn't going to be very helpful.
Now, if we do 2 trials, there are four possibilities for what we might observe: <red,
red>, <red, blue>, <blue, red> and <blue, blue>. Furthermore, since every ant has the
same chance of being chosen each time, each of these is equally likely. But notice that
the frequency of blue is the same for the middle two choices, namely it is 0.5. So there is
a 25% chance that we will get frequency 1, another 25% for frequency 0, and a 50%
chance of frequency .5. So we have a 50% chance that we would get the right answer in
this case.
There are eight equally likely possibilities for a sequence of three trials: <red, red,
red>, <red, red, blue>, <red, blue, red>, <red, blue, blue>, <blue, red, red>, <blue, red,
blue>, <blue, blue, red> and <blue, blue, blue>. We now have a 1/8 chance for each of
frequency 0 and 1, and a 3/8 chance for each of frequencies .33 and .67. Note that we can't get
precisely the right answer in this case, but we now have a .75 chance of getting the results
which are within .17 of the right answer.
It is not too difficult to work out the sixteen possibilities for the results if we have
four trials. In this case we have only a 3/8 chance of getting the answer 0.5 (notice that
this means we have less chance of getting exactly the right answer than we had with only
two trials!). However, we now have a 7/8 chance (i.e. about 88%) of getting a frequency
between .25 and .75.
By now you should be seeing a pattern. The more trials we do, in general, the less
likely we are to get exactly the right answer, but we are far less likely to get an answer
that is very far from right. Indeed, after five trials, we have a 62% chance of getting either
.4 or .6 as a frequency, and a 94% chance of getting between .2 and .8 as a frequency. By
the time you get to 25 trials, you have a 95% chance of getting a frequency of between .3
and .7. By 100 trials, you have a 95% chance of getting between .4 and .6. By 250, it is a
95% chance of between .42 and .58, and by 500 trials it is between .46 and .54.
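If you would like to check figures like these for yourself, the chances can be computed
exactly by counting the equally likely sequences of trials, just as we did above. Here is a
minimal sketch in Python (mine, and not required for the course), assuming as before that
the true proportion of blue ants is 0.5:

    from math import comb

    def chance_within(trials, tolerance, p=0.5):
        # chance that the observed frequency lands within `tolerance` of p
        total = 0.0
        for k in range(trials + 1):
            if abs(k / trials - p) <= tolerance:
                total += comb(trials, k) * p**k * (1 - p)**(trials - k)
        return total

    print(chance_within(5, 0.1))    # 0.625, the 62% figure above
    for n in (25, 100, 250, 500):
        print(n, round(chance_within(n, 0.1), 3))
    # the chance of landing within .1 of the true proportion grows with
    # the number of trials, even as the chance of hitting it exactly shrinks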
You have no doubt noticed that it is very common in reports of the findings of polls to
hear the phrase "accurate to within n%, 19 times out of 20". This is a report of the margin
of error for a given sample size. It is a report, in other words, of what we said about
sample sizes of 25, 100, 250 and 500 trials. It tells you that 19 times out of 20 (i.e., 95%
of the time), the distribution of a property in a population will be in the range of observed
frequency plus or minus the margin of error. So if the observed frequency in the sample
is 65%, and the margin of error is 2%, then 95% of the time, the actual incidence of the
property is between 63 and 67 per cent — assuming that the selection procedure was
random.
Now the reason that it is useful to have larger samples can be expressed more precisely.
Presuming the selection procedure was random, a larger sample has a smaller margin of
error.
How would we use margins of error to evaluate the evidence for someone's claim that
there is a positive correlation among men between being married and living past age 70? Suppose his
evidence was that he had observed 700 married and 250 unmarried men (now dead),
randomly selected from the population at large. Suppose he had observed that 52% of
married men, but only 49% of unmarried men had lived past the age of 70. The margin of
error for a sample size of 250 is about 6%, while the margin of error for a sample of 700 is about 4%. In this
case, the range for being 95% confident is, respectively, from 48 to 56 percent for
married men, and from 43 to 55% for unmarried men. So for all this data tells us, we
cannot even be confident that being married is not negatively correlated with living past
age 70.
A useful guideline for trying to evaluate claims that certain evidence supports belief
in a correlation, then, is this. Figure out what the margin of error is for each of the relevant
samples. (This might involve some educated guessing!) Then, presuming that the report
tells you what the actual observed percentages are in each case (or gives you enough
information to figure that out), figure out what the relevant range is for being 95%
confident for each value (in other words, for each percentage start with the observed
frequency, and get the range by first subtracting the margin of error from it, then by adding
the margin of error to it). If the ranges overlap, then the cited evidence does not provide
good evidence for a correlation (it is not statistically significant, as we sometimes say).
Here are some "rules of thumb" for remembering what the margin of error is for samples
of various sizes. For 25 trials, it is about 25%, for 100 about 10%, for 500 about 5%, for
2000 about 2%, and for 10,000 it is about 1%. *
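Putting the guideline together with these rules of thumb, here is a minimal sketch in
Python (my own; I use 100 divided by the square root of the sample size as a stand-in for
the rule-of-thumb margins, which it approximates fairly closely), applied to the
married/unmarried example above:

    from math import sqrt

    def confidence_range(observed_pct, sample_size):
        margin = 100 / sqrt(sample_size)  # margin of error, in percentage points
        return (observed_pct - margin, observed_pct + margin)

    married = confidence_range(52, 700)    # roughly 48 to 56
    unmarried = confidence_range(49, 250)  # roughly 43 to 55
    # overlapping ranges mean the data don't support the claimed correlation
    overlap = married[0] <= unmarried[1] and unmarried[0] <= married[1]
    print(married, unmarried, overlap)     # the ranges overlap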
Example: Suppose a report in the newspaper said: "A new Gallup poll shows that women
are more likely than men to support the Liberal Party. The survey, conducted last week,
asked 1000 people which party they would support if an election were held tomorrow.
52% of women, but only 46% of men, said they would vote Liberal."
* These numbers are taken from Ronald N. Giere's book Understanding Scientific Reasoning, fourth edition
(Fort Worth: Holt Rinehart Winston, 1997). The discussion throughout this lecture is greatly indebted to
Giere's discussion of related matters in that book.
To see whether we should accept the claim, we would need to think as follows.
First, what is the population? It's presumably the adult population of some political
jurisdiction (which is unspecified in the example, though in an actual newspaper article it
would surely be made clear whether it was Ontario, Canada, or whatever). What are the
variables? They are sex, with values male and female, and answer to the question "What
party would you support if an election were held tomorrow?", with values Liberal and
Other. The correlation claimed is that being female is positively correlated with saying
you will vote Liberal.
Now, do the cited numbers provide good support for this claim? Well, we need to
do some educated guesswork to answer that question. First of all, the overall sample is
1000 people. Gallup is a reputable polling company which no doubt would have ensured
that about half the interviewed people were of either sex. So it's pretty safe to figure each
sample is about 500, so each sample has a margin of error of about 5%. But then the
ranges are from 47 to 57% for claimed Liberal support among women, and from 41 to
51% among men. So this survey by itself doesn't supply good evidence for the claimed
correlation, since the ranges overlap considerably.
Remark: Notice that if we have a genuinely random method of choosing members of the
population for our trials, then the margin of error is determined completely by the size of
the sample. In particular, it doesn't depend at all on the size of the actual population. (If
you know the terminology, you will see that our description of what it means for a
selection procedure to be random means that we are sampling "with replacement". That
is, even after a member is picked once, it needs to be available for selection again, if we
are to have a truly random selection procedure.) So it doesn't matter if the population has
10 million members or only 10: 50 trials will still only give you an estimate of the proportion
with a very large margin of error. Of course, an obvious lesson to draw is that if you're
interested in a population you know to have only 10 members, a study based on random
sampling is not the wise choice to make when choosing your experimental methods!
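The point is easy to see by simulation. Here is a minimal sketch in Python (mine, with a
made-up property had by half of each population): sampling with replacement from a
ten-member population and from a million-member population, fifty trials each.

    import random

    def estimate(population, trials):
        # with replacement: every member has an equal chance on each trial
        hits = sum(random.choice(population) for _ in range(trials))
        return hits / trials

    small = [1, 0] * 5          # 10 members, half with the property
    large = [1, 0] * 500_000    # a million members, same proportion
    print(estimate(small, 50), estimate(large, 50))
    # both are equally rough estimates of the true proportion (0.5):
    # the margin of error depends on the 50 trials, not the population size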
When someone presents you with data obtained by considering a sample then makes a
claim about the population in question as a whole, she is really offering you an argument.
Let's consider two simple sorts of such arguments in an abstract form:
Type 1:
1. X% of a randomly selected sample of population P has property A.
Therefore (probably),
2. Roughly X% of P (plus or minus the margin of error) has A.
Type 2:
1. In randomly selected samples from population P, the proportion of Xs that are A
differs from the proportion of non-Xs that are A by more than the margins of
error can account for.
Therefore (probably),
2. The two values are correlated in P.
Arguments of Type 1 are what lie behind simple statistical claims – i.e. behind claims
that certain proportions of a population have the differing values of a single variable.
Type 2 arguments are relevant to claims about correlations.
What to look for when evaluating such an argument: One common mistake in Type 1
arguments is just a special case of something we've seen before. It's very
common for people to try to draw a conclusion which is not suitably qualified from a
sample. So, people will say "Tory support is at 22%", for instance. This is not a
conclusion you can draw from a sample.
Here we will always have a margin of error, and the "probably" will be part of the
conclusion: 19 times out of 20 (for instance), the actual value will be within n% of the
observed result.
As we have seen, in the case of Type 2 arguments, you need to be on your guard to see
whether the data cited really provides statistically significant evidence for the correlation.
This is not so difficult to do in a rough and ready way, and by the time evidence gets
from a scientific study to a report in the press it might well include claims not made by
the original researcher, so this is worth watching out for.
The biggest area of concern, though, is usually the (typically unstated) claim that the
sample is probably representative of the population as a whole. As we have seen,
randomness is a good way to ensure this. However, in the real world in which studies
must be done with limited resources and where some members of populations are
typically very difficult to get access to for study, there will nearly always be departures
from randomness in the selection of samples.
The task, then, when evaluating arguments of Types 1 and 2, is to try to figure out
what the departures from randomness were in the particular study under consideration,
and to consider whether these departures are likely to skew the results.
The topic of how to consider the question of when a sample is likely to be biased and
what researchers can do to improve the representativeness of samples is discussed in
Chapter 9 of the textbook.
Advertisers are particularly likely to rely on "research" which makes use of
biased samples to ensure it returns results favorable to the product being marketed. The
case of a cigarette company polling smokers of its own brand of cigarettes, asking them
which brand they thought tasted best, then touting the results as representative of what all
smokers would say is, sad to say, not particularly unusual. The lesson is that you should
be especially careful to scrutinize the methodology of "research" reported in
advertisements.
But not all biased samples are the result of deliberate attempts to mislead. Some
particular cases of biased sampling are both instructive and rather funny.
1. Famously, a poll taken just before the 1936 US Presidential elections predicted a
Republican landslide. The result, though, was a Democratic landslide. The problem
was that the pollster had conducted a telephone poll. At the time, many people in
the US didn't have telephones, and those who did tended to be both wealthier and
Republican. Thus most of the supporters of the Democrats could not possibly have
been selected for the poll. The pollster went out of business.
The lesson here is that if you want to draw conclusions about a population
which is stratified in ways which might be relevant to your concerns, and you can't
perform a genuinely random sampling of the population, you need to do something
to make sure that your non-random method of sampling doesn't pick
disproportionately from one stratum. This is a lesson that pollsters have learned in
the wake of disasters like the 1936 example, and so they now perform "stratified
sampling" to try to get around just this sort of problem.
2. Another example, which is borrowed from Giere's book (op. cit., p. 186), is a result
from the 1993 Janus Report. This was a report based on 2800 responses to a
questionnaire distributed to 4500 people, asking questions about sexual behavior. In
it, 70% of those over the age of 65 reported having sex once a week. The problem
is, to get enough respondents over 65 so that the proportion of people over 65 in the
sample matched that in the overall population, the investigators sought out
respondents in sex therapy clinics. Another study, which used random sampling,
found only 7% of people in this age group reported having sex as often as once a
week!
The lesson is that a biased sample, even if it's a large one, can produce results
which are wildly off the mark. Studies of sexual behaviour are quite likely to have
difficulties of this sort. Many popular "studies" are the product of questionnaires
distributed to the readers of particular magazines (e.g., Playboy, The Redbook;
similar methods were used to distribute the questionnaires for The Hite Report), and
the readers of these magazines can reasonably be thought to have different views on
the subject from those of the population at large. Furthermore, as with all
questionnaires, the respondents are volunteers of a sort. One would therefore expect
them to be the people most interested in the subject in question, and so probably
unrepresentative of the population at large in important ways. Of course, other
possible methods of investigating this subject face other problems. Door-to-door
interviews, for instance, might well do a better job of getting a more random
sample, but since many people are embarrassed when talking about sexual subjects
to another person, the honesty of their responses in face-to-face interviews is open
to doubt.
We will return to related problems with sampling in the next two lectures.
Lecture 14
Eating ice cream causes public nudity! Statistics show that consumption of ice
cream goes up and down in lockstep with arrests for public nudity.
There are actually several things wrong with this little argument, but it's not hard to pick
out the main problem. When the weather gets hot, people are more likely to eat ice
cream, and they are also more apt to remove their clothing (whether just to get more
comfortable, or because there is no need to fear frostbite, as the case may be). The arguer
in this case seems to have made the mistake of confusing a correlation with a cause, and
we can see that this is probably so because there is an obvious common cause of both ice
cream consumption and nudity, namely hot weather.
Once again, the material in this lecture does not conflict with anything that is said in the
textbook, but it does present things which are not contained in the textbook.
When I ask people to explain what is wrong with the above argument, if they don't say
that the person has confused a correlation and a cause, they say something like, "Well, if
you stopped everybody from eating ice cream, that wouldn't do anything to reduce the
rates of public nudity. (In fact, it might increase it, since people wouldn't be able to cool
off by eating ice cream, so they might be more inclined to take off their clothes!)"
This response is actually quite insightful. When someone makes a claim about a
correlation, she is making a claim about the situation in the world—she is claiming that
among the members of this population, a higher percentage of Xs than non-Xs have P.
When someone makes a causal claim, she is not so much making a claim about the world
as it is. Instead, she is making claims about how the world would be if certain changes
were made.
So, if someone claims that eating ice cream causes public nudity, she is, among
other things, claiming that if nobody ate ice cream, then there would be less public
nudity, and if everybody ate ice cream, then there would be more public nudity (provided,
in both cases, that bringing about a situation in which everybody or nobody ate ice cream
didn't also introduce some other change that affected the amount of public nudity. For
instance, if we made it the case that everybody ate ice cream by tying them up and
forcing it down their throats, the fact that they're tied up would prevent them from taking
off their clothes. The causal claims should be understood to rule this sort of thing out. We
register that this is what we mean by saying "if x happened, then y would happen, all
other things being equal.")
Notice that we haven't said the claim that eating ice cream causes public nudity means
that if everybody ate ice cream, then everybody would take his clothes off in public. The
way we have formulated things is really appropriate to describing what someone means
when she claims that something is a positive causal factor for something else. This is
appropriate because this is the way we most commonly speak. We say, after all, that
smoking causes lung cancer, even though not everybody who smokes gets lung cancer.
Similarly, we often speak of one thing preventing another, even though it doesn't do so in
all cases (e.g., regular exercise prevents heart disease, use of contraceptives prevents
unwanted pregnancy). In this case we are speaking of negative causal factors.
Speaking in terms of causal factors instead of causes is useful because it makes clear
that we are allowing that there are other factors which are also relevant to producing/
preventing the effect in question. Indeed, the fact that smoking doesn't always produce lung
cancer leads us to investigate questions like: What's the difference between those people in
whom smoking leads to lung cancer and those in whom it does not? That is, there might be
some other factors which, together with cigarette smoking, do guarantee lung cancer. On
the other hand, there are plenty of people who get lung cancer without ever smoking, and
so we also want to investigate what other sorts of things cause lung cancer.
Sometimes, as the textbook points out, people will insist that if you've only isolated
causal factors for some effect, then you haven't found the cause. Someone might say that
smoking doesn't cause lung cancer, because her aunt Helen smoked until she was 94
without getting cancer. Here the person would be insisting that the cause of lung cancer
must be a sufficient condition for producing its effect. Or someone else might insist that
smoking can't be the cause of lung cancer, because he knows of people who never
smoked but still got lung cancer. Here the person is insisting that the cause has to be a
necessary condition for the effect.
We will not follow this way of speaking. We will treat what the text calls "necessary
causes" and "sufficient causes" as special cases: namely as cases where there are very few
causal factors which need to be taken into account. In a complicated world, the usual
practice of speaking of causes when we mean causal factors seems eminently sensible.
Finally, it is important to bear an important fact in mind about causal factors. It is quite
possible for one thing to be a causal factor for something else even though it is not the
strongest causal factor. As will become clearer when we discuss "controlling for
variables" below, part of investigating causal factors is taking into account other causal
factors. But it is important to understand that it can be the case that regular exercise is a
factor for preventing heart disease even though percentage of body fat, say, is an even
more important factor. If this is the case, then among people with high body fat,
if it were the case that all of them regularly exercised there would be less heart disease
than would be the case if everyone did not regularly exercise, and among those with
lower body fat, if everyone regularly exercised there would be less heart disease than if
nobody did. This is independent of whether there was a lot more heart disease among
those with high body fat than among those with lower body fat.
A crucial difference between correlational and causal claims is that a causal claim makes
claims about hypothetical, rather than actual situations. We saw how we can overcome
the practical difficulties involved in investigating large populations in the actual world in
the last lecture, when we considered what is involved in getting evidence for a correlation
claim.
Obviously, things are somewhat trickier when it comes time to investigate causal
claims. The thing about hypothetical situations is that they are not actual. So while we can
find out about correlations by going out and looking at (a sample of) the actual world, we
can't do this for causal claims, because one cannot simply go out and look at something
which is non-actual! That is, if someone claims that a high fat diet causes, say, breast
cancer, that person is claiming, among other things, that there would be more breast cancer
in a world where everyone ate a high fat diet than a world in which nobody had a high fat
diet, other things being equal. But we can't go out and investigate randomly selected
samples from a population in each of these worlds, because these worlds don't exist!
Thankfully, this doesn't mean that we are powerless to investigate causal claims. While
we cannot go to these other non-existent worlds where, for instance, everybody eats a
high fat diet, we can approximate those worlds, more or less well, here in this world. We
do this by carefully designing and implementing experiments.
The basic idea is this. Suppose we want to test whether one thing C (say a high fat
diet) causes something else E (say, breast cancer) in some population (say, women).
What we need to do is to find or produce two samples from our target population where
the only relevant difference between the two samples is that all the members of one group
X (the experimental group) have C (in this case, they eat a high fat diet), while another
group K (the control group) do not have C. We then check to see whether the
experimental group has a higher rate of E than the control group.
Consider what this sort of experiment tells us. If both X and K are representative of
the population, and if it really is the case that the only relevant difference between X and
K is that X has the proposed causal factor C while K does not, and if there is a statistically
significant difference in the rate of the proposed effect E, then this gives us good reason
to believe that C is a causal factor for E. After all, each of X and K is in all relevant
respects, other than whether it had C or not, just like the population in this world (that's
what it means for there to be no relevant differences), and so it tells us what we could
expect if the world were just like this one except that everyone (or no one in the case of
K) had C. Thus the rate of E in each of these populations tells us what the rate would be
in such a world (subject, of course, to the limits always involved in sampling large
populations that we discussed in the previous lecture).
Notice that we have three conditions here (each of the clauses preceded by an 'if' in the
first sentence of the last paragraph). There are thus, speaking in very general terms, three
different sorts of things that can go wrong in an experiment: (1) X and/or K might not be
representative of the population at large; (2) there might be other relevant differences
between X and K besides whether they have C or not; (3) the observed difference might
not be statistically significant. We will return to discuss (1) and (2) in some detail below.
We briefly saw how you can do a very quick and dirty estimation of whether or not
an observed correlation is statistically significant in the last lecture. The techniques to be
employed in the case of a causal experiment are quite similar. How large were the X and
K groups? What, then, is the approximate margin of error for each? What was the
observed percentage of E in each of X and K? Then we can calculate the range in which
we could expect with 95% likelihood that E would occur in a world where everyone or no
one had C. If these overlap significantly, then the experiment doesn't provide strong
evidence that C is a causal factor for E. If there is no significant overlap, we do have
good evidence (when (1) and (2) are also met).
But in a well done experiment we learn more than this. In an experiment where we can be
quite sure that (1) and (2) are met, we can also get a good estimate of how strong a causal
factor C is for E. If, for instance, the confidence interval (i.e. the interval between the
observed frequency minus the margin of error and the observed frequency plus the
margin of error) for E occurring in X is from 21 to 31%, while for K it is from 4 to 14%,
we know that the difference between the occurrence of E in a world where everyone had
C compared to one where no one had C would be between 7% and 27% (i.e., 21 - 14=7,
and 31 - 4=27).
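That little calculation can be put in a general form; a minimal sketch in Python (mine),
using the intervals from the example above:

    def effect_size_bounds(x_interval, k_interval):
        # bounds on the difference C makes to the rate of E, given the 95%
        # confidence intervals for the X (experimental) and K (control) groups
        x_lo, x_hi = x_interval
        k_lo, k_hi = k_interval
        return (x_lo - k_hi, x_hi - k_lo)

    print(effect_size_bounds((21, 31), (4, 14)))   # (7, 27): 7 to 27 points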
This would allow us to compare this factor for E with other factors which might be
identified in another experiment. If C is a strong causal factor for some important E (for
instance, it is a significant cause of breast cancer), while some other cause of E is less
significant, this can be important information when we try to decide what to do to prevent
E. Should we spend scarce health education dollars on encouraging women to eat low fat
diets, or should that money go somewhere else? If a high fat diet is a more significant
causal factor, that will certainly be one thing we need to take into account in making that
decision. (Of course, it will only be one part of that decision. If we had evidence that
public education measures of that sort didn't work, then that should clearly affect our
decision, too.)
It is very important to notice that saying that C is a causal factor for E does not mean that
C is the only, or even is the most significant causal factor for E. For instance, a well
known study of Harvard alumni seemed to show that regular exercise was a negative
causal factor for coronary heart disease. Of course, it immediately occurs to most people
that the real cause of heart disease might be obesity, and the exercise merely kept some
people's weight under control and so indirectly prevented the heart disease. This of
course also occurred to the researchers, so they compared not just exercisers and non-
exercisers, but also considered the rate of heart disease among exercising overweight
people vs. non-exercising overweight people, and also exercising non-overweight people
vs. non-exercising non-overweight people. In both these comparisons of smaller groups,
it once again turned out that exercisers had less heart disease than non-exercisers. So,
while it might well be true that whether one is obese is a more important causal factor
than exercise, exercise by itself is a negative causal factor for heart disease,
independently of its effect on weight, if this study was well conducted.
What the experimenters did in this case is an example of controlling for variables
(in this case they controlled for the variable of being overweight). This is an important
technique we will return to below.
We will briefly describe three of the most important sorts of experiments you might see
described in press reports of scientific studies.
Randomized Experimental Design. If you think about how random selection could help
set up a causal experiment, the obvious place to look is in ensuring that (1) and (2) above
are met. If we randomly select members of the target population to experiment on, then
we can be quite sure that they do not differ from the population as a whole in any relevant
respects. Then if we divide the experimental subjects into X and K by a random
procedure, we can be quite sure there are no significant differences between them.
Of course, this can only be done if C is something that it is in the experimenter's
control to impose on the X group and prevent in the K group. However, when this is the
case, this approach has very real theoretical advantages over the others we will look at
below. For in this case we can be as sure as is ever practically possible that C really is the
relevant variable behind any observed difference in the frequency of E in the
experimental and control groups.
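To make the shape of such an experiment vivid, here is a toy simulation in Python
(entirely made up: I simply build it in that C raises the chance of E from 10% to 25%, and
the experiment then detects this):

    import random
    random.seed(0)   # so the run is repeatable

    population = [{"has_C": False} for _ in range(10_000)]

    def has_E(subject):
        chance = 0.25 if subject["has_C"] else 0.10   # C raises the rate of E
        return random.random() < chance

    subjects = random.sample(population, 1000)   # random selection
    random.shuffle(subjects)
    X, K = subjects[:500], subjects[500:]        # random division into X and K
    for s in X:
        s["has_C"] = True                        # impose C on the X group only

    rate = lambda g: sum(has_E(s) for s in g) / len(g)
    print(rate(X), rate(K))   # X's rate of E comes out clearly higher than K's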
This sort of experiment is quite commonly performed on, for instance, non-human
animals. While the lab rats or mice or whatever are not actually randomly selected from
the population of rats (they are in fact specially bred for lab use in most cases, because
this actually ensures better uniformity within the experimental populations with respect to
certain characteristics than one would find in the rat population as a whole. This is
appropriate because the real interest of people performing these experiments is not really
with whether C causes E in the population of rats as a whole, but with drawing inferences
from their results to what could be expected in humans. We will have more to say about
this further step of inferring from rat-results, say, to human populations in our discussion
of arguments from analogy in lecture 16.), the division into X and K groups is random.
The lab animals are pretty much completely under the control of the experimenters, so
they can impose C on the X group, and ensure that the K group does not have C. We are
thus in a particularly good position to tell whether C is a causal factor for E or not.
Of course, while the randomization at the start of an animal experiment means the
experimenters are in a good position to be sure that at the start of the experiment there is
no significant difference between X and K, they need to be very careful to ensure that
when they introduce C they don't also introduce some other relevant difference between
X and K! So, for instance, if C involves some surgical procedure, the experimenters will
typically do "mock surgery" on the K group, so they can be sure that it was C that caused
any observed difference in the rate of E and not, say, the anaesthetic used during surgery.
The key thing to look for to spot a randomized experimental design, then, is this: (1) first,
the groups X and K are selected; (2) then, after the selection, C is introduced into one
population and not into the other.
While these experiments are most often conducted on animals (or plants, or inanimate
objects, even), they are sometimes conducted on people. The most famous examples of this
sort are the so-called double blind studies used, for instance, to test the effectiveness of
drugs. There are special complications that come into it when experimenting on people.
In such studies, the experimental subjects are put randomly into the X and K
groups. Then one group, X, gets the experimental drug (C), while the K group does not.
However, just as experimenters needed to perform sham surgery to avoid introducing
relevant differences between the X and K groups of animals in the example above,
experimenters on people need to avoid introducing such differences, too. One well-known
relevant variable is the placebo effect. People are interesting creatures, and one
interesting thing about them is that if they are taking something that they think will help
them, then they will often feel better than if they are not taking it. So the members of K,
while they do not get C, will get some substance already known not to affect the
incidence of E, but which they can't tell from C. The first blind, then, is that the
experimental subjects do not know whether they are taking C or something else.
What is the second blind, and why is it needed? Very often what is being tested for
in medical experiments are things which require expert diagnosis. Suppose it's something
as simple as, say, throat inflammation. Unfortunately, when a doctor looks into
someone's throat it is often not a clear cut matter whether a throat is inflamed or not.
There are borderline cases. And another interesting thing about people is that how they
classify borderline cases can be affected, even if not consciously, by what they believe.
So if a doctor thinks a particular substance is indeed the long sought cure for throat
inflammation, she might well classify borderline cases to confirm this result in spite of
her best efforts to be evenhanded. Careful experimenters therefore ensure that the people
making this sort of diagnostic judgment call don't know which subjects are in X and
which are in K. They, too, are blind in this respect.
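To make the two blinds concrete, here is another small illustrative sketch (the details,
such as who holds the key, are my own stand-ins for the various real-world arrangements):

    import random

    def double_blind_setup(subject_ids):
        # A third party randomly assigns each subject either the drug (C)
        # or an indistinguishable placebo, and keeps the key sealed.
        # Subjects never learn their assignment (first blind), and the
        # doctors who later judge whether E is present see only subject
        # ids, never the key (second blind).
        return {s: random.choice(["drug", "placebo"]) for s in subject_ids}

    # The key is opened only after every diagnosis has been recorded.
    key = double_blind_setup(["s1", "s2", "s3", "s4"])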
So, there are some experiments with randomized experimental designs which are
conducted on people. However, these are rather rare because there are often serious
moral concerns which prevent us from conducting such experiments on people. Some
people have raised questions about whether it is right to test just how toxic, say, dioxins
or tobacco are by subjecting rats to them. Be that as it may, there's no room for
questioning that it's not okay to subject people to potentially lethal substances against
their will or without their knowledge for the sake of experiment. And this is precisely
what needs to be done in experiments with this sort of design. So while such experiments
would give us the highest quality evidence on questions about the effects of such
substances on humans, it is evidence that we could never have available in a morally
decent world.
Suppose, for instance, that we wanted to test whether smoking causes some particular
heart disease. We've just mentioned that it's not permissible to subject people to tobacco,
since we know it has so many other harmful effects. However, it is an unfortunate fact
that many people continue to smoke. Experimenters can use this fact to help them select
the X and K groups. Since C is smoking in this case, they can select a sample from
among the smokers to be X, then select another sample from among the non-smokers to
be K. In this case they needn't impose C on X, since the members of X are imposing it on
themselves, so to speak. They can then sit back and wait some amount of time, then
check the rates of E among the two groups.
The key difference, then, between a prospective experimental design and a randomized
experimental design is that in the second it is the experimenter who decides which
experimental subjects get C (and so belong to X) and which don't get C (and so belong to
K). In a prospective study, the experimental subjects are self-selected, in that it is
determined before the experiment begins whether they are candidates for X or for K. It is
not up to the experimenter whether or not they have C.
(Of course, the term "self-selected" needs to be taken with a grain of salt! If we are
dealing with something such as "exposure to asbestos", say, then that might be
determined by where someone lives, which is less under the person's control than is
whether or not they smoke.)
The main difficulty with such experiments, which should not be a surprise at this point, is
that the benefits of randomness are missing. In particular, since the experimental subjects
select themselves into the C and non-C groups, we have no assurance that there are not
other relevant factors, besides having C or not, which most members of one group share
and the other do not. Smokers might, for instance, be less likely to exercise than non-
smokers.
A good experimenter must therefore try to approximate what random selection
would have provided automatically. In particular, she must do what she can to make sure
that observed differences in the rate of E are the result of the presence or absence of C,
and not of some other factor. The key technique for doing so is called controlling for
variables.
We described a case where a group of experimenters "controlled for the variable of body
weight" above. The basic idea is simple. Rather than simply comparing the rate of E in X
and K, we must compare the rate of E in selected subgroups of K and X as well. So, if we
think that exercise is a possibly relevant variable for heart disease, and our goal is to find
out whether smoking is a factor for heart disease, we can't merely compare a smoking
group with a non-smoking group. We also need to compare the rate of heart disease
among: exercisers who smoke vs. exercisers who do not, and non-exercisers who smoke
vs. non-exercisers who do not. If smoking is a causal factor for heart disease, we should
see higher rates of heart disease in both these cases among the smokers than among the
non-smokers. If this were indeed observed, that would be good evidence that at least part
of the observed higher rate of heart disease among smokers than among non-smokers is
due to the smoking, or at least that it does not simply arise because more smokers are
non-exercisers.
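As a toy illustration of what "controlling for exercise" amounts to computationally,
suppose (hypothetically) each subject has been recorded as smoking or not, exercising or
not, and having E or not. Then instead of one comparison we make one comparison per
exercise stratum:

    def rate_of_E(group):
        # Fraction of the group that has E (here, heart disease).
        return sum(1 for p in group if p["E"]) / len(group)

    def compare_within_strata(people):
        # Compare smokers with non-smokers separately among exercisers
        # and among non-exercisers, instead of only in the whole sample.
        # (Assumes each of the four subgroups is non-empty.)
        for exercises in (True, False):
            stratum = [p for p in people if p["exercises"] == exercises]
            smokers = [p for p in stratum if p["smokes"]]
            non_smokers = [p for p in stratum if not p["smokes"]]
            print(exercises, rate_of_E(smokers), rate_of_E(non_smokers))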
Consider another example. Suppose a study found that the acceptance rate for women
applying to Diddly University was considerably lower than the rate for men, and that this
was taken as evidence that being a woman was a negative causal factor for acceptance to
Diddly. In such a study the X group are the female applicants, the K group are the male
applicants, and the groups are "self-selected" (note the artificiality of that terminology
in this case).
Now, someone might suggest that the people who conducted the original study
failed to take into account a relevant variable, namely the faculty applied to. Suppose
there are only two faculties at Diddly, namely Arts and Sciences. Now suppose that a far
higher percentage of women than men apply to Arts, and that the acceptance rate in Arts
is much lower than in Sciences. Then we might get the following results: the acceptance
rate for men who apply to Arts is 40%, just as it is for women, and the acceptance rate for
men in Sciences is 75%, just as it is for women. If this happened, it would show that it
was a mistake to think that being a woman was a negative causal factor for acceptance to
Diddly. The relevant causal factor would appear to be applying to Arts, and the lesson is
that if you want to go to Diddly your odds are much better if you apply to Sciences,
whether you are male or female.
(Of course, this result would raise interesting further avenues of investigation.
Obviously applying to Arts doesn't make one likely to be female, but is being female a
relevant causal factor for applying to Arts? (We might want to find out, for instance,
whether high school counselors direct their female students one way and their male
students another.) Does the fact that more women apply to Arts figure in the explanation for why
the acceptance rate is higher in Sciences than in Arts? But the present point is that when
the variable of what faculty is applied to is taken into account we see that the original
evidence did not support the hypothesis that being a woman was a negative causal factor
for being admitted to Diddly.)
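The arithmetic here is worth working through once, since the result surprises many
people. The lecture fixes only the per-faculty rates (40% and 75%, the same for both
sexes); the application numbers below are my own illustrative choices:

    # faculty: (women applying, men applying, acceptance rate for everyone)
    faculties = {
        "Arts":     (800, 200, 0.40),
        "Sciences": (200, 800, 0.75),
    }

    def overall_rate(index):  # index 0 = women, index 1 = men
        admitted = total = 0
        for counts in faculties.values():
            admitted += counts[index] * counts[2]
            total += counts[index]
        return admitted / total

    print(overall_rate(0))  # women: (320 + 150) / 1000 = 0.47
    print(overall_rate(1))  # men:   (80 + 600) / 1000  = 0.68

Each faculty treats the sexes identically, yet the overall rate for women comes out far
lower, simply because more women apply to the faculty that is harder to get into. This is
why merely comparing the overall rates in X and K can mislead.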
The key points for spotting a prospective study, then, are that the experimental subjects
are "self-selected" into the C and not-C groups. The experimenter then samples each
group, then (often much later, say years or decades later in the case of experiments on the
effects of smoking) determines the incidence of E in each group.
Retrospective Studies:
There are moral problems with doing randomized experiments on people. Prospective
studies can take a very long time, and are often very expensive. Researchers therefore
will often do a different sort of experiment which is easier and cheaper, but which also
produces results which are much less useful.
The key difference between a retrospective study and a prospective study is in how the
candidates are divided into groups. In a prospective study the experimenters divide
subjects into groups on the basis of whether they have C or not, and then go on to find
the rates of E in each; in a retrospective study the X and K groups are selected quite
differently.
In this sort of study, the X group is selected from the part of the population who have
E. The control group is then "constructed" from the part of the population which does not
have E. We then "look back" to see what proportion of each of these groups had C. If more
of the X group had C, then we have some evidence that C is a causal factor for E.
For example, suppose we thought that it might well be the case that eating eggs was
a positive causal factor for baldness in men, but nobody would give us a grant large
enough to do a prospective study. If we wanted to do a retrospective study, we would
begin with a sample of bald men. This would be our X-group. The next step would be to
"construct" the control group. This is typically done by matching for relevant variables.
That is, when we select the control group, we can't simply select anybody at all from
among the non-bald men. Instead, we try to produce a group who, in all the respects
likely to be relevant which we can practically manage, match up with the members of our
X-group.
The reason we need to do this is that we have an X-group which is very likely to be
biased in important ways. Randomness in a randomized experiment was our way of
ensuring that it was very unlikely that there was a relevant difference between the X and
K groups. Now we will need to use our own estimation of what is relevant and what not
and do what we can to bring about through our own labor what randomness provides
automatically.
In the present example, we'll probably want to make sure that the members of the K
group match up with the X group for age. Perhaps, since eggs are widely reputed to be
implicated in heart problems and older men are more likely to be worried about heart
problems than younger men, if one group were significantly older than the other on
average we might expect the older group to eat fewer eggs for that reason. Perhaps we
would also want to control for income (eggs are a relatively cheap source of protein), and
several others. The goal, to reiterate, is to create a K group which is as much like the X
group as possible, with respect to variables we think likely to be relevant, except that the
members of K do not have E, i.e., they are not bald.
Once the control group is constructed, we compare the rate of C (i.e., egg
consumption) in each group. If it is higher in X than in K, we might have evidence for the
hypothesis that eating eggs causes baldness in men.
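"Constructing" a control group by matching can itself be pictured as a simple procedure.
The sketch below does naive nearest-neighbour matching on age and income; it is only an
illustration of the idea (real studies standardize the variables, match on many more of
them, and worry about cases for which no good match exists):

    def match_controls(cases, candidates, keys=("age", "income")):
        # For each case (a dict of traits), pick the as-yet-unused
        # candidate who is closest on the matching variables.
        # (Assumes there are at least as many candidates as cases.)
        def distance(a, b):
            return sum(abs(a[k] - b[k]) for k in keys)
        pool = list(candidates)
        controls = []
        for case in cases:
            best = min(pool, key=lambda c: distance(case, c))
            pool.remove(best)
            controls.append(best)
        return controls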
Of course, there is no real possibility of perfect matching when constructing a
control group, so just as in a prospective study a sensible experimenter doing a
retrospective study will carefully look at her data and control for the various other
variables she was unable to match for.
Retrospective studies provide the weakest evidence of the three sorts of experiments we
have considered. The main problem is that even if they are done very well, a
retrospective study controls for only variables that are already known or thought by the
experimenters to be relevant to the case at hand. A genuinely randomized experiment, on
the other hand, controls for all variables (except perhaps those introduced by the
experimental procedure itself). Prospective designs fall somewhere in the middle. A
retrospective study can provide good enough evidence to warrant spending the resources
on a more expensive and time consuming prospective study, perhaps, but it should not be
regarded in itself as definitive.
When you encounter a report about a study which is supposed to establish a causal
relationship on the basis of an experiment involving sampling, it is a good idea to go
through the following steps.
1. Determine the key parts of the study. What are the relevant variables (i.e., what are
C and E)? What is the population from which X and K are drawn? What sort of
experimental design was it? (Most of the time these reports will provide some
information about how the study was conducted.)
2. If the experiment had a randomized design, you will want to consider the following
things. First, if the experiment was conducted on non-human animals, remember
that there is a further argument required before conclusions can be drawn about
whether C causes E in humans (as we'll discuss in Lecture 16). Secondly, is it
reasonable to think the experimenters were careful about ensuring randomness
when dividing subjects into the X and K groups? Finally, does it look like the
process for introducing C into the X group but not into the K group was likely to
introduce other relevant differences between the groups? (E.g., in the case of a
study on humans, was it a proper double blind study?)
3. If the study was prospective or retrospective, can you think of other relevant
variables which might explain away the results observed in the study? Does it look
like the experimenters controlled for such variables?
4. If the study is retrospective, remember that the support it can provide for a causal
claim is provisional at best. Also, in addition to considering whether the
experimenters controlled for variables, you will want to know whether they have
done a sufficient job of matching the control group to the experimental group.
Lecture 15
In this lecture we will consider some problems which, while in many cases not
exclusively something we need to be on guard about in considering reports from the
"social" or "human" sciences, are particularly likely to be of concern in these cases.
Some of these problems are a result of the subject matter studied in these sciences.
While there are enormous practical problems involved in figuring out the behaviour of
strands of DNA since they are so small, they at least don't change their behaviour when
they know they are being studied! Moreover, studying people and the societies in which
they live means studying things for which it is very hard to develop a precise and
unambiguous vocabulary which nonetheless refers to those features of the thing we care
about.
Other problems arise because it is the results of the social and human sciences
which are often most important for determining what would be a sensible political
decision in a particular context, or what would be a reasonable personal economic
decision. So people often have significant economic or political interests in convincing
you of something, and some of them don't scruple about "massaging" their research so
that it seems to give results that support their interests.
Operationalization: Social sciences are concerned with the features of human existence
that we are all concerned with. However, they typically want to investigate these matters
quantitatively, and this is something that it is not always straightforward to do. The
problem is that the properties of humans and societies that we care about, and that we
have words for, are not things that are straightforwardly measurable. As a result, the
social scientist must replace the non-measurable everyday concept by some measurable
and precise analogue for the purposes of the study. This process is called
"operationalization".
A standard example is "intelligence", which gets operationalized as I.Q., that is, as the
score on a certain kind of standardized test. The point that needs to be noticed, though,
is that I.Q. is going proxy for an ordinary term, "intelligence", that has more layers of
meaning and is less precise than the operationalized term. This needn't be problematic if
we keep straight just what is being
studied.
However, misleading claims can result from the way in which the researcher
chooses to operationalize her terms. For example, in the ordinary sense, we think of
various kinds of competence as manifestations of "intelligence", including "coping
ability" and competence in dealing with abstractions. And there is nothing about the
ordinary concept of intelligence that provides an answer to such questions as this one:
Which of these two manifestations of intelligence is more important? Nor does the
ordinary notion of intelligence that we have suggest any way of amalgamating levels of
performance with respect to the components of intelligence. And yet when "intelligence"
is operationalized as I.Q., decisions are made concerning these matters: coping ability is
ignored and intelligence is represented by a single number. (We all know about the
apocryphal philosophy professor who was brilliant when it came to thinking deep and
abstract thoughts, but who couldn't work the blender at home without seriously injuring
himself. How intelligent is that, using our ordinary sense of the word "intelligence"?)
Problems can arise because, typically, the problem investigated in a study in the
social sciences is stated by use of the ordinary, not the operationalized term. Thus one
wonders whether a certain segment of the population (let's say, Judo players, since Judo
is my favourite sport) is "more intelligent" than the norm. Then, to answer the question,
the term is operationalized: I.Q. tests are administered to a sample drawn from the
segment of the population that plays Judo. Suppose the scores come back below average.
Finally, the conclusion reached by employing the operationalized term is expressed by
use of the ordinary term: one claims that Judo players are "less intelligent" than the
norm.
The obvious question to ask is whether the term as operationalized actually
measured what one was originally wondering about when one wondered how the
"intelligence" of that segment of the population compared with the norm. One can
imagine a defender of Judo players pointing out any number of other ways of
investigating this question that would have resulted in quite a different result. For
instance, taking the earlier mentioned "coping ability" part of the usual concept of
intelligence, it might well be that Judo players have various coping skills (quick thinking,
say ... you'd better be able to think fast in Judo, or you end up falling on your head!)
which are far better than the norm.
The moral for a consumer of social science research is to try to identify the
operationalized terms and consider whether the way they have been operationalized is
sufficiently faithful to the normal sense of those terms so that the research results are
significant for concerns expressed by use of those terms in their normal sense. If not, but
a claim is made using the normal sense of the word, the result can be an instance of the
fallacy of "persuasive definition."
It is important to note the ways in which values and interpretations enter social science
research. It is a mistake to think that such research just reports "hard facts." Some
philosophers argue that this is not a genuine difference in kind between the social
sciences and the natural sciences, but even they would admit that there is a marked
difference in the degree to which these factors intrude into social science research.
Let's consider a few of the ways values and interpretations enter into social science
research.
1. The issue studied and the way of studying it may reflect values held by funding
agencies, and certain issues and ways of proceeding may be screened out because
they run counter to the values of funding agencies.
In some cases this reflects the fact that some people have an economic or
political interest in the results of this sort of research, and so they will fund research
likely to turn up results which further those interests. But that's not the only
possibility.
2. The scientist herself may have personal values that lead her to define an issue in a
special way, or to look above all for certain facts and not to look for others. This is
more problematic in the case of social sciences than in the case of the natural
sciences because the matters under investigation are particularly likely to be ones
that people will have deep feelings about. Some areas might be regarded as not
suitable for research, for instance, because of a researcher's ethical views.
3. The scientist (or the community of scientists to which she belongs) may have
certain theoretical commitments that dictate identifying the issue under
investigation in a certain way, that lead to picking out certain items as the especially
significant facts to attend to, and that shape the interpretation or identification of
those facts. If, for instance, a researcher has spent 35 years building a reputation by
conducting research from a "behaviourist" perspective, she is unlikely to conduct
experiments which move a long way from that perspective at the end of her career!
The conclusion to draw from these observations isn't that social science research is
inevitably biased or prejudiced. It is that there is a human and interested element in such
research. It is misleading to think that the reports of such research give us nothing but
hard facts.
We saw in Lectures 13 and 14 that there are always things you need to consider
carefully when someone is making claims based on samples. There are other problems
that need to be borne in mind when considering the results of studies involving
questionnaires, polls or interviews.
It is obviously important that the questions asked be properly framed. If the questions are
ambiguous, then some respondents may interpret them one way, some another. If you
have two different groups of people answering "yes" to some question, but for completely
different reasons, any conclusions you draw on the basis of the question will be
completely unreliable. (True or false: You can't keep too close an eye on politicians! 25%
say true because they think you mean that politicians can't be trusted. Another 10% say
true because they think all the recent conflict of interest requirements and so on keep
good people out of public life. But your survey only tells you that 35% said true. Of what
use is that number?)
Another serious problem is that the questions might be framed so as to elicit a certain
kind of response, one different from the one that would have been elicited had the
questions been differently framed. Notoriously, if you ask people whether they want a
balanced budget they will say yes, but if you ask whether they support cuts to any
particular program they will say no. Similarly, it is easy to influence the responses to a
"who would you vote for" question by placing the question before or after other questions
which, for instance, highlight the accomplishments or embarrassments of a sitting
government.
The moral for the consumer of social science research is that often to know whether to
accept claims based on questionnaires, polls, or interviews we need to know how the
questions that were asked were framed, and we need to think about the possibility that the
results were skewed by the way in which they were framed.
Really, on this topic the advice could be summarized quite simply by saying that
one needs to be aware that there are often interests behind social science research, and so
you need to be vigilant in looking for the problems we have pointed to in other lectures!
But it is worthwhile to point to some of the characteristic kinds of sleight of hand people
might use.
A colleague of mine, Rolf George, jokes that the main fallacy students need to be warned
about is the "fallacy of lying". While a lot of the cases of misleading use of social science
results might not qualify as lies, strictly speaking, there is often some deliberate
deception going on when advertisers, politicians, or "think tanks" with a particular
political agenda are at work!
Persuasive definition: In a way, this can be looked at in the social sciences context as the
deliberate use of poor operationalization of terms. Here is a good example, again due to
Rolf George. People sometimes argue against public funding of university education
because it amounts to a "subsidy" from poor families to the children of the rich and
middle class. The reasoning is that poor people are paying taxes, but it is
disproportionately the children of rich and middle class families who go to university.
However, to show that this amounts to a "subsidy", these people need to use that word to
mean any case where, in the very short term, group X is out of pocket and group Y gains
a benefit. But this has nothing to do with the real meaning of the word "subsidy", since
by that reasoning the bank gave me a "subsidy" when they gave me a mortgage, even
though I am repaying it with interest. And similarly, people who get university degrees,
as a group, pay far more back into the tax coffers because of taxes on their higher
incomes than taxpayers put into their education. So this looks rather more like an
investment by taxpayers than a subsidy.
The lesson here is to be on the lookout for misleading use of words like "subsidy"
or "handout" which carry a negative emotional load, and watch carefully for whether the
way they are operationalized fits that negative meaning.
Careful selection of facts: A Canadian Prime Minister notoriously spent several months
trotting out the following "factoid": In a certain period of time, the Canadian economy
created more jobs than the economies of Britain, France and Germany combined.
This sounds very impressive, and taken literally it was true. However, the following
facts were also true: The British economy created more jobs than the Canadian economy
in that period of time, and so did the French economy. The reason all these could be true
was that the German economy at the time (shortly after the reunification of East and West
Germany) had in fact lost many jobs. It was only this large negative number which, when
added to the other two numbers, made the Prime Minister's statement true.
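A made-up set of numbers with the right shape shows how all these claims can be true at
once (the actual figures were of course different):

    # Jobs created, in thousands; invented purely for illustration.
    canada, britain, france, germany = 300, 400, 350, -600

    assert canada > britain + france + germany  # 300 > 150: the factoid holds
    assert britain > canada                     # ...and yet Britain beat Canada
    assert france > canada                      # ...and so did France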
The lesson is to look very carefully at exactly what is said, and consider why the
person in question would have put things in just that way. Often thinking about this will
let you see that they are probably hiding some other fact which is less flattering.
"Facts" sometimes acquire a life of their own, even where there is no evidence for them,
or where that evidence is debunked. Have you heard about the 22 (Or was it 17? Or was
it over 40? Or was it 200?) different words the "Eskimos" have for snow? What about the
study which showed that a woman who wasn't married by the time she was 30 had a
better chance of being hit by lightning than getting married? Both have been shown to be
rubbish. (The first is nowadays sometimes referred to as "The Great Eskimo Vocabulary
Hoax"; the second is debunked in Susan Faludi's Backlash: The Undeclared War Against
American Women (New York: Crown, 1991).) Unfortunately, it is much easier for these
things to gain a great deal of currency and continue to be reported as fact than it is to
convince everybody who has heard them from several different sources to stop believing
them! Both these examples were widely discussed (including many attempts to "explain"
the purported fact) in academic circles and in the popular press, so they have a status as a
sort of mistaken "common knowledge". (Other famous examples: Sweden's supposedly
very high suicide rate, and the inventor of the flush toilet being a man named Crapper.
The latter was reported to my Grade 7 class as a fact by our teacher.)
Lecture 16
Arguments by Analogy
In this lecture we turn to another important sort of argument, namely argument by analogy. I
trust that you already know what an analogy is, and that you have run into uses of analogies
for various purposes such as their literary uses as a device for giving vivid descriptions, or
their scientific use for giving explanations ("think of the atom as like a mini-solar system,
with the nucleus at the center like the sun, and the electrons in orbit around it like planets").
What we want to get clear here is how analogies figure in arguments.
There is, first of all, the subject that we (or, at least, that the arguer) really cares about,
but which is either controversial or has features that are not known. We will call this the
primary subject. It is primary in the sense of being the one we are primarily concerned
with, i.e., it's the one we care about. (It needn't be the one that is primary in the sense of
having the most space devoted to discussing it directly!)
Secondly, there is a subject that we compare the primary subject to. This subject is better
known or clearer, or in some way is better understood than the primary subject, at least in
the estimation of the arguer. (If it were not, there would be little point in using this
comparison.)
This, you will not be surprised to find out, is called the analogue.
The schema of an argument by analogy, then, looks roughly like this:
1. The analogue has features X1, X2, ..., Xn.
2. The primary subject has features X1, X2, ..., Xn.
3. The analogue also has the feature Z.
4. The features X1, X2, ..., Xn are relevant to having Z, and anything which has
them is likely to have Z.
Conclusion: The primary subject, too, can also be expected to be (or to have) Z.
Remarks:
1. For the second type of argument by analogy, what I shall call "consistency
analogies", we will want to replace "likely to have" and "can also be expected to
be" by "should have" and "should be", but otherwise the structure will be the same.
2. Don't worry too much about the Xs in these premises. They are only a device to
indicate that while there needs to be some list of features that the analogue and the
primary subject have in common if there is to be an argument by analogy, there can
be any number.
Another way to present the structure of an argument by analogy would be to draw a chart
like the one below (sketched here schematically). We should understand the chart itself
as expressing premise 4.

                    Analogue        Primary subject
    Feature X1        yes               yes
    Feature X2        yes               yes
    ...               ...               ...
    Feature Z         yes               to be inferred
Obviously, an argument of this form can fail for any of the usual reasons.
If it is false that the analogue and the primary subject share the features cited by the
arguer, then at least one of premises 1 and 2 will be false. (It's a good idea to be aware of
the possibility of equivocation in making this evaluation. If someone begins an argument
by saying "My spouse is like a loaded pistol when she comes home drunk ..." you are
going to want to be pretty suspicious of the claim that one thing the analogue and the
primary subject have in common is that they're both "loaded"!)
Most bad analogies, though, fail because of a failure of premise 4. Often there are
relevant differences between the analogue and the primary subject which the arguer has
failed to take into account. Sometimes, the similarities listed, while relevant, are simply
not sufficient to give us reason to suspect that the primary subject, too, should have Z.
And finally, sometimes the listed similarities are not even relevant to the question of
whether one should think the primary subject would have Z, too.
Inductive Analogies
In the lecture about causal reasoning, it was mentioned that whenever someone conducts
an experiment on non-human animals, even if that experiment provides very strong
evidence that, for instance, a particular substance has harmful effects on those animals,
there is always a further argument that is required before we can conclude anything about
the likely effects of that substance on humans. That "extra argument" is, in fact, an
argument by analogy.
What we really care about, the reason these experiments are conducted, is that we want to
know what the effects of these substances will be on humans. Thus, humans are the
primary subject of the argument by analogy. Since there are moral problems with
experimenting on humans, these experiments are carried out on other animals (on the
assumption that the moral worries are smaller in that case).
Suppose the experiment in question was one which showed that a particular substance
was a positive causal factor for tumors in the reproductive organs of rats. Then the
relevant similarities one could list would have to do with the similarities of the
reproductive systems of rats and humans. The Z, the characteristic of rats that we are to
infer is also in place in humans, would be something like "high dosages of substance S
are a positive causal factor for reproductive tumors in humans."
Now, animal studies are often attacked for a couple of different reasons. It's not normally
the case that anyone seriously questions whether a well done animal study shows that, for
instance, high doses of S cause cancers in rats. Instead, there are two main attacks. The
first is that there is some problem caused by the fact that very high doses of S are
typically used in these experiments. The second is to argue that, "Hey, rats are not
humans after all, and so the fact that something causes cancer in rats is not really grounds
for worry." It's worth looking at both these attacks.
Let's consider the second sort of response first. Obviously the attack in this case is saying
that premise 4 is not met. And obviously rats and humans are very different sorts of
creatures. Why are animal studies taken so seriously, then, when public policy is decided
about whether substances should be on the market? Part of the reason is this: every
substance known to cause cancer in humans also causes cancer in rats, and
several substances first discovered to cause cancer in rats were later found to also cause
cancer in humans. In other words, we have inductive support, i.e., scientific evidence,
that the similarities we can list between rats and humans are in fact the ones which are
relevant to the question of whether or not substances will cause cancer. So unless we can
actually point to some reason to think that substance S is relevantly different from the
substances known to cause cancer in humans, simply objecting that rats are different from
humans doesn't amount to pointing to any relevant difference for the purposes of this
argument.
As for the other sort of objection, it is quite right that the correct conclusion of our
argument by analogy is that substance S in large dosages is probably a positive causal
factor for cancer in humans. The problem, then, is not one with the argument itself.
Rather, what is being objected to is the move from the conclusion of the argument by
analogy to the further conclusion that substance S would also be a causal factor for
cancers if it were taken at lower dosages.
This is an important argument to consider. After all, the human dosage that
corresponds to the amount of a substance fed to rats in these experiments would often be,
say, 800 bottles of pop a day. There are obvious practical reasons for not conducting
experiments on rats using 1/400th of the dose given in these experiments — for instance,
one would expect the rate of cancers caused to be lower with these dosages, and so
experimenters would need to conduct experiments with thousands of rats to get
statistically significant results; instead, they opt for higher dosages and fewer rats.
The justification for this strategy is that the number of cases of cancer caused by a
substance will be roughly proportional to the dosage. Doubling the dose should double
the number of cancers, so if the rats are getting 400 times the typical dose of substance S,
then one can calculate what rate of cancers could be expected in rats who received only
a normal dosage by dividing by 400. And similarly in humans. The current objection
is really to this assumption of proportionality.
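With invented numbers (nothing in the lecture fixes them), the proportionality
assumption works like this:

    rats = 100
    rate_at_400x = 0.20               # so 100 * 0.20 = 20 tumours observed
    rate_at_1x = rate_at_400x / 400   # = 0.0005 under proportionality

    # To expect the same 20 cases at a normal (1x) dose, one would need:
    rats_needed = (rats * rate_at_400x) / rate_at_1x   # = 40,000 rats

This also makes vivid the practical point mentioned above: detecting the effect directly
at normal dosages would require an enormous number of rats.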
Once again, it is empirical evidence that answers this objection: for a variety of
substances there is considerable evidence that the proportionality assumption is
justified. So, in the absence of some positive reason to think that substance S
is different from these other substances in this respect, this present objection isn't a good
one, either.
(As a final aside, it's probably worth noting that the claim that "everything gives
you cancer" and that "anything will give you cancer if you take it in high enough
dosages" are simply not true. Many substances have been fed to rats in very high dosages,
and only relatively few have been shown to cause cancer.)
Of course, not all arguments which make use of inductive analogies have to do with
experiments on animals. For instance, if someone were to argue that it made no sense to
ban ownership of handguns because during prohibition the banning of booze didn't
significantly reduce drinking, that would be an argument by inductive analogy.
The missing conclusion is that banning ownership of firearms would not
significantly reduce murders and robberies. Firearms is the primary subject. Booze is the
analogue. The similarity is that booze was banned, while firearms might be banned. The
to-be-inferred characteristic (i.e., Z in the above schemas) is that banning booze did not
significantly reduce drinking.
How good is this argument? Note that the argument doesn't mention any resemblances
between booze and firearms, other than being banned, to sustain the analogy. No doubt
one could think of others. But even if they were supplied the same critical question would
apply: Are firearms and booze sufficiently alike to warrant the prediction that banning
firearms wouldn't significantly reduce murders and robberies (use of firearms) on the
evidence that banning booze doesn't significantly reduce consumption of booze ("use" of
booze)? That is, is premise 4 satisfied?
Probably not, I would say. There are important differences between banning booze
and the conditions under which firearms would be banned. During prohibition, people
were able to make their own alcohol, or to get it from friends who made it (for example,
in their bath tubs). By contrast, guns are not so easily made in your basement workshop.
Also, booze is addictive whereas possessing firearms is not. The relevance of this is that
addicts have an extra incentive to circumvent a law that would make the substance they
are addicted to inaccessible. Finally, for many people the desire for a firearm is a
consequence of the belief that others have them; one wants a firearm for protection from
those others. By contrast, the desire for booze isn't to anything like that extent a function
of others' possession of booze.
It's probably also worth noting that it's not obvious that the premise which claims
that banning alcohol didn't significantly affect consumption is acceptable. Since drinking
at the time was illegal, official statistics about consumption are of course unreliable, but
from the well-established fact that many people continued to drink during prohibition it
doesn't follow that consumption didn't go down considerably.
Consistency Analogies
The second type of argument by analogy is one which applies in areas where it is
important that similar cases be treated similarly. This is particularly important when
dealing with questions of morality, legal questions, and other cases where precedent is
important.
There is no real structural difference between this sort of analogy and an inductive
analogy. Once again, the arguer cites similarities between the primary subject and the
analogue, at least implicitly claims that these similarities are both relevant and sufficient
to justify the insistence that the cases should be regarded as alike with respect to the to-
be-inferred characteristic Z, and suggests that the analogue indeed has Z.
"He got a bigger piece of pie than me! That's not fair! I'm your son, too."
The unstated conclusion, of course, is that my mother ought to have given the speaker a
piece of pie no smaller than the one she gave her other child. The appeal is to consistency
and the suggestion is that in the circumstances inconsistency is unjust. (As an aside, the
problem with this argument as it occurred in our family was usually with the truth of the
claimed premise that one child actually got a significantly bigger piece of pie than
another, but that's not the present point!)
What is worth noticing is that this argument has to do with what is fair or right, rather
than being an argument about what is the case, or what would be the case (if a certain
substance were consumed by more people, e.g.). This is a characteristic difference
between consistency analogies and inductive analogies.
Along the way we've already indicated some of the sorts of things that can go wrong with
an argument by analogy. Such an argument has, according to our scheme, four premises
and a conclusion. Any of the premises can be false, but it's usually number 4 that's the
culprit in a bad argument of this sort. But we can say a little bit more about common
problems by discussing some common fallacies in this sort of reasoning.
Our first fallacy can occur in either the case of an inductive or a consistency analogy.
Faulty analogy: This label really doesn't apply to a "fallacy" in the sense of the word we
have been using. It's not really a seemingly plausible pattern of reasoning at all. Instead, it's
a name for arguments which are obviously attempts at arguments by analogy, but the
analogy is an extremely loose one that is (at least pretty close to) irrelevant to the to-be-
inferred characteristic. Unfortunately, arguments by analogy seem to invite people to
construct wildly fanciful arguments, so this label applies more often than you might expect.
The remaining fallacies apply especially to consistency analogies. Notice the following
feature of consistency arguments. If the analogy is a good one, what it establishes is that
the analogue and the primary subject need to be treated in the same way with respect to
the to-be-inferred characteristic Z, on pain of inconsistency. And usually it's pretty clear
how Z should be treated in the analogue case. But notice that it remains an option for
someone to say, "Fine, you have shown that we need to treat these cases in the same way.
But the lesson we should draw is that we have been thinking of the analogue case in the
wrong way all along. To be consistent we will need to treat it differently in the future".
This response, too, will restore consistency.
Two Wrongs: The problem with an argument which commits the two wrongs fallacy is
that, while it is supposed to be an argument for consistency, it ends up advocating that we
treat the analogue and primary subject inconsistently. When people commit this fallacy,
they argue that one thing is not (so) bad, because, for instance, nobody complained about
something else which was worse. But in such a case they are presuming that the analogue
is indeed bad. If the cases are indeed alike, we must either say the analogue was not bad
(in which case why should anyone have complained about it?), or that the primary subject
is bad (and so the person they are directing their argument against would presumably say,
"But that's what I've been saying all along!").
Slippery Precedent: The flaw here is a sort of converse to the flaw with two wrongs
arguments. This fallacy occurs when someone says, "Well, I should do X for this person,
because her claim is justified, but if I do it will set a precedent so that many others
without justified claims will be pounding down my door expecting me to do X, too!"
Here the flaw is clearly this: precedent requires treating like cases alike. But if we
say the first person has a justified claim and the others who will be banging down the
door do not, then that is precisely to say that these cases are relevantly dissimilar. There
is thus no obligation to treat them similarly. While two wrongs advocates treating
relevantly similar cases in dissimilar ways, slippery precedent tries to argue for an
obligation to treat relevantly dissimilar cases in similar ways.
You will notice that we have already dealt with what the textbook calls the fallacy of
slippery assimilation in a different way in our discussion of formal logic. You should
think of problems involving vague terms in whichever way you find easier to understand.
Lecture 17
In this final lecture, we have a few remaining points to cover. First, we have one more
sort of non-deductive argument to consider. Secondly, we will have a bit more to say
about counterconsiderations. Finally, I will make a few remarks about what I hope doing
the work involved in this course will have given you.
Conductive Arguments
Consider what happens when you have to make an important decision, and there is some
time pressure. For instance, your phone rings. It's the chair of the board of directors for a
big bank. Their profits have been sliding, so they want to bring you on board as the CEO
for $2 million per year. You have one hour to decide whether to take the job.
What do you do? Well, all that money would certainly be nice, as would the
prestige. And hey, some new clothes, too, would be great, considering the holes in the
elbows of your shirt! But what about the long hours? They don't give these people a
couple million bucks for not working, after all. And what would all your friends think,
considering that you have spent a lot of time lately complaining about the obscene profits
the banks are making, how poorly they treat small business, and so on. ...
Well, you do your best to weigh up the considerations, pro and con, for taking the
job. When the phone rings again, you give your answer. But you know from past
experience that whatever answer you give, there is always the possibility that right after
you have sealed your fate by giving your answer (and, say, calling up to quit your present
job) that you might well think of something else which makes you say, "If only I had
thought of that before, I would surely have opted to keep my present job instead!"
This little story illustrates many of the features of the kind of reasoning that is typical when
we are faced with the challenge of deciding what to do, whether it is a question to do with a
personal decision like the one above, or it is a question of appropriate public policy, or
whatever. Indeed, this sort of argument shows up quite frequently in day-to-day living
where decisive evidence is for one reason or another unavailable. When someone uses the
sort of reasoning involved here to try to persuade someone (even herself) of something, it is
often called a conductive argument, or a balance of considerations argument.
Of course, as the name "balance of considerations" suggests, this is only part of the
story. Very often a conductive argument also includes a variety of counterconsiderations,
that is, a number of reasons which suggest that the conclusion is false. Nonetheless, the
fact that the argument is designed to show that the conclusion is true shows that the
arguer judges that the pro-considerations "outweigh" the counterconsiderations, and so
judges that on balance sufficient reasons are provided for accepting the conclusion.
In the imagined case of your job offer from the bank, we can reconstruct "your" pattern of
thought as though you had presented yourself with a conductive argument. In favor of
taking the job you might cite the extra income, the prestige, the fact that you'd always
wanted to have a house that needed a burglar alarm, and whatever else you regarded as an
important reason in favor of taking the job, while the counterconsiderations included
embarrassment in front of your friends for having to take back your earlier disparaging
remarks about banks, and so on. The fact that you took the job shows that you judged the
pro-considerations to outweigh the counterconsiderations.
But suppose that more than just your own judgment is in question. In particular, suppose
somebody else has presented you with a conductive argument. Then you are in a position
where you need to decide whether the argument presented by that person is good reason
for you to be convinced. (Perhaps now it is your spouse who has been offered the job,
and she or he is trying to convince you that you should be happy that you're moving to
Toronto so she or he can be a bank president.)
In evaluating conductive arguments we must first ascertain whether the separate premises
are acceptable and relevant. That is, each premise or independent line of reasoning states
a reason that is represented as supporting the conclusion; we need to ask whether the
claims made in stating that reason are acceptable, and whether what is represented as a
reason for accepting the conclusion in fact supports it. Similarly, you will
want to consider whether the counterconsiderations are relevant and acceptable, where
now relevant means that they weigh against the conclusion. (If you're a particularly nasty
and superficial sort, you might say "Friends, schmiends, I can buy new friends for $2
million bucks a year," and so dismiss the opinion of your friends as irrelevant.)
A good procedure to follow is to first apply the acceptability and relevance tests to
the various convergent premises and simply eliminate from consideration all premises
that fail one or the other of these tests.
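If it helps, this eliminative first step (and only this step) can be pictured mechanically.
In the toy sketch below, each consideration is a record already marked with the
evaluator's verdicts on the two tests; the marking itself is where all the real work lies:

    def surviving_considerations(considerations):
        # Discard any premise or counterconsideration that fails the
        # acceptability test or the relevance test. What survives is
        # what the "balance of reasons" judgement must then weigh;
        # nothing here automates that judgement.
        return [c for c in considerations
                if c["acceptable"] and c["relevant"]]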
Only then does the difficult "balance of reasons" judgement come into play. The
task is to weigh the remaining proposed good reasons for the conclusion against the
counterconsiderations.
Very often this "weighing up" can require subtle powers of judgement. It isn't possible to
write a recipe telling how to make such judgements, but here are some useful strategies.
3. Finally, our original example illustrates one more important feature of conductive
arguments. They are always subject to the possibility of other considerations being
relevant, but not having been thought of. Thus if you are evaluating somebody
else's conductive argument (or one of your own, for that matter), you should always
ask yourself if you can think of any other relevant considerations that would
change the balance between pro and con considerations.
Ultimately, though, once you have done all of these things, you will be in the same
position as the arguer. You will have various considerations on the table, and you too will
need to make a judgement on what these considerations mean for the acceptability of the
conclusion.
But one crucial difference between deductive and non-deductive arguments is this: if an
argument is valid, then the addition of further information can never lessen the amount of
support the premises supply to the conclusion. In the case of non-deductive arguments, it
is possible for further evidence to lessen the support a given set of premises supplies to
the conclusion.
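To put the contrast in rough probabilistic terms (a gloss of mine, with made-up numbers):
for a valid argument, the truth of the premises guarantees the conclusion, and no added
premise can undo that guarantee, whereas for a non-deductive argument it is perfectly
possible that

    P(C | P1, P2, P3) = 0.9   while   P(C | P1, P2, P3, P4) = 0.4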
Note that we are not just saying that it's possible to uncover evidence that a premise
is false. That can happen in the case of a valid argument, too. We are saying that the
amount of support supplied to the conclusion by the premises can be lessened by the
addition of another premise to the argument. Thus premises 1, 2 and 3 might make a
conclusion C seem quite probable, but premises 1, 2, 3 and 4 together make it somewhat
less probable. For example, given our current thinking about the causes of breast cancer,
the facts that Rachel had her first child at a relatively young age and that she breast fed
her child give some support to the conclusion that she is at low risk for breast cancer.
Now consider a second argument with those same premises plus one more: Rachel has a
family history of breast cancer.
The second argument is, in fact, not a good one! A family history of breast cancer is a
significant risk factor for that disease and so, even though premises 1, 2, and 3 are good
news for Rachel as far as her risk of breast cancer is concerned, the fourth premise of the
second argument means that we cannot infer from them that she is at a low risk.
Concluding Remarks
It was mentioned above that conductive arguments are particularly likely to occur when
the matter at hand is a decision about what should be done, or where it is a matter of
public policy about which there is significant debate. It's worth pausing to think about
why this is so.
In the case of decisions about what to do, as in our primary example in the preceding
section, the decision must be made under time pressure. We are creatures with limited
lifetimes, after all, so there's an unavoidable time limit on everything we do, and many of
our decisions have much more immediate deadlines than our looming death. And for most
of them, the default option of failing to make a decision serves as a sort of decision
itself: if we don't give a yes answer to a job offer within the time allotted, that is in
effect deciding "no".
Because of these time pressures, we are in a position where we simply cannot wait around
until we're sure that we've considered all the relevant evidence. We have to work with the
considerations that occur to us in the time available and do the best we can.
In cases open to dispute, particularly matters of public policy, the reasons are
somewhat different. First, these are typically matters which involve questions about
values and morality, and so they are not matters which can be resolved merely by appeal
to scientific evidence. Scientific enquiries might well help us decide whether certain
premises are acceptable (i.e., likely to be true) or not, but they won't by themselves
answer questions involving values. Secondly, a deductively valid argument isn't going to
be of much help when the time comes to resolve a dispute of this sort. Such an argument
is, of course, the ideal if we can generate a deductive argument with acceptable premises.
But in a disputed area of public policy you're just not going to find one!
For instance, if someone were to argue that "Killing a human being is always
wrong, and abortion is the killing of a human being, so abortion is always wrong", he
would indeed seem to have presented a deductively valid argument (though recall what
we said about the difficulty of saying whether arguments which commit the fallacy of
equivocation are valid or not in Lecture 4: some would argue that this argument
equivocates on "human being", with the word meaning "person" in premise 1 and
meaning only "living creature with certain biological traits" in premise 2.). However, this
won't advance the debate very far, since it will simply move the debate from one about
abortion to one about, for instance, whether the first premise is acceptable. (What about
killing combatants during wars? What about pulling the plug on "brain dead" people to
"harvest" their organs for transplants? ...) The reason these matters are matters of
significant public debate is not that the participants in the debates are too stupid to have
seen a knockdown argument that is essentially sitting right in front of them. It is almost
always because there are significant reasons which can be cited by proponents of more
than one position on the issue.
So, in both the sorts of cases we have considered, the reason we are forced to adopt
a "balance of considerations" argument is because there is no prospect of coming up with
a more compelling form of argument in the circumstances.
Of course, saying this is not the same thing as saying that there are no rational grounds for
choosing between the different options. This is obvious in the case of our own decisions
which we later recognize were bad ones ("If only I had remembered!"). Saying that the
various participants in a public policy debate typically all have some good reasons in
support of their view is by no means the same as saying that they are all equally justified. It
is merely pointing out that it is a mistake, though an all too common one, to suspect that
people who disagree with us must be doing it out of bad will. Very often the advocates of
some views have simply failed to take other considerations into account. Sometimes they
just show bad judgment. But that's not the same as being deliberately evil!
I think this last point is a particularly important point to bear in mind. I also think the
world would be a much more pleasant place if we could dramatically increase the number
of people who would bear it in mind.
My perhaps naively optimistic hope when I teach critical thinking courses is that students
who take the course could come away with two things. First (and this is the obvious goal
I have been explicitly touting throughout this course) I hope that studying critical
thinking can put students in a position where they have started to develop the skills they
need to see when arguments are being offered to them, and to sensibly evaluate them.
Secondly, though, my hope is that they will also acquire confidence in their own ability
to do so. The benefits of this confidence can be a willingness to take on the challenge of
confronting the arguments of those with whom one disagrees, and confronting the
possibility that one will have to change his or her mind. This, ultimately, is why I don't
like the idea that critical thinking is just "logical self-defense," a tool that allows you to
prevent yourself from being duped by evil advertisers and politicians who will lead you
astray if you don't have the skills to defend yourself. When you've got the skills to think
critically and the confidence that should come with them, you are in a better position not
just to defend yourself but also to open your mind to the possibility that you need to
change. It's not just a tool for defending yourself as you are now, but instead is a way to
make yourself better!