Critical Thinking
Acknowledgments
We dedicate this edition to the people who use critical thinking in their
everyday work. Those we work closely with include healthcare
professionals, clinical ethicists, military leaders, legal analysts, risk
managers, and philosophers.
*****
Preface to the 3rd edition
Are any of these claims true? How could you tell? Could you ever be
certain about any of them? Is there some method, some set of tools, to
help you decide whether to believe any or all of these claims? Thankfully,
the answer to the last question is yes, and this book is a beginner’s toolkit
for making such decisions—a starter guide to determining whether these
claims, or any others, deserve your assent. As with any basic toolkit, this
one doesn’t include every tool you might use as a careful reasoner, but it
has the most fundamental and widely used rules of good reasoning—
everything you need to start experiencing the power of critical thinking.
This book is designed to be read, not simply taught from. We attempt to
explain each concept simply and thoroughly with engaging examples so
you can improve your reasoning even if you aren’t taking a class. We also
provide many opportunities to practice applying a concept before moving
on, including additional real-life examples with specific, chapter-related
questions. Our goal is to help you use the tools of good reasoning
effectively in whatever context or career you find yourself. In other words,
we aim to help you think critically about your world.
This book is also intended to be a primary text for courses in critical
thinking, introductory logic, and rhetoric. We explain the basic concepts
covered in these courses using a variety of examples from a variety of
academic disciplines. We use real-life examples to translate abstract
concepts into concrete situations, highlighting the importance of clarity and
precision even in our everyday lives. In addition, the order of the chapters is
flexible to accommodate a range of teaching styles.
Thanks to the insights and questions of our students and the many people
who have used this book, we think the third edition is a significant
improvement on the first two, including more tools, more exercises,
expanded explanations, and more opportunities to practice thinking
critically about real arguments.
Jamie Watson
Robert Arp
Skyler King
part one
The basics of good reasoning
In Chapters 1 and 2, we explain the basic concepts involved in good reasoning. The
two most basic concepts are claims, which are declarative
statements, and arguments, which are composed of claims and
intended to constitute support for the truth of a particular conclusion.
With just two major concepts, you would think it would be easy to
learn how to reason well. The trouble is, reasoning uses language,
which is notoriously messy, and it doesn’t take place in a vacuum.
There is usually a lot going on around an argument and behind the
scenes.
Claims
Critical thinking is the conscious, deliberate process of evaluating
arguments in order to make reasonable decisions about what to believe
about ourselves and the world as we perceive it. We want to know how to
assess the evidence offered to support various claims and to offer strong
evidence for claims we are interested in. And we are interested in a lot of
claims:
• God exists.
• We should raise the minimum wage.
• The Red Sox will win the pennant.
• This drug will make me lose weight.
• This airplane will get me home safely.
• That noise is my carburetor.
Are any of these claims true? Should we accept them or reject them or
shrug our shoulders in skepticism? Critical thinking is the process of
evaluating the strength of the reasons given for believing claims like these.
As critical thinkers, we want to reduce the number of false claims we
believe and increase our fund of true beliefs. Regardless of what field you
study or what career you choose, the tools covered in this book are the
central means of testing claims, helping us to believe those likely to be true,
to discard those likely to be false, and to suspend judgment about the rest.
These tools have been handed down and refined through the centuries,
starting with ancient Greek thinkers such as Parmenides (b. 510 BCE) and
Aristotle (384–322 BCE), making significant advancements through the
insights of logicians like Gottlob Frege (1848–1925) and Alonzo Church
(1903–1995), and continuing to be refined today with the work of scholars
like Susan Haack (1945–) and Alexander Pruss (1973–).
This rich and extensive history of scholarly work has provided us with an
immensely powerful set of tools that has transformed a society of
blundering tool-makers into the high-precision, computer-driven world
around us. Every major technological advancement from ballistics to cell
phones, to suspension bridges, to interstellar probes is governed by the
fundamental principles of logic. And this book is an introduction to those
principles, a starter kit of those tools used to make our lives and our world
even better.
No one opens this starter kit completely uninitiated, with a mind that has
never reasoned through a problem or tried to convince others that
something is true. We have all argued, given reasons for a belief, defended
an idea, or searched for evidence. But we often do it poorly or incompletely
(even those of us who write books like this). Reasoning well is difficult.
Further, no one is free from prejudice or bias. We all have beliefs and
desires, and we all want to keep the beliefs about ourselves and the
world that we already have, the ones that already seem true to us: Some of
us believe abortion is wrong; some believe arsenic is poisonous; others
believe a group of men walked on the moon in 1969; more than half the
people on the planet believe that a god of some kind exists; and some
people believe the statement that “more than half of the people on the planet
believe that a god of some kind exists.” But should we keep these beliefs?
Could we be wrong about them? And if so, how could we know?
Answering these questions is one of the primary purposes of critical
thinking. But what are we even talking about when we talk about “beliefs”
and “arguments” and “the world”?
Beliefs are mental attitudes toward claims (also known as propositions).
A mental attitude is a feeling about the world, and helps to inform our
reaction to it. For instance, you might have the mental attitude of fear,
where you dread an undesirable event. Or you might have the mental
attitude of hope, where you long for a desired event. There are many types
of mental attitude, including wishing, doubting, praising, blaming, and so
on. Belief is the mental attitude of assent to the truth of a claim; when we
believe a claim, our attitude is yes, that’s the way the world is. Of course,
we can have the attitude of belief toward a claim regardless of whether that
claim is true. When we were young, many of us had the mental attitude of
assent toward the claim: “Santa Claus brought these gifts.” Isaac Newton
had the mental attitude of assent toward the claim, “Space and time are
distinct entities.” In both cases, we are fairly certain these claims are false
(sorry kids). This shows that we can believe false claims.
A claim (or proposition) is a statement about reality, a declaration that
the world is a certain way, whether it really is that way or not. The way the
world is or was or could be or could have been or is not is called a state of
affairs. In declaring the way the world is, a claim expresses a state of
affairs. For instance, Rob could have been taller than he is. That is one way
the world could have been. Julius Caesar was emperor of Rome. That is one
way the world was. And snow is white. That is one way the world is.
Claims are either true or false because reality can only be one way: there
is either a tree in your backyard, or there isn’t; you are wearing a shirt, or
you aren’t; the United States has a president, or it doesn’t; two plus two
equals four, or it doesn’t. So, whether a claim is true or false depends on
whether the state of affairs it expresses is really the way the world is. Using
a classic example, we can say “The cat is on the mat” is true if and only if
the cat really is on the mat, that is, if and only if the world contains a state
of affairs in which there is a cat, there is a mat, and that cat is on that mat.
Of course, saying precisely how the world is is not always simple. What
if the tree is half in your back yard and half in your neighbor’s? What if the
cat is only partly on the mat? In these cases, we have to be careful to define
our terms. If by “on the mat” you mean “fully on the mat,” then a cat partly
on the mat makes the claim “The cat is on the mat” false. If, instead, you
mean “some part of the cat is on the mat,” then a cat partly or fully on the
mat makes “The cat is on the mat” true. We’ll say more about making
claims precise in
Chapter 2.
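The relationship between claims and states of affairs can be made concrete computationally. Here is a minimal sketch in Python (the code, the toy world, and the function names are our illustration, not the book's notation):

```python
# A toy "world": the set of states of affairs that actually obtain.
world = {("the cat", "is on", "the mat"),
         ("snow", "is", "white")}

def is_true(claim):
    """A claim is true if and only if the state of affairs it
    expresses really obtains in the world."""
    return claim in world

print(is_true(("the cat", "is on", "the mat")))  # True: the cat really is on the mat
print(is_true(("the dog", "is on", "the mat")))  # False: no such state of affairs obtains
```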
In this chapter, we will use brackets (< >) to distinguish claims from
states of affairs. For example, the state of affairs, snow is white, is
expressed by the claim, <Snow is white>. We will drop this notation in later
chapters after you are comfortable with the distinction between claims and
states of affairs.
It is important to note that claims are not restricted to a particular human
language. Particular human languages (English, French, German, etc.)
developed naturally over many centuries and are called natural languages.
Natural languages are contrasted with artificial or formal languages,
which are languages that humans create for particular purposes, such as
computer languages (PASCAL, C++) and symbolic logic (we’ll introduce
one symbolic language, called “propositional logic,” in Chapter 4). Since
we cannot communicate without a human language,
claims are expressed in natural languages. In this book, we are using
English, but the fact that we’re using English instead of French is irrelevant
to the claim that we are evaluating. When we want to point out that we are
expressing a claim in a natural language, we’ll use quotes (“… ”) to
distinguish it from a claim or a state of affairs. For example, the claim
<Snow is white> is expressed by the English sentence, “Snow is white.”
Another important point is that, while sentences express claims in a
natural language, sentences are not claims. This is because a single claim
can be expressed by many different sentences. For example, “All stuff like
this is white” or “It is white” can both mean <Snow is white> if “all stuff
like this” and “it” both refer to snow.
Similarly, “Snow is the color of blank copy paper” can also mean <Snow
is white> if “the color of blank copy paper” refers to white. Similarly, the
English sentence, “Snow is white,” is not the French sentence, “La neige est
blanche,” yet both sentences mean <Snow is white>. So, <Snow is white>
can be expressed in English (“Snow is white”), German (“Schnee ist
weiss”), Italian (“La neve è bianca”), or any other natural language.
To see more clearly the relationship between claims, states of affairs, and
sentences, consider Figure 1.1.
Figure 1.1 Claims, States of Affairs, and Sentences
The state of affairs is the color of the snow out of which the snowman is
made. The claim that expresses this state of affairs is <Snow is white>. And
the sentences that express this claim can occur in any natural language (as
long as those languages have words for “snow” and “white,” or as long as
those words can be coined in that language).
Why do these distinctions matter? Remember, as critical thinkers, we are
interested in claims. Claims allow us to communicate about the world—to
ourselves, when we are trying to understand the world, and to others; we
think and speak and write in terms of claims. If you see a snowman, you
can’t give us your experience of seeing the snowman. But you can tell us
about it, and to do so, you have to use claims. Now, when you tell us about
it, you will choose words from your natural language, but those words may
be more or less effective at telling us what you want to say about a
snowman.
For instance, if you said to a friend, “I saw one of those things we were
talking about,” you might be expressing the claim <I saw a snowman> and
you might be right (that might be a true claim). But if your friend doesn’t
remember what you two were talking about, she can’t evaluate your claim
—she might not remember that conversation. The phrase, “one of those
things we were talking about,” is an ambiguous phrase in English (see
Chapter 2 for more on ambiguity); it could refer to anything you have ever
talked about, including snowmen, rifles, mothers, or dogs. To know what
you are asking your friend to believe, and in order for her to evaluate
whether to believe you (you might have meant “alien life form” instead of
“snowman”), she needs to clearly understand what claim you are attempting
to express.
Given what we’ve said so far, we can define a claim as follows:
A claim:
(1) is a declarative statement, assertion, proposition, or
judgment,
(2) that expresses something about the world (a state of
affairs), and, because of (1) and (2)
(3) is either true or false.
Unlike a question or a command, a claim declares that something is the
case, for example, that <The world is only ten thousand years old> or that
<Jupiter is the largest planet>.
(2) A claim expresses something about the world (states of
affairs).
Claims are assertions, declarations, or judgments about the world, whether
we are referring to the world outside of our minds (the existence of trees
and rocks), the world inside our heads (what cinnamon smells like, whether
you feel jealous), or the ways the world might have been (it might have
rained today if the pressure had been lower, she could have gone with him).
And claims make precise all the various ways our natural languages allow
us to talk about the world. For example, Sally might say: “It is raining right
now.” And this means the same thing as: “It is the case that it is raining
right now” and “Rain is falling from the sky.” John might say: “I feel
jealous,” and this means the same as: “It’s true that I am jealous” or “I am
not okay with her dating him.” An assertion, declaration, or judgment
communicates something that is or is not the case; accordingly, a claim
either:
• accurately expresses some state of affairs, that is, the way the world is
(in which case, it is true), or
• inaccurately expresses some state of affairs (in which case, it is false).
A false claim, in other words, is one whose content does not accurately
correspond to some state of affairs out there in the world; its truth value is
false. Claims that express the way the world is, was, or might be are called
descriptive claims.
But claims can also be prescriptive or normative, that is, they can express
that the world ought to be or should be some way. For example, the
following are all prescriptive claims expressed by English sentences:
• <You should be paying attention in class right now.>
• <No one should be sleeping in class.>
• <We shouldn’t be at the beach.>
• <There should not be more than fifty people in this class.>
Prescriptive claims are common in law (<You should stop at stop signs>),
society (<You should stand when you shake someone’s hand>), religion
(<You should wear a hijab in public>), and ethics (<You should not hurt
people without just cause>). These categories of ought claims are often
referred to, respectively, as legal norms, social norms, religious norms, and
ethical norms.
Prescriptive claims need not refer to non-actual states of affairs. It could
be that the world is, in some ways, exactly how it should be. For instance, it
could be true that <You should be paying attention> and also true that <You
are paying attention>.
Where the ought comes from in prescriptive claims is controversial. In
legal norms, what makes them true (and therefore, gives force to the ought)
seems to come from a combination of the desire not to be punished and a
willingness on the part of the legal system to punish people for not
following the prescription. So, as a society, we have “created” legal norms
by agreeing to use force (and agreeing to have force used upon us) to
uphold them. (It’s worth noting that some people believe law is “natural”
and that humans discover legal norms, which derive from nature, but we
will set that aside for now.) But where do moral norms come from? What makes
morally prescriptive claims true?
These questions are beyond the scope of our discussion here, but they
highlight an important dimension of critical thinking: Different types of
claims are subject to different truth conditions. Natural claims are made true
or false by nature; legal claims are made true or false by lawmakers, judges,
and juries; your beliefs about how you feel (hot, cold, sad, angry) are made
true or false by your body; and so on. These differences in types of claims
and the question of what makes them true show the importance of learning
to think critically in specialized fields like law and ethics.
Getting familiar with … different types of claims
For each of the following English sentences (indicated by quotation marks),
identify whether it expresses a descriptive claim, prescriptive claim, question,
command, emotive iteration, or some combination.
Operators
Claims of every type can be simple or complex. A simple claim
communicates one idea, usually having just one subject and predicate, as in
the following examples:
• Metallica rocks.
• Jenny is in the garden.
• The new edition of Encyclopedia Britannica is out.
• Two plus two equals five. (Remember, claims can be true or false, so,
even though this sentence is not true, it is still a claim.)
A complex claim, by contrast, combines or modifies claims using logical
operators. The most basic operation is simply to deny the truth of a claim, to say it
is not the case. The not operator allows us to negate a claim. For example,
the claim <It is raining> is negated by denying it: <It is not the case that it
is raining>, or, more smoothly in English, “It is not raining.” A negated
claim is called a negation. Here are five more examples; the operator
appears in bold and the negated claim is underlined:
• Jenny is not lying. (Notice that the single claim <Jenny is lying> is
split by the operator. For the sake of clarity and simplicity, the
structure of natural languages often requires us to place operators in
places where a formal language like logic would not.)
• It is not raining.
• It is not the case that George W. Bush rigged the 2000 presidential
election.
• Hercules was not a real person.
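Truth-functionally, negation simply flips a truth value. Here is a minimal sketch in Python (our illustration; the book introduces the symbol ~ for negation later):

```python
def negation(claim):
    """The 'not' operator: a negation is true exactly when the
    claim it negates is false, and vice versa."""
    return not claim

jenny_is_lying = False
print(negation(jenny_is_lying))            # True: <Jenny is not lying>
print(negation(negation(jenny_is_lying)))  # False: double negation restores the original value
```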
Another simple operation is to say that two claims are both true, that is, to
say claim A is true and claim B is true. The and operator allows us to
conjoin claims in this way, and the result of applying the and operator is
called a conjunction. Each claim conjoined in a conjunction is called a
conjunct. Here are a few more examples; the operator appears in bold and
the conjuncts are underlined:
Notice that English allows us to include phrases unrelated to claims for ease
of expression. In a sentence like “She is not only a competent marksman,
but she is also an expert,” the phrases not only and also are extraneous to
the claims <She is a competent marksman> and <She is an expert>. Notice,
also, that, in some cases, implied content is part of the claims. See
Chapter 2 for more on extraneous material.
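Truth-functionally, a conjunction is true only when both conjuncts are true; extraneous phrases like not only and also contribute nothing to its truth conditions. A minimal sketch in Python (our illustration):

```python
def conjunction(a, b):
    """The 'and' operator: true only when both conjuncts are true."""
    return a and b

she_is_a_competent_marksman = True
she_is_an_expert = True
print(conjunction(she_is_a_competent_marksman, she_is_an_expert))  # True
print(conjunction(she_is_a_competent_marksman, False))             # False: one false conjunct suffices
```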
Two claims can also be disjoined, which is to say that either one claim is
true or another is. Claims that are operated on by the or operator are called
disjunctions. Each claim in a disjunction is called a disjunct.
Disjunctions are unique in that there are two possible ways to read them.
We might read a disjunction exclusively, which means either claim A is true
or claim B is true, and both cannot be true. This reading of the or operator
is called an exclusive reading, or the “exclusive or.” Alternatively, we
might read a disjunction inclusively, which means either claim A is true or
claim B is true, and both might be true. This reading of the or operator is
called an inclusive reading, or the “inclusive or.”
The exclusive or would be useful when dealing with mutually exclusive
disjuncts, such as <It is raining>, <It is not raining>. For any given
definition of “rain,” it is exclusively true that <It is raining or it is not
raining>. But exclusive truths are rare in common usage. In many cases,
both disjuncts could be true. For instance, consider the claims <The battery
is dead>, <The alternator is broken>. If a car won’t start, we might, after
some investigation, determine that <The battery is dead or the alternator is
broken>. The truth of either claim would be sufficient for explaining why
the car won’t start. But it could be that both are true: The battery is dead
and, at the same time, the alternator is broken. That may be improbable or
unlikely, but it is not impossible, which means we cannot use the exclusive
or.
Because in many cases, the disjuncts of a disjunction might both be true,
and because it is difficult to tell sometimes when two claims are mutually
exclusive, logicians have adopted the practice of treating all disjunctions
inclusively. We will follow that practice in this book: All or operators
should be read inclusively; both disjuncts might be true. In English,
disjunctions often include the word “either” (“Either the battery is dead or
the alternator is broken”); “either” is extraneous, simply making the
sentence read more easily.
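The difference between the two readings of or is easy to see when each is modeled directly. A minimal sketch in Python, using the car example above (our illustration):

```python
def inclusive_or(a, b):
    """The logicians' default: true when at least one disjunct is true,
    including the case where both are."""
    return a or b

def exclusive_or(a, b):
    """True when exactly one disjunct is true."""
    return a != b

battery_is_dead = True
alternator_is_broken = True
print(inclusive_or(battery_is_dead, alternator_is_broken))  # True: both disjuncts may be true
print(exclusive_or(battery_is_dead, alternator_is_broken))  # False: rules out the both-true case
```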
Some claims are conditional upon others. This means that we can
rationally expect some claims to be true if certain other claims are true (but
not only if they are true; they may be true for other reasons, too).
Conditional (also called hypothetical) claims employ the “if … , then …”
operator. This is because the truth of the claim following the “then” is
logically (though not causally) conditional on the claim following the “if.”
They are sometimes called hypothetical claims because they do not express
the way the world is, but how it could be. For instance, the conditional <If it
is raining, then the sidewalk is wet> does not express either that it is raining
or that the sidewalk is wet, but that (conditionally) if it is raining, then the
sidewalk is wet. This conditional claim can be true even if it isn’t raining.
Conditional claims have two parts: an antecedent, which is the claim
following the “if” (recall that “ante” means “comes before,” as in poker)
and a consequent, which is the claim following the “then” (think: con-
sequence, which literally means “with, in a specific order”). English allows
a variety of ways to express conditional claims, many of which do not use
the “if…then…” formula explicitly. For instance, “The moon will look
bright tonight if the sky is clear” expresses the conditional <If the sky is
clear, the moon will look bright tonight>. The antecedent comes after the if;
even though it does not come first in the sentence, it is still logically prior to
the consequent. English makes things more difficult with the phrase “only
if,” which acts like the “then” in a conditional. So, if we said, “The moon
will look bright tonight only if the sky is clear,” we would be expressing the
conditional, <If the moon will look bright tonight, then the sky is clear>.
The antecedent comes before “only if” in English sentences.
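The truth-functional behavior of the conditional can be sketched the same way. A minimal illustration in Python, using the standard material-conditional reading (the function and variable names are ours):

```python
def conditional(antecedent, consequent):
    """'If A, then B': false only when the antecedent is true
    and the consequent is false."""
    return (not antecedent) or consequent

it_is_raining = False
sidewalk_is_wet = False
# <If it is raining, then the sidewalk is wet> can be true
# even when it is not raining:
print(conditional(it_is_raining, sidewalk_is_wet))  # True

# "The moon will look bright tonight only if the sky is clear":
# the claim before "only if" is the antecedent.
moon_looks_bright, sky_is_clear = True, False
print(conditional(moon_looks_bright, sky_is_clear))  # False
```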
Consider an example:
• If Pat has brown hair, then she is brunette.
Now, notice that this conditional is also true if we swap the antecedent with
the consequent:
• If she is brunette, then Pat has brown hair.
So far, we have applied operators to simple claims. But the rules of logic
allow us to apply operators to any claims, regardless of whether they are
simple or complex. So, if we conjoin these two conditionals (we say they
are both true), we get this very cumbersome complex claim:
• If Pat has brown hair, then she is brunette and if she is brunette,
then Pat has brown hair.
If we operate on two complex claims, we get another complex claim. The
result of conjoining two claims, regardless of whether they are simple or
complex, is a conjunction. The result of disjoining two complex claims is a
disjunction, and so on. So, our complex claim about Pat is a conjunction (it
conjoins two conditional claims).
The bi-conditional, expressed in English with the phrase “if and only if,” is
helpful because, in cases where the conditional runs in both directions, as in
our conditionals about Pat, we can simplify the conjunction of the two
conditionals into a more manageable complex claim:
• Pat has brown hair if and only if she is brunette.
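Truth-functionally, a bi-conditional is true exactly when its two sides share a truth value, which is just what the conjunction of the two conditionals demands. A minimal sketch in Python (our illustration):

```python
def biconditional(a, b):
    """'A if and only if B': true when A and B have the same truth value.
    Equivalent to (if A then B) and (if B then A)."""
    return a == b

pat_has_brown_hair = True
pat_is_brunette = True
print(biconditional(pat_has_brown_hair, pat_is_brunette))  # True
print(biconditional(pat_has_brown_hair, False))            # False: the two sides differ
```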
Major Operators
Finally, when operators are applied to complex claims, one of the operators
becomes dominant; it takes over the truth-functional role of the claim, and
the dominant operator is called the major operator. For example, if we
apply a negation to the complex claim (A & B), the negation is the major
operator: ~(A & B). Whatever truth value (A & B) has, the negation
changes it; the negation operates on the conjunction, not the simple claims.
Similarly, if we conjoin two conditionals (A ⊃ B) and (B ⊃ C), the result is
a conjunction: ((A ⊃ B) & (B ⊃ C)). The conjunction operates on the two
conditionals, and therefore, determines the truth value of this complex
claim. We will say more about major operators in
Chapter 4. For now, just be aware that, when a complex claim includes
more than one operator, the major operator determines whether the claim is
a negation, conjunction, disjunction, conditional, or bi-conditional.
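The two cases just described can be checked directly. A minimal sketch in Python (our illustration; ⊃, ~, and & are the book's symbols, while not, and, and or are Python's):

```python
A, B, C = True, False, True

# ~(A & B): the major operator is the negation; it operates on the
# whole conjunction, not on A or B individually.
print(not (A and B))  # True, because (A & B) is False

# ((A ⊃ B) & (B ⊃ C)): the major operator is the conjunction of the
# two conditionals, so the complex claim is a conjunction.
print(((not A) or B) and ((not B) or C))  # False, because (A ⊃ B) is False
```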
These are the five basic logical operations you will use in thinking
critically about the world. In Chapters 4, 5, and 6, we explain how these
operators affect the truth values of claims and how
they are used in arguments. For now, here are some practice activities to
help you get familiar with these concepts.
1. “If the case goes to trial, then the lawyer gets his day in
court and we get to tell the whole story.”
2. “If it rains tomorrow, then we can either go to the mall or
hang out here.”
3. “He will arrive and she will arrive shortly thereafter if and
only if the dog stays and the cat goes.”
4. “There is no food left and we do not have any money.”
5. “There are three candidates and one of them is qualified if
and only if her previous employer verifies her work history.”
Quantifiers
In addition to operators, claims can also have quantifiers. A quantifier
indicates vaguely how many of something is referred to in a claim. When
we say “vaguely” here, we mean that the quantifier tells us the quantity of
something but not a specific quantity. There are three basic quantifiers:
all, some, and none. For example, if you want to express a claim about
every single bird, let’s say, that it has feathers, you would use the all
quantifier:
• <All birds have feathers>
If you want to say that at least one bird has feathers, or that a few of them
do, that many of them do, or that most of them do, you would use the some
quantifier:
• <Some birds have feathers>
The quantity of something in a claim is also called the scope of that claim,
or, in other words, how much of reality is included in the claim. The
relationship between quantifiers and scope can be seen in an analogy with a
literal viewing scope (Figure 1.3). The “scope” of the viewing scope refers
to what you can see when looking through it. The quantifier in a claim tells
you what you can see when looking through the scope.
Figure 1.3 The Scope of a Claim
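The three basic quantifiers map neatly onto Python's built-in all and any functions, which can make the idea of a claim's scope concrete. A minimal sketch (our illustration; the flock of birds is invented for the example):

```python
# Each entry records whether one bird in our toy flock has feathers.
flock_has_feathers = [True, True, False]  # the last bird has been plucked

print(all(flock_has_feathers))      # <All birds have feathers>: False
print(any(flock_has_feathers))      # <Some birds have feathers>: True
print(not any(flock_has_feathers))  # <No birds have feathers>: False
```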
Emotions as Evidence?
You may be wondering why we did not include emotions in our discussion
of evidence. “Go with your gut” and “Listen to your heart” are common
refrains in contemporary culture. And surely, you might think, the fact that
we feel disgusted by something is evidence that it is disgusting. Or you
might think that because we feel sympathy or a desire to help someone that
is evidence that some situation is sympathetic or that that person deserves
our help. You are in good company if you agree. The philosophers David
Hume and William James gave prominent places to emotion in our rational
lives.
While emotions play an important role in bringing important things to
our attention (like predators and con artists), and for encouraging us to
reconsider some beliefs (for instance, that texting your ex when drunk is a
good idea), we have strong reasons to believe our emotions are unreliable
as reasons for thinking claims are true. We often feel sympathy regardless
of whether someone deserves it (jurors are often made to feel sorry for
guilty defendants in order to convince them to acquit). We often feel a
desire to help someone when we shouldn’t (when a child is begging not to
get an injection of penicillin to cure a bacterial infection). We feel jealous
when we have no right to be. We are often hypersensitive, hypercritical, and
our feelings about a situation are often disproportionate to the event that
caused them. The Renaissance thinker Michel de Montaigne writes, “Our
senses are not only corrupted, but very often utterly stupefied by the
passions of the soul; how many things do we see that we do not take notice
of, if the mind be occupied with other thoughts?” (from Apology for
Raymond Sebond).
To be fair, emotions are not always or completely irrational; sometimes
they are simply arational (i.e., they are neutral with respect to rationality).
David Hume argues that, without emotions, we wouldn’t get off the couch
and do anything—we simply wouldn’t care. The interest we take in
anything is a function of emotion. This means that emotions are integral to
critical thinking; we must have an interest in responsible and true beliefs to
motivate us to exert the energy required to think clearly about them. But a
motive to do or believe something is not an indication of that thing’s
goodness or truth. We may be motivated to give money to the homeless,
oblivious to whether this actually helps the homeless or whether it
perpetuates the problem. This means emotions are essentially arational.
Whether a belief or act is true or good depends on the way the world is, and
our beliefs about the way the world is depend on our evidence. Even Hume
argues that our emotions should be guided by reason. Without a guide,
emotions, much like our senses, can deceive us. Thus, we can check our
emotions by testing them against our senses and reason, and we can test our
senses by testing them against reason. How do we test our reason? That’s a
bit tricky. And some basic points of reasoning we have to accept because
we can’t imagine any alternative (e.g., the deductive rules of inference in
Chapter 4). But also see
Chapter 10 for a discussion of how reasoning can go wrong and how we
might learn when it does.
For now, note some similarities and differences between sensory
evidence and emotions. Both sensation and emotion are passive experiences
—they happen to us, and we cannot decide to see, hear, touch, smell, taste,
or feel other than we actually do. Further, both sensations and emotions
occur independently of any judgments we might make about them. You can
judge that your jealousy is misplaced or irrational, but you can’t force
yourself to stop feeling jealous. You can believe that a sensory experience is
deceptive, even if you can’t stop seeing a deceptive image (optical illusions
like 3-D images are examples of this—you believe a surface is flat, but you
don’t see it as flat).
Sensations and emotions are distinct, though, when it comes to their
subject matters. Sensory experience seems to pull our attention outside of
ourselves to focus on what the world is like. If you see a red wall, you have
evidence that the wall is red. Emotions, on the other hand, tend to be
responses to sensory experiences, and they are more about us than about the
world. If you see your ex with someone else, you may begin to feel
jealous. But your jealousy is not directly a judgment about the world
outside your mind; it is a response to a situation that you interpret as
painful. Jealousy reveals something about you, not the world. Maybe your
ex is simply having lunch with a co-worker or friend. Maybe they’re
helping a stranger and not in a romantic relationship with that person.
Emotion can motivate a judgment about your inner experience, but it is not
necessarily about the world outside that experience.
These distinctions are especially important when reasoning about moral
questions. We may come to feel sorrow at the way certain animals are
forced to live, for instance, caged in zoos without the freedom of their
natural environments. This sorrow might suggest that it is wrong to keep
animals in zoos. But upon reflection, we can recognize that some zoos are
incredibly humane, giving previously injured animals safety from predators,
medical treatment, and an ample food supply. Our sorrow, we come to
realize, is more about our interpretation of the situation than the situation
outside our minds.
Although emotions and sensory experience have a lot in common and
both can be mistaken, sensory experience seems to have a better track
record of presenting reliable information than emotions. For this reason, we
leave emotions off of our list of basic sources of evidence.
1. seeing red
2. feeling sad
3. feeling something sharp
4. feeling hot
5. touching something soft
6. smelling something burnt
7. tasting something sweet
8. feeling bitterness
9. hearing something hurtful
10. hearing a loud clanging
Arguments
In an argument, one or more claims are used to support the truth of another
claim. Support could mean several different things. It might mean logical
entailment, makes probable, is evidence for, or provides a good reason for.
Since this is a critical thinking book, we are interested in evaluating the
truth value of our beliefs, so we will talk about support in terms of
evidence, and allow that some evidence is logical, some is probabilistic, and
some includes reasons that are weaker than these.
As we have seen, claims can be either true or false, and evidence
suggests that some claims are true and others are false. So, there is a general
normative presumption that whether we should believe a claim depends on
how good our evidence for that claim is. Arguments help us organize claims
so we can see clearly how well our evidence supports a claim.
Any time we make a claim and then attempt to support that claim with
another claim or claims, we are, for better or worse, arguing. The ability to
construct arguments distinguishes critical thinkers from creatures who
depend solely on stimulus-response mechanisms for their rational capacities
—creatures like invertebrates, infants, and computers. To influence
invertebrates’ and infants’ behaviors, we have to engage in some form of
conditioning, whether positive or negative, operant or classical. We cannot
reason with them. The same goes for computers; computers must be
programmed, and they only do what they are programmed to do. But with
rational creatures, we can influence both behaviors and beliefs by appealing
to reasons; we can argue with them. We can do this well, using the
principles of good reasoning in the appropriate contexts with strong
evidence, or we can do it poorly, with rhetorical and logical fallacies.
Whether critical thinkers argue well or poorly, they are, nevertheless,
arguing. So, our definition of argument is as follows.
An argument is:
1. one or more claims, called a premise (or premises),
2. intended to support the truth
3. of a claim, called the conclusion.
“We all went to the store today. We bought candy. We went home
right afterward. We went to sleep. Brit slept the longest. Farjad got
sick.”
If the passage ended instead with “Therefore, Farjad got sick,” the word
“therefore” (as we will see later in this chapter) would tell us that the
author intends that claim to be supported by the previous claims—it is not
simply supposed to follow them chronologically, it is supposed to follow
from them logically.
Consider another example: “The sky is blue. The grass is green. Therefore,
the fool is on the hill at midnight.” Since the person making these claims
expects you to believe something (specifically, <The fool is on the hill at
midnight>) on the basis of other claims (<The sky is blue> and <The grass
is green>), this is an argument.
Of course, this is obviously a bad argument, even if you can’t yet explain
why it is bad. We will get to that. The point here is simply that bad
arguments are still arguments. We can identify them, evaluate them, and
show why they are bad using the tools you are learning in this book.
One of the main reasons to treat bad arguments as arguments is called the
principle of charity, which states that one should always begin an
argument by giving a person the benefit of the doubt. This is because what
sometimes seem like poorly supported or silly-sounding claims turn out to
be true! Think of the “silly” idea that the earth revolves around the sun (as
opposed to the opposite) or that germs spread disease (as opposed to
miasma). Both beliefs were largely dismissed until the evidence was too
strong to ignore. In addition—hold on, now—you might be wrong even
about things you believe very strongly. (We know. It’s hard. Take deep
breaths.) We’ve all been wrong before. And it might turn out that someone’s
argument for a claim that you don’t believe is much better than any
arguments you can come up with for your contrary belief. So, take people’s
claims seriously—at least until you evaluate them.
The principle of charity also says that one should try to give “the most
charitable reading to” someone’s argument (thus the charity part of the
principle) by making implicit claims explicit, clarifying unclear terms, and
adding any premises that would strengthen the argument. Since we are
finite creatures and do not know everything, it is not always obvious when
an argument is bad. In addition, just because a person disagrees with you
does not mean her argument is bad. Once you’ve made the argument as
strong as it can possibly be, then you can evaluate precisely how strong it
is.
The idea here is that you are only rational in believing a claim if you
have been responsible enough to consider the evidence against that claim.
This is a bit too strong; we rarely consider the evidence against going to the
grocery store when deciding whether we should (unless it’s rush hour or it’s
snowing) or the evidence against taking a flight to a conference when
deciding how to get there (unless it’s a holiday, or it’s just a few hours’
drive). Yet, in both cases, our beliefs are probably rational, or, at least they
are probably not irrational. Nevertheless, having good reasons is an
important feature of rationality, and the possibility that you are wrong is a
reason for adopting the principle of charity.
Another reason to adopt the principle of charity is that it will help you
refute your opponents’ claims when they aren’t well supported. If you are
not aware of your opponents’ reasons or have not considered them
carefully, you will be unable to respond clearly and forcefully when faced
with them. For this reason, we like to call the principle of charity The
Godfather Principle. In the film The Godfather: Part II (1974) the
protagonist, Michael Corleone, tells one of his caporegimes (captains),
Frank Pentangeli, to keep his friends close and his enemies closer.
The idea is that, if you do not know what your opponents are thinking, you
will not be able to respond adequately.
Part (2) of our definition of an argument also says that premises support
the truth of a conclusion. Not all reasons to believe a claim have to do with
truth, so some definitions of argument may leave this out. For instance,
some reasons might be practical reasons or normative reasons: It might
be better for you, or more useful to you, or more helpful for others to believe
something that is false. It is perfectly reasonable to call practical reasons
“arguments.” In this book, however, we are interested primarily in helping
you evaluate what the world is really like—we are interested in thinking
critically about what is true. And even practical reasons are subject to truth
questions: Is it true that one belief is more useful to you than another?
Similarly, normative reasons—claims about what should (morally or legally
or socially) be done, or what I ought to do—are true or false of the world.
So, for our purposes, an argument involves premises that are intended to
support the truth of a conclusion.
With those qualifications in place, consider an example of an argument.
Imagine you walk into your bedroom and find that the light bulb in one of
your lamps will not come on when you turn the switch. In this scenario,
let’s say that there are only two possible explanations for why the light bulb
will not come on:
1. The bulb is blown.
2. The bulb is not getting electricity.
With this in mind, you investigate to see whether the bulb is blown, for
instance, you try it in several different lamps. You might have instead
chosen to test the socket, say, with another bulb or a meter. But let’s stick
with the bulb. Suppose you try it in another lamp, and it works perfectly.
Now, by the evidence of your senses and what it means for a light bulb to
work properly, you are justified in believing: <The bulb is not blown>.
But suppose instead that the bulb had failed in the other lamp, too. Could
you then conclude that the socket has electricity and the bulb alone was the
problem? Not on this evidence alone—it could also be that the breaker for that outlet
is tripped or that the socket is bad. To find out whether the socket has
electricity, you would need to know whether other, properly functioning
bulbs work in that socket.
So, from the fact that the bulb is not blown, you can conclude that it was
originally not getting electricity. But from the evidence that the bulb is
blown, you cannot conclude with any confidence that the socket has
electricity.
Here is one more example of an argument. Note how the premises are
intended to provide support for the conclusion:
Premise 1: John is taller than Harry.
Premise 2: Harry is taller than Sarah.
Premise 3: If John is taller than Harry, then John is taller than Sarah.
Conclusion: Therefore, John is taller than Sarah.
Identifying Arguments
Before we can evaluate an argument (the subject of the
next chapter), we must clearly identify it. Perhaps you recall writing
persuasive papers or giving argumentative speeches in high school or
college. In those papers and speeches, you had a thesis statement or topic
sentence you had to defend or explain. The thesis statement/topic sentence
was essentially your argument’s conclusion, while your defense/explanation
was your argument’s premises. Also, your teacher likely had you state your
conclusion first, then go on to support your conclusion. This is how you
will often find arguments structured in books, magazines, or newspapers.
But they are not always packaged this way. Sometimes, an author will leave
you hanging until the end, for effect. Sometimes, an author is careless, and
you can’t quite figure out what the conclusion is supposed to be. Since
arguments are packaged in many different ways, we need some tools for
picking out conclusions and premises.
The simplest first step in identifying an argument is to pick out the
conclusion. The basic goal of an argument is to convince or persuade others
of the truth of some claim. So, if someone made the above argument about
the light bulb and lamp, presumably they would want others to be
convinced or persuaded that the bulb is not getting electricity (for instance,
the maintenance superintendent of their building). The same goes for “Ticks
certainly are insidious.” The speaker making those claims wants others to
be convinced of their truth.
So, step one, ask yourself: “Is the author/speaker trying to convince me
that some claim is true?” If the answer is yes, you’ve found an argument.
The next step is to ask, “What claim, exactly, is this person trying to
persuade me to accept?” Remember, the claim you’re being asked to accept
is the conclusion.
Conclusions can often be identified by indicating words or phrases.
Consider this argument:
Daily cocaine use causes brain damage. Brain damage is bad for you.
Therefore, daily cocaine use is bad for you.
In this example, the word “therefore” indicates that the claim following it is
the conclusion of an argument. The claim <Daily cocaine use is bad for
you> is the conclusion. Not all arguments will have indicating words. For
example, if we remove the “therefore” from the cocaine argument and
change the order of the claims, as long as the speaker/writer intends it to be
an argument, it remains an argument and <Daily cocaine use is bad for
you> remains the conclusion:
Daily cocaine use is bad for you. Daily cocaine use causes brain
damage. Brain damage is bad for you.
But in this case, it is less clear that the author intends this as an argument.
Thankfully, there are often conclusion-indicating words or phrases. Here are
some common examples:
Therefore; Thus; Hence; So; Consequently; It follows that; This
shows that; We may conclude that
Once you have identified that a set of claims is an argument and you have
clearly identified the conclusion, step three is to ask, “What claims are
being offered in support of the conclusion?” The claim or claims supporting
the truth of the conclusion are the premises. As with conclusions, premises
are sometimes accompanied by indicating words or phrases. For instance:
Because the water trail leads to the sink, the sink is leaking.
In this argument, the word “because” indicates that the claim immediately
following it is a premise. English grammar gives us a clue about the
conclusion. The sentence “Because the water trail leads to the sink …”
would be incomplete (a fragment) without more information; it is a
dependent (or “subordinate”) clause in need of an independent clause to
complete the thought.
Though <The water trail leads to the sink> is a simple claim and, by itself,
an independent clause, the addition of “because” changes the grammar and
lets you know it is evidence for something, which, in this case, is that <The
sink is leaking>. What claim are we supposed to accept? That the sink is
leaking. What supports this claim? The water trail leads to the sink. Here
are some additional premise indicating words and phrases:
Because; Since; For; For the reason that; As; Due to the fact
that; Given that; In that; It may be concluded from
“If Johnny were the killer, then there would have been powder
residue on his hands right after” (premise). “The police
checked his hands, and there was no residue” (premise). So,
“Johnny is not the killer” (conclusion).
Again, now you have a piece of reasoning that both of you
can evaluate.
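Because indicator words are such useful clues, it is tempting to automate the search for them. The sketch below in Python is entirely our illustration; as this section stresses, not every argument contains indicator words, and words like “so” and “since” also have non-argumentative uses, so this is a rough heuristic at best:

```python
import re

CONCLUSION_INDICATORS = {"therefore", "thus", "hence", "so", "consequently"}
PREMISE_INDICATORS = {"because", "since", "for", "as", "given"}

def flag_indicators(passage):
    """Flag words that often signal a premise or a conclusion nearby."""
    tokens = re.findall(r"[a-z]+", passage.lower())
    hits = []
    for token in tokens:
        if token in CONCLUSION_INDICATORS:
            hits.append((token, "possible conclusion follows"))
        elif token in PREMISE_INDICATORS:
            hits.append((token, "possible premise follows"))
    return hits

print(flag_indicators(
    "Because the water trail leads to the sink, the sink is leaking."))
# [('because', 'possible premise follows')]
```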
Exercises
Real-Life Examples
Jen: In general, people rightly think it is wrong to kill humans who haven’t done
anything wrong. A fetus is simply an unborn human, so there is a burden of proof
that isn’t met when you say that abortion is permissible.
Abby: I don’t need a burden of proof; all I need is my life. My life would be infinitely
more complicated and difficult if I were forced by law to complete the pregnancy.
Jen: It sounds like you’re saying that your own inconvenience overrides a fetus’s right to
life. Does this hold in every instance? If it is less convenient for me to work twenty
years for a house, can I just kill someone who already owns one?
Abby: Of course not. I don’t mean that inconvenience justifies murder. I just don’t feel that
abortion is murder. I mean, for the first four weeks, an embryo is just a clump of
cells; it doesn’t look anything like a human being.
Jen: Let’s take that for granted for a moment. What do you say about abortion after four
weeks, when it does look like a human being, and actually invokes many
parental feelings in both men and women who see pictures of eight- and
ten-week-old fetuses? These feelings tell us that the fetus is a valuable
human being.
Abby: I say those people are just letting their emotions get the better of them. Just because
something looks like it is morally valuable, or makes you feel all fuzzy inside,
doesn’t mean it is. I think a woman has the right to do whatever she wants with her
body, including getting an abortion.
Jen: But doesn’t the law disagree with you now that Roe v. Wade has been overturned
by the Supreme Court? Is there still a “right” to abortion?
Abby: We cannot confuse law and ethics. That’s a red herring fallacy. Laws are supposed
to protect rights, but sometimes they don’t. If you remember, Jim Crow laws that
required the separation of people of different skin colors in public were bad laws.
They violated rights. I think the Supreme Court was wrong in the case of Roe v.
Wade, and it doesn’t change the moral argument.
Jen: Okay. Even if that is true, you just said that a being doesn’t have a moral right to
life before four weeks because it doesn’t look like a human being with a right to
life. And anyway, a woman doesn’t have the right to do just anything she wants
with her body, like block someone’s driveway indefinitely or take off all her clothes
in a public place or sell her organs.
Abby: But you think fetuses are “people” with “rights.” How do you know?
Jen: When I see those pictures of aborted fetuses, I can just feel that it’s true.
2
Evaluating arguments
Extraneous Material
Extraneous material is anything in a passage that does not do rational
work in an argument, that is, words or phrases that do not play a role in a
premise or the conclusion. Extraneous material makes natural languages
rhythmic and rich and explains why Moby Dick is a classic and most dime
store romances aren’t. Consider this beautiful passage from the book of
Proverbs:
If you act wisely and diligently seek knowledge, then you will know
God.
If you are working through the night, then cover the windows to
prevent light from shining through.
Here are some common categories of extraneous material you will want to
remove when reconstructing an argument:
The author is trying to distinguish “career” from “job,” and she explicitly
qualifies the latter as “minimum wage job.” This seems strange: many
people who get paid by the hour make more than minimum wage. What
about people who get paid on commission? Would these types of jobs be as
attractive as a “career”? If not, does the author really mean to contrast
“hourly job” with “salaried job”? Who knows? But let’s stipulate for the
sake of argument. We’ll say that her distinction between a career and a
minimum wage job really comes down to the difference between a job with
a salary and a job that pays by the hour (also recognizing that jobs that pay
on commission are a different category entirely).
Now, take note of several vague terms and phrases that occlude the
meaning of this definition:
• high-performance
• postsecondary schooling
• potential for advancement
• high economic salary
• among other things
Here are some questions you might ask about these terms:
This quick analysis shows that this definition of “career” is useless. It is not
helpful for understanding what a career is, and it certainly does not help us
distinguish it from a “minimum wage” job.
If we cannot distill any plausible meaning from a sentence or passage, or
cannot distill any that the author would plausibly agree with, then we have a
reason to think the claim is actually unclear. When these claims are used in
arguments, whether as premises or conclusions, the arguments fail to be
good arguments. Their conclusions do not deserve your assent because
there is no clear relationship between the premises and the conclusion. We
will say more about evaluating arguments at the end of this chapter. For
now, beware of extraneous material, especially buzz words.
Getting familiar with … extraneous material
Consider this argument: “If you were innocent, we wouldn’t catch you with
your hand in the cookie jar. And we caught you with your hand in the
cookie jar.” We immediately see that the arguer intends you to believe,
“You are not innocent.” But, of course, that claim is not explicit in the
argument; it is an
implicit conclusion. Making the conclusion explicit, the argument would
look something like this:
1. If you are innocent, we wouldn’t catch you with your hand in the
cookie jar.
2. We did catch you with your hand in the cookie jar.
3. Therefore, you are not innocent.
“There are three ways out of here, and the first two didn’t
work.”
1. There are three ways out of here.
2. The first two ways didn’t work.
3. Therefore, the third way works.
“FixIt Company was included in the exposé last night, and
all those companies are corrupt.”
1. FixIt Company was included in the exposé last night.
2. All the companies included in the exposé are corrupt.
3. Hence, FixIt Company is corrupt.
“If you can’t find a way out, there isn’t one. So, I guess we’re
stuck.”
1. If you can’t find a way out, there isn’t one.
2. You can’t find a way out.
3. Therefore, there isn’t a way out (we’re stuck).
“We knew that, if the screen was red, we could get out. So,
we got out just fine.”
1. If the screen was red, we could get out.
2. The screen was red.
3. Hence, we could get out (and we did!).
Enthymemic arguments are often used strategically to trip up your
reasoning. Sometimes the premise or conclusion is implicit because the
arguer doesn’t want to say it explicitly. If the arguer said it explicitly, it
might sound silly, or it might raise obvious questions that they don’t want to
answer, or you might challenge it. As a critical thinker, your goal is to
identify explicitly all of the premises and conclusions in an argument. In
doing so, you can evaluate the quality of the argument more effectively.
1. “Why should we have to pay for the plumbing bill? We rent this
place; we don’t own it. Plus, the toilet has been causing us problems
since we moved in.”
2. “You must believe she will win the election. Only a crazy person
would believe she won’t. Are you a crazy person?”
3. “Don’t you want a senator that shares your values? Candidate
Johnson shares the values of everyone in this community.”
4. “Isn’t insurance a good thing? Don’t you want good things? How could
you possibly be against the government’s providing insurance for
everyone?”
In argument (1), the disguised claim is the conclusion. The question, “Why
should we have to pay for this plumbing bill?” disguises the claim: “We
should not have to pay for the plumbing bill.” In argument (2), the
disguised claim is a premise. “Are you a crazy person?” disguises the claim:
“You are not a crazy person.” We can make each of these arguments
explicit, as follows:
• “Why should we have to pay for the plumbing bill? We rent this
place; we don’t own it. Plus, the toilet has been causing us problems
since we moved in.”
1. We rent this place, we don’t own it.
2. The toilet has been causing us problems since we moved in.
3. We should not have to pay for the plumbing bill.
• “You must believe she will win the election. Only a crazy person
would believe she won’t. Are you a crazy person?”
1. Only a crazy person would believe she won’t win the election.
2. You are not a crazy person.
3. You must believe she will win the election.
• “Isn’t insurance a good thing? Don’t you want good things? How
could you possibly be against the government’s providing
insurance for everyone?”
1. Insurance is a good thing.
2. You want good things.
3. If anyone offers to provide insurance, you should not be against
his or her doing so.
4. Therefore, you should not be against the government’s
providing insurance for you.
[Note: This argument is also enthymemic; the arguer assumes
premise three is true but does not state it explicitly.]
13. “It’s not over until the music stops. And the band plays on.”
14. “Today will be a good day. Fridays are always good days.”
15. “I’m having a beer because it’s five o’clock somewhere.”
16. “If the server’s down, then we won’t have internet for a
week. It looks like the server’s down, so….”
17. “If Creationism were true, there wouldn’t be any vestigial
organs. Yet, what about the tailbone and appendix?”
18. “If evolution were true, there would be a complete fossil
record of all transitional forms. And I don’t know of any
archaeologist, paleontologist, or geologist who claims the
fossil record is complete.”
19. “Islam is a violent religion. We know this because most
terrorists say explicitly that they are members of Islam.”
20. “Removing goods from the commons stimulates increases
in the stock of what can be owned and limits losses that
occur in tragic commons. … Therein lies a justification for
social structures enshrining a right to remove resources
from the unregulated commons: when resources become
scarce, we need to remove them if we want them to be
there for our children. Or for anyone else’s.”9
Suppose I tell you, “I canceled my vacation to play golf.” This might mean,
“I canceled the vacation on which I planned to play golf,” or it might mean,
“I canceled my vacation in order to play golf.”
Notice that specifying the type of travel in this example will not help. For
instance, I might have said, “I canceled my vacation to play golf.” Since
golf is typically a leisure activity, we might associate it with a vacation. But
I might still have meant, “I canceled my [family] vacation to play golf [with
my friends].” And if you knew I was a competitive golfer, playing golf
would not necessarily indicate an activity associated with leisure or
vacation.
Consider also, “The community has been helping assault victims.” Notice
that, in a case of syntactic ambiguity, the ambiguity does not result from
just one word, but from the way the words are arranged. We hope the
speaker means the community has been helping victims of assault and not
helping to assault victims. Also, imagine reading a novel with the line,
“The child was lost in the hall.” Out of context it is not clear what
happened. It might be that a child went missing in the hall, or that a child
died in the hall, or that a child in the hall could not find her way.
Since sentences can be arranged in a number of ways, there is not a set of
common syntactic ambiguities, but here are a few more examples.
Some Examples of Syntactic Ambiguity:
• “I saw the man with the telescope.” (Do I have the telescope, or does
the man?)
• “Flying planes can be dangerous.” (Is the activity of flying planes
dangerous, or are planes that fly dangerous?)
• “She told her sister she had won the lottery.” (Who won?)
Vagueness
A word or phrase that’s vague has a clear meaning, but not precisely
defined truth conditions. A word’s truth conditions are those conditions on
which we could say for sure that the claim in which the word is used is true
or false. For instance, we all know what it means for someone to be “bald,”
but at what point does someone become bald? How many hairs can be left
and someone still be considered bald? The same goes for “dry.” Sitting at
my desk, I would consider myself dry, but that doesn’t mean there are no
water molecules on my skin. So what does it mean? How many water
molecules must I have on my skin before I am no longer dry?
Consider the word, “obscene.” In 1964, US Supreme Court Justice Potter
Stewart, speaking in his official capacity as Justice, said that he could not
define “obscenity,” “But I know it when I see it.” That a group of highly
educated, highly experienced men could not successfully define “obscenity”
is quite a testimony to the word’s vagueness. (Perhaps if there had been
some women justices?) Still, “obscene” has a clear meaning; we all know
what it means to say something is obscene. But what actually counts as
obscene? That’s more difficult.
tall/short (Napoleon would be short today, but not when he was alive.)
close/far (If you live in Montana, 80 miles is nearby; if you live in NYC, not so much.)
weak/strong (Schwarzenegger won Mr. Universe, but he’s got nothing on a grizzly bear.)
soft/hard (I thought my dog’s fur was soft until I got a cat.)
fat/thin (The painter Rubens would likely think today’s fashion models are emaciated.)
pile or heap (How many grains of sand does it take to make a “pile”? The world may never know.)
Eliminating vagueness is not easy. You will almost always have to find a
different, more precise word. If you cannot find an alternative, you can
sometimes stipulate what you mean by a word. For instance, you might
begin, “By ‘pile’ I mean five or more grains of sand” or “I will use ‘fat’ to
mean someone who is more than twenty pounds over their ideal body
weight. And by ‘ideal body weight,’ I mean the result of the following
calculation ….”
Sometimes an alternative is not available, and stipulation will not help. If
this is the case, your argument will be inescapably vague. If you find
yourself with an inescapably vague argument, you may want to reconsider
offering it. It is unlikely that your conclusion will be very strong. If you’re
stuck with offering an inescapably vague argument, be sure to admit the
vagueness and allow that an unanticipated interpretation may strengthen or
weaken your argument.
Argument Form
After you have identified all the claims of an argument clearly, it is helpful
to organize the claims in the argument so that the premises are clearly
distinct from the conclusion. The typical way of organizing the claims of an
argument is called argument form or standard argument form. To put an
argument into argument form, number the premises, and list them one on
top of the other. Draw a line, called the derivation line, below the last
premise, and then list the conclusion below the line. If you look back
through this chapter, you will notice we have been using standard argument
form already, but here are two more examples:
In argument form:
In argument form:
These are not very good arguments, but they are arguments, nevertheless.
We will discuss what makes an argument good in the remaining sections of
this chapter.
Deductive Arguments
Caution: In logic, claims are never valid or invalid. Only a deductive argument where
the conclusion necessarily follows from the premise(s) is valid. Claims are true or false;
arguments are valid or invalid. All inductive arguments are invalid.
Consider a classic example:

1. All men are mortal.
2. Socrates is a man.
3. Therefore, Socrates is mortal.

Notice that, if these premises are true, the conclusion must be true. This is
not to say that the premises are true; it might turn out that some man is
immortal. Nevertheless, the argument is still valid (the premises still entail
the conclusion even if premise 1 turns out to be false). Validity refers only
to the relationship between the premises and the conclusion; it does not tell
you whether the premises or conclusion are true. This means that validity is
not all there is to a good deductive argument. In addition to validity, in
order to be a good deductive argument, the argument’s premises also have
to be true. If an argument is valid and has true premises, it is sound (see
Figure 2.1 at the end of this section).
Here’s an example of a deductive (and, therefore, valid) argument with
false premises:
If these premises are true, the conclusion cannot be false, so the argument is
valid. But the premises are not true, so these premises give us insufficient
reason to believe the conclusion. Therefore, the argument is unsound.
Validity without soundness can also happen when we have independent
reasons for thinking the conclusion is true:
1. All ducks are green.
2. All green things float.
3. Therefore, all ducks float.

Again, this argument is valid, but the premises are false, so the argument is
unsound. That doesn’t mean we don’t have any reason to believe the
conclusion is true (presumably, you have some evidence that all ducks
float), but that reason is independent of these premises—it comes from
somewhere else.
Of course, validity is much easier to test than soundness. For example,
the following argument is valid:
If the premises are true, the conclusion cannot be false. Are the premises
true? Premise 2 is uncontroversial. But it is very difficult to imagine how
we might test premise 1. Here is another example:
Again, the conclusion follows necessarily from the premises, but are the
premises true? Philosophers and physicists are still wrestling with both
claims. In this book, we will focus on evaluating validity in deductive
arguments. But don’t forget that a deductive argument is good only if it is
sound, which requires both validity and true premises.
1.
1. All Christians are monotheists.
2.
3. Therefore, all Christians believe in one god.
2.
1. …
2. All warm-blooded animals are mammals.
3. So, all cats are warm-blooded.
3.
1. If people are wicked, they should act wickedly.
2. People are wicked,
3. Hence, …
4.
1. Our Sun is a star.
2.
3. Our Sun is composed of helium.
5.
1. You’re either serious or you’re lying.
2.
3. Hence, you’re lying.
6.
1. All cats are mammals.
2. All mammals are warm-blooded.
3. So, …
7.
1. If it rains we stay, and if it snows we go.
2. It either rains or snows.
3. Therefore, …
8.
1. …
2. All ducks are green.
3. So, all ducks float.
9.
1. …
2. All scientists utilize the scientific method.
3. This shows us that all chemists utilize the scientific method.
10.
1. If it is round, then it is red.
2.
3. Therefore, all cars are red.
Inductive Arguments
The second major type of argument is the inductive argument. In an
inductive argument, the conclusion follows from the premise or premises
with some degree of confidence, but the conclusion does not follow
necessarily, as it does in the case of deductive reasoning. For example,
predictions about the weather based on atmospheric data are inductive.
Weather forecasters predict with some degree of confidence, but they are
sometimes wrong. Similarly, a doctor’s belief that a certain medicine will
cure your illness is an inductive inference based on what they know about
the medicine and what they know about your symptoms. Bodies differ, and
some illnesses have the same symptoms despite requiring very different
treatments. So, given your symptoms, a particular medicine is likely to help
you, but not necessarily.
Inductive reasoning is sometimes discussed in terms of probability, and for
readers who like math or card games, this can be a very helpful way to
understand inductive reasoning. For the rest of us, thinking about
induction in terms of probability can help us understand math a little better.
If we think of the conclusion in a deductive argument following with 100
percent probability from its premises, we can think of the conclusion of an
inductive argument as following with any probability less than 100 percent.
When kitchen cleaners, for example, say they kill 99.9 percent of bacteria,
they are saying, “This cleaner will very, very likely kill all the bacteria in
your kitchen, but we can’t be 100 percent sure.”
Probability is measured on a decimal scale between 0 and 1, where fifty
percent probability is P(0.5), 90 percent probability is P(0.9), and 100
percent probability is P(1). If a claim is zero percent probable, it is
impossible. If it is 100 percent probable, it is certain. “Certainty,” for our
purposes, applies only to valid arguments. There are different kinds of
probability, and we will discuss some of those in
Chapter 7. But for here, just note that we are talking about probability given
the evidence or relative to the evidence (which we will call epistemic
probability in Ch. 7). A conclusion can follow from premises with greater
or lesser degree of epistemic probability. If a conclusion follows from some
evidence with certainty, then the argument is valid. Remember that validity
means that, if the premises are true, the conclusion cannot be false—this
doesn’t mean the premises are true, only that the structure of the argument
guarantees the conclusion as long as the premises are true. If the premises
make the conclusion probable to some degree less than P(1), the argument
is invalid, and usually inductive.
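To get comfortable moving between percent talk and the P() notation, it may help to see the conversion written out. Here is a trivial Python sketch (our illustration, not part of the original text):

    # Converting the percentages used above into the decimal P() scale.
    for percent in (0, 50, 75, 90, 99.9, 100):
        print(f"{percent} percent -> P({percent / 100})")
    # 0 percent   -> P(0.0)   (impossible)
    # 50 percent  -> P(0.5)
    # 100 percent -> P(1.0)   (certain)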
Here are two examples of inductive arguments that use probabilities
explicitly:
A.
1. There is a 75 percent chance of getting caught in the rain.
2. If I get caught in the rain, I will be wet and miserable.
3. So, I will probably be wet and miserable.
B.
1. I will give you five dollars, but only if you flip this coin
and it lands on heads twice in a row.
2. The probability of a coin’s landing on heads twice in a row
is about 25 percent.
3. If the probability is 25 percent, then it probably won’t land
on heads twice in a row.
4. Therefore, I will probably not give you five dollars.
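Premise 2 of argument B is itself the product of a little arithmetic: if each flip has a 1/2 chance of landing on heads, two heads in a row has a 1/2 × 1/2 = 1/4 chance. Here is a short Python sketch (ours, not the authors’) that checks the figure by calculation and by simulation:

    import random

    exact = 0.5 * 0.5          # chance of heads on flip 1 times chance on flip 2
    print(exact)               # 0.25, i.e., about 25 percent

    # Estimate the same number by simulating 100,000 pairs of fair flips.
    trials = 100_000
    hits = sum(random.random() < 0.5 and random.random() < 0.5
               for _ in range(trials))
    print(hits / trials)       # approximately 0.25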
Here are two examples of inductive arguments that do not use probabilities
explicitly:
C.
1. Most vegetarians are also in favor of welfare.
2. Nicolai is a vegetarian.
3. So, Nicolai is probably in favor of welfare.
D.
1. Some of the people in my class are working a full-time job.
2. Of those that are working a full-time job, 25 percent are
parents.
3. If you work a full-time job and you are a parent, you will
probably not do well in this class.
4. Joan is in this class.
5. Therefore, Joan will probably not do well in this class.
These arguments are not particularly strong, but they are inductive
nonetheless. A conclusion follows from a premise if, independently of any
other considerations, the evidence makes the conclusion more likely than
P(0). The question then becomes: When is it rational to believe an inductive
argument? How strong does the probability have to be? Is strength enough
to make an inductive argument good?
Very generally, if the degree is high enough relative to the premises, and
if all of the premises are true, then the conclusion likely or probably is true.
If the arguer achieves her goal and the conclusion follows from the
premises with a high degree of likelihood, the argument is strong. On the
other hand, if the conclusion does not follow with a high degree of
probability, the argument is weak. If the argument is strong and the
premises are true, the argument is cogent (kō-jent). An inductive argument
is good only if it is cogent (see Figure 2.1).
It is important to understand why the strength of an inductive argument
has nothing to do with the truth of the premises. As we said with deductive
arguments, we do not mean that the premises are true, only that, if they are,
the conclusion is also likely to be true. However, in an inductive argument,
it is still possible that the conclusion is false even if the premises are true.
This means that all inductive arguments are invalid.
But just because an argument is invalid, this doesn’t mean it is a bad
argument. It simply means we cannot evaluate it as if it were deductive. We
need to see whether it is cogent. It may turn out to be a good inductive
argument, even if invalid. So, just as validity is not the only feature needed
to make a deductive argument good, strength is not the only feature needed
to make an inductive argument good. In both types of arguments, the
premises must also be true.
Consider an example. Jamie’s friend ordered a book from a certain book
shop, assuming it would arrive, as most books do, undamaged. Jamie’s
friend reasoned like this:
1. In the past, when I received a book from this book shop, it was
undamaged.
2. My current book order is similar to my past book orders.
3. So, I will probably receive this new book undamaged.
The conclusion is, “I will probably receive this new book undamaged.”
Provided the premises are true, the conclusion is probably true, but it is not
certainly true. It makes sense to conclude that his book would arrive
undamaged, given his past experience with the book shop. But the truth
concerning his success in getting a book in good shape in the past does not
guarantee that he will in the future receive a book in good shape. It’s always
possible that the book is damaged in shipping, or that someone at the shop
accidentally rips the cover without knowing it prior to shipping, so the
conclusion is merely probable or likely true. In order to let others know this
conclusion follows only with some degree of probability, and not with
certainty, we include phrases like, “it is likely that,” “probably,” “there’s a
good chance that.” In fact, Jamie’s friend did get the book in good shape,
but he needn’t necessarily have received it that way based on his premises.
Similarly, consider the kind of reasoning someone may have utilized just
before Joel Schumacher’s Batman and Robin came out in 1997. Because of
the wild financial success of Batman Forever in 1995, someone might have
concluded that Batman and Robin would be just as successful. No one
would bet their life on this conclusion, but the filmmakers risked millions
on it. Surprisingly—and to the dismay of the investors and many Batman
fans—Batman and Robin was a failure and heavily criticized by fans and
movie buffs alike. This is an example of inductive reasoning where it
seemed as if the conclusion was true but turned out to be false.
So a good argument requires two things: the premises must support the
conclusion (validity for a deductive argument, strength for an inductive one),
and the premises must be true. If either of these conditions (or both) is
missing, the argument is bad and
should be rejected. We can draw an analogy with the battery on a car. A car
battery has two terminals. A car will start only if cables are connected to
both terminals. If one or both of the cables is disconnected, the car won’t
start. The same goes for an argument: There are two conditions, and both
must be met in order to successfully support a conclusion. Consider the
following diagram of the battery analogy:
Figure 2.1 The Battery Analogy
Strong claims?
Sometimes, people will say “That’s a strong claim you’re making!” and this
might suggest that our use of “strong” also applies to claims. But this use of
“strong” is very different. When people (often philosophers) say a claim is
strong, they mean the claim is very difficult to support. For instance, “All
gold found in nature is yellow.” Since we have never seen all the natural
gold, it is difficult to support the claim that all natural gold is yellow. When
people say a claim is weak, they mean the claim is fairly easy to support; it
might even be trivial. For example, “All the documented gold found in
nature is yellow.” To support this claim, we would only need to gather the
records documenting natural gold.
So, for our purposes, the adjective “strong” is reserved for arguments.
Claims are true or false; inductive arguments are strong or weak, cogent or
uncogent; deductive arguments are valid or invalid, sound or unsound; and
arguments are good or bad.
1.
1. Rover the dog bit me last year when I tried to pet him.
2. Rover has been castrated and has been much calmer in
the past three months.
3. Rover will bite me when I try to pet him today.
2.
1. Jones had sex with Brown’s wife.
2. Brown told Jones he was going to kill Jones.
3. Jones was murdered.
4. Brown is the murderer.
3.
1. Watches are complex, and they have watchmakers.
2. The universe is complex like a watch.
3. The universe has a universe-maker.
4.
1. The sign on Interstate 95 says the town is 1 mile away.
2. The town is, in fact, 1 mile away.
[Consider: Would the argument be stronger if we added the
premise: “Road signs are usually accurate”?]
5.
1. Rajesh loves Bobbi.
2. Bobbi loves Neha.
3. Rajesh loves Neha.
6.
1. My friend knows a person of type X who is rude.
2. My sister knows a person of type X who is rude.
3. So, all people of type X are rude.
[Note how quickly such arguments motivate racism, sexism,
and ageism.]
7.
1. Almost all of the beans in this bag are red.
2. Hence, the next one I pull out definitely will not be red.
8.
1. The Tigers beat all of the other teams during the season.
2. The Tigers have the best overall stats of any team.
3. The championship game is about to begin, and all of the
Tiger teammates are in good health.
4. It is very likely that the Tigers will win the championship
game.
9.
1. The Tigers beat all of the other teams during the season.
2. The Tigers have the best overall stats of any team.
3. The championship game is about to begin, and all of the
Tiger teammates are in good health.
4. The Tigers’ quarterback just broke his leg.
5. But the Tigers will still probably win the championship
game.
10.
1. Paulo is a Democrat.
2. In the past, many Democrats have voted for bill X.
3. So, Paulo will vote for bill X.
Simple and Complex Arguments
In
Chapter 1, we introduced the basics of reasoning, which involved defining
claims, operators, quantifiers, evidence, and arguments. In this chapter, we
introduced some ways of teasing arguments out of their larger context and
distinguished two types of arguments. All of the arguments we’ve seen so
far have been simple arguments. A simple argument is an argument with
only one conclusion.
However, many arguments have two or more conclusions, and one
conclusion will serve as the premise for a further conclusion. We will call
arguments with more than one conclusion, complex arguments. Consider
the following simple argument:
1. It is raining.
2. If it is raining, then it is above freezing.
3. It is above freezing.

As critical thinkers, you will not always want evidence only for the truth of a
conclusion; sometimes you will also want to (a) provide evidence
for the truth of the premises, or (b) use that conclusion in another argument.
In fact, most discussions and debates involve both (a) and (b). Here is a
complex argument that builds on the simple argument above:
1. It is raining.
2. If it is raining, then it is above freezing.
3. It is above freezing.
4. If it is above freezing, it is too wet to sled.
5. It is too wet to sled.
One important thing to notice about this complex argument is that there is
an ultimate conclusion, or final conclusion that the speaker wants to make:
“It is too wet to sled,” or, in more disappointing terms, “We can’t go
sledding.” In most complex arguments, there will be an ultimate conclusion
that all other arguments are leading to. This is the main point of the
conversation. It is easy to get distracted with the infinite variety of things
that can go wrong with an argument and the attempt to support claims with
evidence. But try not to lose track of the ultimate conclusion!
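For readers who like to program, the structure of a complex argument can be made vivid in code: each conditional premise licenses a step from one claim to the next, and chaining the steps yields the ultimate conclusion. A minimal Python sketch (ours; the claim labels are just stand-ins):

    # Premises of the sledding argument above, as a fact and two if-then rules.
    facts = {"it is raining"}                                    # premise 1
    rules = [("it is raining", "it is above freezing"),          # premise 2
             ("it is above freezing", "it is too wet to sled")]  # premise 4

    # Repeatedly apply the rules: whenever an antecedent is known,
    # derive its consequent (claims 3 and 5 are derived this way).
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True

    print("it is too wet to sled" in facts)   # True: the ultimate conclusion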
Now consider a very different argument:

1. Grass is green.
2. The sky is blue.
3. The fool is on the hill at midnight.
What should we make of an argument like this? Clearly, it’s nonsense. But,
since it is being offered as an argument, we must (for the sake of charity …
ahem, The Godfather Principle) treat it like one. Using the strategy above,
and stipulating that there is no extraneous material, we can reason as
follows:
Is the argument valid? No, the premises do not guarantee the conclusion.
Is it at least strong? No, the premises do not strongly support the conclusion.
In fact, the conclusion follows with a degree of probability much less than
50 percent; it follows with zero probability. Given the premises, the
conclusion is 0 percent likely. (Its relative probability is 0 percent; this
doesn’t mean that its objective probability is 0 percent—we may have
independent evidence that it is more probable than zero—that is, assuming
we can figure out what it means.) Therefore, though the premises are
probably true, since the argument is weak, it is a bad inductive argument.
Exercises
Real-Life Examples
I. A Logic Problem
The following problem can be worked out individually or as a group
activity. Try it to test your deductive reasoning abilities.
Six friends—Andy, Ben, Carol, Dawn, Edith, and Frank—are snacking
around a round table. Each has exactly one of the following snacks, all of
which are fruits except one: apples, bananas, cranberries, dates,
eggplant (a vegetable), or figs. From the information provided below, try to
figure out where each person sat and what snack they had. Try to construct
clues using complex claims and deductive arguments:
• The man to the right of the date-eater will only eat vegetables.
• Dawn is directly across from the eggplant-eater.
• Andy loves monkey food and is to the left of Dawn.
• The apple-eater, who is two seats over from Andy, is across from the
cranberry-eater.
• Frank is across from the fig-eater and to the right of Edith.
• Carol is allergic to apples.
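If you get stuck, or want to check your answer, the puzzle can also be solved by brute force. The Python sketch below is ours, and it bakes in some interpretive assumptions you may read differently: seats 0–5 run clockwise, “to the right of” means the next seat clockwise, “to the left of” means the next seat counterclockwise, “across” means three seats away, and “monkey food” means bananas. Solutions that differ only by rotating everyone around the table will all be printed.

    from itertools import permutations

    people = ["Andy", "Ben", "Carol", "Dawn", "Edith", "Frank"]
    snacks = ["apples", "bananas", "cranberries", "dates", "eggplant", "figs"]
    men = {"Andy", "Ben", "Frank"}

    def across(i): return (i + 3) % 6
    def right(i): return (i + 1) % 6   # assumption: "right" = next seat clockwise
    def left(i): return (i - 1) % 6    # assumption: "left" = next seat counterclockwise

    for seating in permutations(people):              # seating[i] = person in seat i
        seat = {p: i for i, p in enumerate(seating)}
        for assignment in permutations(snacks):
            snack = dict(zip(people, assignment))     # person -> snack
            eater = {v: k for k, v in snack.items()}  # snack -> person
            if snack["Andy"] != "bananas": continue                        # clue 3
            if seat["Andy"] != left(seat["Dawn"]): continue                # clue 3
            beside = seating[right(seat[eater["dates"]])]
            if beside not in men or snack[beside] != "eggplant": continue  # clue 1
            if seat[eater["eggplant"]] != across(seat["Dawn"]): continue   # clue 2
            a = seat[eater["apples"]]
            if (a - seat["Andy"]) % 6 not in (2, 4): continue              # clue 4
            if seat[eater["cranberries"]] != across(a): continue           # clue 4
            if seat[eater["figs"]] != across(seat["Frank"]): continue      # clue 5
            if seat["Frank"] != right(seat["Edith"]): continue             # clue 5
            if snack["Carol"] == "apples": continue                        # clue 6
            print(seating, snack)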
• You would not like that coffee shop. The baristas do not understand
customer service. The atmosphere is mediocre. The Wi-Fi signal is
weak. And there are very few comfortable places to sit.
• “Removing goods from the commons stimulates increases in the
stock of what can be owned and limits losses that occur in tragic
commons. Appropriation replaces a negative-sum with a positive-
sum game. Therein lies the justification for social structures
enshrining a right to remove resources from the unregulated
commons: when resources become scarce, we need to remove them if
we want them to be there for our children. Or anyone else’s.”
2 David Schmidtz and Jason Brennan, A Brief History of Liberty (Malden, MA: Wiley-Blackwell,
2010), 6–14.
3 Lisa Fraser, Making Your Mark, 9th ed. (Toronto, ON: LDF Publishing, 2009), p. 3.
4 Lisa Fraser, Making Your Mark, 9th ed. (Toronto, ON: LDF Publishing, 2009), p. 7.
5 Quoted in Jay Dixit, “Decision ’08: Reading between the Lines,” Psychology Today, July 1,
2007, http://www.psychologytoday.com/collections/201106/read-my-lips-politispeak/between-
the-lines.
6 Ibid.
7 Steve Carell as Michael Scott in the television show The Office, “Initiation,” Season 3, Ep. 5,
2006.
8 From the television show Better Off Ted, “Jabberwocky,” Season 1, Ep. 12, August 2009.
9 David Schmidtz, “Why Isn’t Everyone Destitute?,” in David Schmidtz and Robert Goodin,
Social Welfare and Individual Responsibility (Cambridge, UK: Cambridge University Press,
1998), p. 36.
10 In his dialogue Statesman, Plato writes that Socrates gives this definition of human (266e).
Diogenes Laertius tells us that, upon hearing this, Diogenes of Sinope took a plucked chicken
into Plato’s Academy and said, “Here is Plato’s [human]” (VI.40).
11 This example comes from George Mavrodes’s book Belief in God: A Study in the Epistemology
of Religion (New York: Random House, 1970), p. 22. If you can find it, we highly recommend
this as an excellent example of critical thinking applied to religion.
12 This was a “joint resolution” of the US Congress called “Authorization for Use of Military
Force Against Iraq Resolution of 2002.”
part two
Deductive reasoning
In Chapters 3–6, we explain three methods of deductive reasoning: categorical
logic (Chapter 3), truth tables (Chapter 5), and basic propositional logic
(Chapters 4–6). Categorical logic helps us reason deductively about quantified
claims, that is, the all, some, and none claims we discussed in
Chapter 1 and to understand the various logical relationships among
these quantified, “categorical” (category-based) claims. Truth tables
show the logical relationships among the parts of complex claims.
They also help demonstrate the concept of validity, provide intuitive
tests for validity, and provide a foundation for understanding
propositional logic. Basic propositional logic is a more powerful
logical system than categorical logic, and it helps us to reason
deductively about a wide variety of non-categorical claims. We will
explain how to translate natural language into propositional logic and
how to test for validity in propositional logic.
3
Thinking and reasoning with
categories
We explain the concept of a category, the four standard types of categorical claim using
the quantifiers we discussed in
Chapter 1, and the Venn diagram method for testing categorical arguments for
validity. We also explain the traditional square of opposition, which shows the
logical relationships between the different types of categorical claims, and
discuss the limitations of categorical logic.
Categories
Categorical logic is a form of deductive reasoning that allows us to
determine whether claims about categories follow from other claims with
certainty. In other words, we are able to identify whether any argument
formulated with standard-form categorical claims is valid or not (see
Chapter 2 for a refresher on validity).
There are certain inferences we can make with standard-form categorical
claims that are immediate. For example, it’s true that “No cats are dogs,” so
we know immediately that “Some cats are dogs” is false. Also, given that
“No cats are dogs” is true, we can immediately infer that “No dogs are cats”
is not only true as well, but it also communicates the same thing as “No cats
are dogs.” Categorical logic provides us with a set of tools that not only
explains why these inferences are valid but helps us reason about more
complicated categorical relationships. To begin, we need to know a bit about
what categories are.
Humans are natural classifiers, sorting all kinds of things into categories
so as to better understand, explain, predict, and control reality. A category is
a class, group, or set containing things (members, instances, individuals,
elements) that share some feature or characteristic in common. We can
construct a category of things that are dogs, a category of things that are
humans, a category of things that are red, a category of things that are left
shoes, a category of things that are red left shoes, and on and on. In fact, it’s
possible to classify anything you can think of into one category or another.
Consider this claim: “I think I saw a black cat sitting on the window sill
of that vacant house.” It is possible to categorize the various things in this
claim: there is the category of things that are cats, the category of things
that are black, and the category of things that are window sills.
There is also the category of things that are that vacant house, which
contains only one thing, namely, that particular vacant house. There is the
one-member category of things that are me, which is doing the thinking.
An easy way to visualize categories of things is to draw a circle, which
represents the category, and then to write the members inside. Also, we can
use a capital letter to represent the category itself as in, say, C for the
category of cats, B for the category of black things, H for the category of
houses, and so on. Usually, people will choose the first letter of the word that
designates the category. Just be sure you know what category the letter
symbolizes.
Relating Categories to One Another
Not only do we categorize things, we make claims about the relationships
between and among categories. For example, consider the category of things
that are snakes and the category of things that are venomous. Now, think of
all the things that are snakes, such as pythons, boa constrictors, and vipers,
and all of the things that are venomous, such as jellyfishes, stingrays, and
spiders. In trying to represent reality accurately, we might assert that some
members of the category of snakes belong in, or are in, the category of things
that are venomous, which turns out to be true. Think of rattlesnakes or asps,
which are venomous snakes. Using circles again to represent categories of
things, little rectangles to represent the things or members in the categories,
and capital letters to represent categories, we can visualize the claim, “Some
snakes are venomous” like this:
Notice that there are other snakes that are non-venomous, like green snakes
and garden snakes, and there are other venomous things, like scorpions and
one of Rob’s ex-girlfriends (so he says—a little critical thinking humor).
Further, not only can we say that “Some venomous things are snakes”—and,
essentially, we are saying the same thing as “Some snakes are venomous”—
but it is also the case that both claims are true when we look at reality.
Therefore, the diagram expresses a claim, and, in this case, the claim is true.
The diagram we use is modeled after what are known as Venn diagrams,
named after the famous logician, John Venn (1834–1923). We will continue
to talk more about Venn diagrams as this chapter progresses.
We can also assert that two categories of things have nothing to do with
one another, that is, they have no things (instances, individuals, members,
elements) in common. For example, consider the claim, “No dogs are cats.”
Using circles again to represent categories of things, little rectangles to
represent the members in the categories, capital letters to represent
categories, and adding a shaded black area to represent a void or nothing, we
can visualize the claim “No dogs are cats” like this:
We’ll explain each type, give several examples of claims translated into each
type, and then show how each example can be represented with a Venn
diagram. To avoid confusion, and following classical presentations, we will
drop the plural “s” on As and Bs, and just write “All A are B,” which means
all things that are A are things that are B.
All / No / Some category of things are / are not some other category of things
All / No / Some things that are junk food are / are not things that are unhealthy for people
All things that are junk food are things that are unhealthy for people.
Some things that are junk food are things that are unhealthy for people.
Some things that are junk food are not things that are unhealthy for people.
No things that are junk food are things that are unhealthy for people.
So, how do we know which quantifier to use? There aren’t strict rules for
translating, but there are some useful guidelines. For instance, in “Junk food
is unhealthy,” does the speaker mean all junk food or just some? If you
cannot ask the speaker to be more specific, this is a judgment call. The
standard default is to treat it as all, since it seems to refer to junk food
generally. But this is not always the most charitable reading, since all is a
very strong quantifier that is difficult to justify. But for our purposes, treat all
claims without explicit quantifiers as universal claims (“all” or “no”). We’ll
say more about translating as we go through the material.
The shaded area indicates that nothing in the category of lawyers is not in the
category of jerks; in other words, everything in the category of lawyers is
also in the category of jerks. Note that the diagram merely pictorially
represents the categorical claim—it does not indicate whether the claim is
true or not. We’re obviously joking (a bit) about all lawyers being jerks. We
know one or two lawyers who aren’t jerks.
Now consider the taxonomic ranking of organisms that is used by
biologists all over the world, the one that goes from domain, down to
kingdom, phylum, class, order, family, genus, and species. We know that
humans are mammals, and mammals are animals. We use A-claims normally
to speak about these relationships when we say, “All humans are mammals”
and “All mammals are animals,” and we can Venn diagram these claims like
this:
E-claim: No A are B
A standard-form categorical E-claim has this form: No A are B (there are no
things that are A that are also things that are B), and its diagram is drawn like
this:
Notice that the football-looking intersection between categories A and B is
shaded. This diagram expresses that there is no member of A (shaded area =
nothing) that is a member of B. In addition, saying that No A are B is saying
the same thing as No B are A, so you can switch the subject and predicate in
an E-claim and preserve the meaning, as well as whether the claim is true or
false.
Consider the E-claim “No women are one-thousand feet tall.” The Venn
diagram for this claim looks like this:
The shaded area indicates that there are no women in the category of things
that are one-thousand feet tall, and it just so happens that this claim is true.
Further, to say, “No things in the category of one-thousand feet tall are
women” is saying the same thing as, “No women are one-thousand feet tall,”
and it is, likewise, true. Finally, the Venn diagram for “No things in the
category of one-thousand feet tall are women” looks exactly like the one for
“No women are one-thousand feet tall.”
Considering again the taxonomic ranking of organisms, we know that
humans are a different species from dogs, and we might even state this using
an E-claim form, “No humans are dogs.” The Venn diagram looks like this:
Again, given that “No humans are dogs” is true, “No dogs are humans” is
also true, “No humans are dogs” is saying the exact same thing as “No dogs
are humans,” and the Venn diagram for both claims looks exactly the same.
In a nutshell:
Contrary claims are never both true.
Contrary claims can both be false.
If one contrary claim is false, it is undetermined whether the other is true
or false.
In a nutshell:
Subcontrary claims are never both false.
Subcontrary claims can both be true.
If one subcontrary claim is true, the other is undetermined.
Further, when you have a false I-claim or O-claim, you can infer the truth
values of all corresponding claims in the square. For example, we know that
“Some people are immortal” is false, so:
Conversion
Conversion is simply the process of taking one of the standard-form
categorical claims and switching the subject and the predicate:
The converse of an E-claim (No A are B) and the converse of an I-claim
(Some A are B) are logically equivalent to the original claims; conversion
preserves truth value: “No cats are dogs” is true and implies the truth of
“No dogs are cats”; and “Some snakes are black things” is also true and
implies the truth of “Some black things are snakes.”
However, the converse of an A-claim (All A are B) and the converse of an
O-claim (Some A are not B) are not equivalent to the originals; conversion
does not preserve truth value: “All cats are mammals” is true, but “All mammals are cats” is
false, and, not incidentally, communicates something completely different.
Also, if “Some students on scholarship are not sophomores” is true, then we
can’t say whether “Some sophomores are not students on scholarship” is true
or not, and it communicates something completely different.
Obversion
Obversion is the process of taking one of the standard-form categorical
claims, changing it from the affirmative to the negative (A-claim becomes E-
claim and vice versa, and I-claim becomes O-claim and vice versa), and
replacing the predicate term with its complementary term (the term that
names every member that is not in the original category, or the “non” of the
term):
“All cats are mammals” is true and means the same as “No cats are non-
mammals”
“No cats are dogs” is true and says the same thing as “All cats are non-
dogs”
“Some snakes are cows” is false and means the same as “Some snakes are
not non-cows”
“Some snakes are not cows” is true and means the same as “Some snakes
are non-cows”
Contraposition
Contraposition is the process of taking one of the standard-form categorical
claims, swapping the subject and the predicate, and replacing both with their
complementary terms:
All A are B becomes All non-B are non-A (the contrapositive of All A are
B)
No A are B becomes No non-B are non-A (the contrapositive of No A are
B)
Some A are B becomes Some non-B are non-A (the contrapositive of
Some A are B)
Some A are not B becomes Some non-B are not non-A (the contrapositive
of Some A are not B)
The contrapositive of an A-claim (All A are B) and the contrapositive of an
O-claim (Some A are not B) are equivalent to the original claims, preserving
meaning and truth value: “All cats are mammals” is true and says the same
thing as “All non-mammals are non-cats,” while “Some snakes are not black
things” is also true and says
the same thing as “Some non-black things are not non-snakes.” (That’s a
mouthful and makes your head hurt … we know.)
However, the contrapositive of an E-claim (No A are B) and the contrapositive
of an I-claim (Some A are B) are not equivalent to the originals; contraposition
does not preserve meaning or truth value: “No cats are dogs” is true, but “No non-dogs
are non-cats” is not only false, but also communicates something completely
different. And if “Some students on scholarship are sophomores” is true, then
we can’t say whether “Some non-sophomores are non-students on
scholarship” is true or not, and it communicates something completely
different.
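The three operations are mechanical enough to hand to a computer, which can be a useful way to drill them. Here is a small Python sketch (ours; the claim encoding is just illustrative):

    # Claims encoded as (form, subject, predicate), where the form is
    # "A" (All S are P), "E" (No S are P), "I" (Some S are P), or
    # "O" (Some S are not P).

    def complement(term):
        # "cats" -> "non-cats", and "non-cats" -> "cats"
        return term[4:] if term.startswith("non-") else "non-" + term

    def converse(claim):
        form, s, p = claim                     # switch the subject and the predicate
        return (form, p, s)

    def obverse(claim):
        form, s, p = claim                     # change the quality ...
        flip = {"A": "E", "E": "A", "I": "O", "O": "I"}
        return (flip[form], s, complement(p))  # ... and complement the predicate

    def contrapositive(claim):
        form, s, p = claim                     # swap and complement both terms
        return (form, complement(p), complement(s))

    print(converse(("E", "cats", "dogs")))           # ('E', 'dogs', 'cats')
    print(obverse(("A", "cats", "mammals")))         # ('E', 'cats', 'non-mammals')
    print(contrapositive(("A", "cats", "mammals")))  # ('A', 'non-mammals', 'non-cats')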
Translation Tips
You’ve already had a little practice translating various English phrases into
categorical claims, but there are certainly more complicated English phrases
to translate. In this section, we will explain how to translate some potentially
difficult phrases.
“Most”
What happens when a speaker means most instead of just some? Most is a
strong quantifier, and we saw in the second chapter that the difference in
meaning between most and some can determine the difference between a
strong and a weak inductive argument. However, for the purpose of
categorical logic, most and some will look the same on a Venn diagram.
Since we don’t know exactly how many there are, the best we can do is
indicate that some are, recognizing that some is consistent with most. For
instance, if we know that most Labour Party members are liberal, then we
know that some are. Thus, for the purposes of categorical logic, “Most A are
B” is categorically equivalent to “Some A are B.” The same does not hold
true going the other direction. If we know that some Labour Party members
are conservative, we do not know whether most are. It is still possible that
most are, but we cannot tell just from knowing that some are.
“There are times when I am sad” becomes “Some times are times when I am sad”
“At no time should he be allowed in” becomes “No times are times when he should be allowed in”
“Whenever I eat, I usually burp” becomes “All times when I eat are times when I usually burp”
“Every time I call her, she hangs up” becomes “All times I call her are times when she hangs up”
“Ghosts don’t exist” becomes “No places are places where ghosts exist”
“Gravity is everywhere” becomes “All places are places where there is gravity”
“The keys are someplace” becomes “Some places are places where the keys are”
“Here is where we are on the map” becomes “All places that are here are places where we are on the map”
Conditional Claims
Conditional claims have the “If …, then …” format, as in “If it is a cat, then
it is a mammal.” The way to translate these claims is fairly straightforward:
They are translated as A-claims with the antecedent taking the “All A” spot,
and the consequent taking the “…are B” spot. So, the claim, “If it is a
bluegill, then it is a fish” becomes “All things that are bluegills are things
that are fish.”
This does not always work with events, though. For instance, the claim,
“If it is raining, then the sidewalk is wet” becomes “All events that are
raining are events in which the sidewalk is wet.” This does not express what
we would like, since the event of its raining is not the same kind of event as
the sidewalk’s being wet. Be cautious of categorical event claims. This is one
of the limitations of categorical logic.
Both premises are A-claims and it is fairly easy to see that, if the premises
are true, the conclusion must be true, so the argument is deductively valid.
But what happens when the quantifiers aren’t so clear? Consider this next
argument:
To evaluate arguments like this, we’ll use a method called the Venn diagram
method of testing for validity, named after logician and mathematician John
Venn. We can use the Venn diagram method to evaluate arguments that meet
exactly two conditions:

1. The argument has exactly two premises and one conclusion, all of which are standard-form categorical claims.
2. The argument contains exactly three categories, each of which appears in exactly two of the three claims.
If an argument does not meet these conditions, the traditional Venn diagram
method cannot be used to test for validity. Some logicians have devised more
complex diagramming methods to test other arguments, but we will not
discuss them here.
Once you discover that your argument meets the two conditions of a
categorical syllogism, you can evaluate it using a Venn diagram. The Venn
diagram method has two steps:

1. Diagram all of the premises.
2. Check whether diagramming the premises has thereby diagrammed the conclusion. If it has, the argument is valid; if it has not, the argument is invalid.
1. No dogs are meowers. (D, M)
2. All cats are meowers. (C, M)
3. Therefore, no dogs are cats. (D, C)
In order to evaluate this argument with the Venn diagram method, begin by
drawing three overlapping circles, one for each category, D, M, and C. Then
diagram both premises:
For the first premise, we black out the space between D and M. C is affected,
but don’t worry about that yet. For the second premise, everything that is a
cat is pushed into M. Yes, part of M is already blacked out, but that’s okay.
That just means everything that is a cat is in the M circle, and not in the C or
the D circles.
Now that both premises are diagrammed, check to see whether the
conclusion (“No dogs are cats.”) is diagrammed. DO NOT diagram the
conclusion if it is not diagrammed. If the argument is valid, then the
conclusion will already be diagrammed.
Is the conclusion diagrammed above? Yes. The football-shaped area
between D and C is completely blacked out, indicating that there is no
member of the dog category that is a member of the cat category; thus, no
dogs are cats. Since diagramming the premises results in the diagram of the
conclusion, the argument is valid. If diagramming the premises does not
result in the diagram of the conclusion, the argument is invalid. Don’t be
thrown off by the fact that all of C and the rest of the football-shaped area
between D and M are shaded in. All that matters is whether the football-
shaped area between D and C is completely shaded, which it is.
Now consider this argument. We can Venn diagram it to check if it’s valid
or not.
1. All mothers are people with uteruses. (M, U)
2. Some people with uteruses are bankers. (U, B)
3. Therefore, some mothers are bankers. (M, B)
Now we ask whether the conclusion is diagrammed. In this case, the answer
is no. Even though some people with uteruses are bankers—indicated by the
X—we have no idea whether the X applies just to women with uteruses or
also people with uteruses who are mothers. This is indicated by placing the X
on the M line. Since the conclusion is not clearly diagrammed, this argument
is invalid.
Here is another way to think about it visually: given that half of the
football-shaped area that intersects M and U is not shaded, we must place the X
on the line. Let’s do another diagram, this time of a valid argument, to explain
what is meant here:

1. All mothers are people with uteruses. (M, U)
2. Some bankers are mothers. (B, M)
3. Therefore, some bankers are people with uteruses. (B, U)
Since the half of the football-shaped area that intersects M and B is shaded,
we must place the X in the part of the football-shaped area that is not shaded.
Thus, we can see from the diagram that “Some bankers have uteruses” is
indeed diagrammed, so that conclusion does in fact follow and the argument
is valid.
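The Venn diagram method itself can be automated. In the Python sketch below (ours, and simplified), a “model” records which of the eight regions of a three-circle diagram are occupied, and an argument is valid just in case no model makes all the premises true and the conclusion false:

    from itertools import product

    def true_in(claim, model):
        # claim = (form, subject, predicate); model = the set of occupied
        # regions, each region a frozenset of the categories it lies inside.
        form, s, p = claim
        if form == "All":       # no occupied region is inside s but outside p
            return all(p in r for r in model if s in r)
        if form == "No":        # no occupied region is inside both s and p
            return not any(s in r and p in r for r in model)
        if form == "Some":      # some occupied region is inside both s and p
            return any(s in r and p in r for r in model)
        if form == "Some-not":  # some occupied region is inside s but outside p
            return any(s in r and p not in r for r in model)

    def valid(premises, conclusion, cats):
        regions = [frozenset(c for c, inside in zip(cats, bits) if inside)
                   for bits in product([True, False], repeat=3)]
        for bits in product([True, False], repeat=8):  # every way to occupy regions
            model = {r for r, occupied in zip(regions, bits) if occupied}
            if all(true_in(pr, model) for pr in premises) and not true_in(conclusion, model):
                return False  # a counterexample: premises true, conclusion false
        return True

    # No dogs are meowers; all cats are meowers; so, no dogs are cats.
    print(valid([("No", "D", "M"), ("All", "C", "M")], ("No", "D", "C"), "DMC"))    # True
    # All mothers have uteruses; some with uteruses are bankers; so, some mothers are bankers.
    print(valid([("All", "M", "U"), ("Some", "U", "B")], ("Some", "M", "B"), "MUB"))  # False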
Let’s look at one more example:
In this argument, we cannot tell whether the rodents in the mammal category
are also included in the fin category—so we put an X on the line in the
football-shaped area where R and M intersect. Similarly, we cannot tell
whether the mammals in the fin category are also included in the rodent
category—so we put an X on the line in the football-shaped area where R
and F intersect. Since these questions are not answered by the diagram, the
conclusion is not clearly diagrammed; therefore, the argument is invalid.
The other important point to mention about valid syllogisms is that the
middle term must be distributed in at least one premise. The middle term is
the term that appears in both of the premises of the syllogism, but not in the
conclusion. Here’s an example:

1. All Christians are monotheists.
2. All monotheists believe in one god.
3. Therefore, all Christians believe in one god.

Here, monotheists is the middle term. Our example above
about mothers, bankers, and people with uteruses shows that the term
mothers is the middle term:

1. All mothers are people with uteruses. (M, U)
2. Some bankers are mothers. (B, M)
3. Therefore, some bankers are people with uteruses. (B, U)
A distributed term is one where all members of the term’s class (not just
some) are affected by the claim. Another way of saying this is that, when a
term X is distributed to another term Y, then all members in the category
denoted by the term X are located in, or predicated as a part of the category
denoted by the term Y. So:
• An A-claim “distributes” the subject term to the predicate term, but not
the reverse. “All cats are mammals” means that all members in the
category of cats are distributed to (located in, predicated as a part of
the category of) mammals, but not the reverse.
• An E-claim distributes the subject term to the predicate term, and vice
versa—it’s bi-directional in its distribution. “No cats are dogs” means
that all members in the category of cats are not distributed to all
members in the category of dogs, and vice versa.
• Both of the terms in an I-claim are undistributed. “Some democrats are
conservative” means exactly what it says; some but not all members of
the category of democrats are distributed to the category of
conservatives, and vice versa. (They might be, since some is consistent
with all, but some, by itself, doesn’t imply all.)
• In an O-claim only the predicate is distributed. “Some fire hydrants are
not red things” means that all members in the category of red things
are not distributed to some of the members in the category of fire
hydrants (we know, this sounds strange, but it’s true).
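In summary form (a sketch of ours, condensing the bullets above into a lookup table):

    # Which terms each standard-form claim distributes.
    distribution = {
        "A": {"subject"},                # All A are B
        "E": {"subject", "predicate"},   # No A are B
        "I": set(),                      # Some A are B
        "O": {"predicate"},              # Some A are not B
    }
    # For example, in the O-claim "Some fire hydrants are not red things,"
    # only the predicate ("red things") is distributed:
    print(distribution["O"])             # {'predicate'}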
See the inside cover for a list of all fifteen unconditionally valid syllogism
forms.
1. All frigs are pracks. All pracks are dredas. So, all frigs are
dredas.
2. Some birds are white. White things are not black. Therefore,
some birds are not black.
3. A few rock stars are really nice people. Alice Cooper is a
rock star. Hence, Alice Cooper is a really nice person.
4. Democrats are liberals. Some liberals are not
environmentalists. Therefore, some Democrats are
environmentalists.
5. All CFCs (chlorofluorocarbons) deplete ozone molecules.
CFCs are things that are produced by humans. Therefore,
some things produced by humans deplete ozone molecules.
6. Black holes produce intense gravity. We haven’t been able
to study any black holes closely. Therefore, we haven’t
been able to study closely anything that produces intense
gravity.
7. No drugs that can be used as medical treatment should be
outlawed. Marijuana can be used as a medical treatment.
Thus, marijuana should not be outlawed.
8. No genetically engineered foods are subsidized by the
government. Some traditional foods are not subsidized by
the government. Hence, some traditional foods are
genetically engineered foods.
9. People who trust conspiracy theories are not good
witnesses. A few clinically sane people trust conspiracy
theories. So, some clinically sane people are not good
witnesses.
10. Some lies are not immoral acts. Some immoral acts are
fairly harmless. So, some lies are fairly harmless.
But can we put an X here, even though the premises do not indicate one?
Aristotle argued that the premises implicitly indicate the existence of at least
one member in the category. If it is true that “All politicians are public
figures,” there must be some politicians and some public figures in existence.
The problem is that it is not clear that we should ever make an existential
assumption. You might think it is irrelevant. To test for validity, we simply
assume the premises are true, so even if our claims are about mythical
creatures, we assume they are true in order to test for validity. In this case,
we make the existential assumption, then show that one of the premises is
false. But this does not always work. Consider this argument:
[The argument and its Venn diagram, relating categories F, P, and B, are omitted here.]
Have you ever seen a real unicorn? Have you ever been to Fairyland? You
get the point.
Despite its limitation to arguments with three categories and its ambiguity
about existential assumptions, categorical logic can still be useful. We must
simply recognize and compensate for these limitations. Propositional logic
can help with this, as we will see in the next two chapters.
Exercises
*****
The ignorance and poor judgment of anti-vaccine parents put their own
children and the general public at significantly increased risk of sometimes
deadly diseases. Anger is a reasonable response, and efforts should certainly
be made to persuade all parents to vaccinate their kids save in rare cases of
medical exceptions.
But I part with the commentators who assume that insulting, shaming, and
threatening anti-vaccination parents is the best course, especially when they
extend their logic to politicians. For example, Chris Christie is getting flak
for “pandering” to anti-vaccination parents.
…
But isn’t Christie’s approach more likely to persuade anti-vaccine parents
than likening their kids to bombs?
…
Like [some], I worry about this issue getting politicized. As he notes, there
is presently no partisan divide on the subject. “If at some point, vaccinations
get framed around issues of individual choice and freedom vs. government
mandates—as they did in the ‘Christie vs. Obama’ narrative—and this in
turn starts to map onto right-left differences … then watch out,” he writes.
“People could start getting political signals that they ought to align their
views on vaccines—or, even worse, their vaccination behaviors—with the
views of the party they vote for.”
As a disincentive to this sort of thinking, folks on the right and left would
do well to reflect on the fact that the ideology of anti-vaxers doesn’t map
neatly onto the left or right, with the former willing to use state coercion and
the latter opposing it.
…
When it comes to measles, my tentative thought is that the best way
forward is to downplay the polarizing debate about coercion, wherever one
stands on it, and to focus on the reality that ought to make it unnecessary: the
strength of the case for vaccinating one’s kids, as demonstrated by the
scientific merits of the matter as well as the behavior of every pro-
vaccination elite with kids of their own.
…
Anti-vaxxers should not be pandered to but neither should they be
callously abused. Neither of those approaches achieves what ought to be the
end goal here: persuading enough people to get vaccinated that measles once
again disappears.
*****
*****
Here we introduce propositional logic and explain the basic concepts you will need to
work with truth tables (which you will learn in
Chapter 5) and for constructing propositional proofs (which you will learn in
Chapter 6). We start by taking you all the way back to
Chapter 1, expanding on our discussion of formal languages. We then explain
how to translate English sentences into the symbolic language of propositional
logic.
A New Language
As we have seen, categorical logic is useful, but only for a limited number
of operations. It is not a sufficient logical system for reasoning about more
than four categories, which we often do. And it does not suggest a way to
resolve the problem of “some are not.” Does “some are not” imply that
“some are” or does it leave open the possibility that “all are not”? Both are
consistent with the rules of categorical logic, and therefore, nothing about
the system can help us resolve the inconsistencies that result. What we
need, then, is a more powerful logical system—a system that does not suffer
from these deficiencies. Thankfully, logicians in the twentieth century
developed such a system.
This more powerful system is called propositional logic (also called
sentential logic, the logic of formal sentences). It is the logic of claims or
propositions rather than the logic of categories. It allows us all the power of
categorical logic, plus more—though, as we will see, propositional logic is
more complicated when it comes to categories. Over the next three
chapters, we will cover the basics of propositional logic.
The material in this chapter and the following two is more difficult than
anything in the rest of the book, so prepare yourself to spend a lot of time
with the exercises. If you are using this book in a course, it might help to
think of the classroom as an orientation tool. In class, you’ll learn some of
the basics of reasoning and watch your instructor work with the concepts.
Between classes, you will want to work through the text on your own,
trying to understand and apply the concepts using the “Getting familiar with
…” exercises. Regardless of whether you are using this book on your own
or in a course, you may have to re-read some passages while you are
working through the exercises. This is normal; do not be discouraged. Just
as you have to work math problems on your own and practice musical
instruments to learn them, you must do logic to learn it, and that involves
working problems over and over until the operations become clear.
Learning propositional logic involves learning a new language. You will
learn to translate your natural language (English, Spanish, etc.) into the
formal language of propositional logic. Your natural language allows you to
use words in new and interesting ways, to introduce unique phrases, to
modify grammatical rules as trends come and go, and to make exceptions
for pragmatic or artistic purposes. Formal languages, on the other hand, are
very rigid; all new phrases must follow the “grammar” (or syntax) of the
language very strictly. There are rules for substituting forms of expression,
but there are no exceptions. Despite their stodginess, the rigidity of formal
languages makes them perfect for expressing and reasoning about very
precise, technical claims, such as those found in math, philosophy, science,
ethics, and even religion.
Diego is a lawyer. D
It is imperative that we find out who took the company’s latest sales reports. I
Notice that a simple claim need not be short. But it must convey only one
simple claim, that is, a single subject-predicate couplet that does not
include any operator (recall our five operators: and; or; not; if…, then…;
if and only if). If a claim does include an operator, you must translate the
operator using its symbol, which we introduced in
Chapter 1 and will review in the next section. While you can choose various
letters for translating claims, you cannot do this for operators. These
symbols are fixed by the system of logic you are using.
Just as in algebra, propositional logic allows you to use empty
placeholders for claims, called variables. These are expressed in lower case
letters, typically taken from the end of the English alphabet: p, q, r, s, t, u, v,
w, x, y, z. Once you replace a variable with a meaningful English sentence,
you use capital letters. So, a conjunction constructed from variables, might
look like this: (p & q), whereas a conjunction of the English claims (It is a
dog and it belongs to me), might look like this: (D & M).
(C & M)
(C v S)
Well-Formed Formulas
Translating becomes more complicated when multiple complex claims are
joined with operators. Parentheses help us make sense of claims in logic the
way punctuation helps us make sense of claims in English. Consider the
following English sentence without punctuation:
“tonight we are cooking the neighbors and their kids are stopping
by around eight”
C⊃M
and
D⊃B
And let’s let C mean <It is a cat>, M mean <It is a mammal>, D mean <It is
a duck>, and B mean <It is a bird>. If we were to conjoin these claims, the
result would be a complex claim stated:
C⊃M&D⊃B
or:
or:
Since only the last expresses what the original claim intends to express,
we need to add something to our translation in order to make this clear. To
keep your complex claims intelligible, mark off the component claims with
parentheses. Translated into propositional logic, A, B, and C would look as
follows:
1. Any claim letter or variable standing alone is a WFF.
2. Any WFF preceded by “~” is a WFF.
3. Any two WFFs joined with an operator and enclosed in parentheses is a WFF.
Example 2: “Either it is wet and cold or my eyes are playing tricks”; translation: ((W & C) v T)
Notice that our original attempt at translating our claim is not a WFF:
C⊃M&D⊃B
It contains the WFFs, C, M, D, and B, and these are joined with operators,
but since they are not properly enclosed in parentheses, the resulting
complex claim is not a WFF. The disambiguated interpretations of this
claim (examples A, B, and C above) are all WFFs. Because they are WFFs,
it is easy to determine which meaning is intended.
For comparison, none of the following is a WFF:
))A v B &v~A
((A & B( (A &v B)
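The formation rules are precise enough to program. Below is a minimal Python checker (our sketch, with simplifying assumptions: claims are single letters, there are no spaces, and lowercase “v” is reserved for the “or” operator):

    def parse(s):
        # Returns (is_a_wff_prefix, unconsumed_remainder).
        if s and s[0].isalpha() and s[0] != "v":  # rule 1: a claim letter is a WFF
            return True, s[1:]
        if s and s[0] == "~":                     # rule 2: "~" plus a WFF is a WFF
            return parse(s[1:])
        if s and s[0] == "(":                     # rule 3: (WFF operator WFF) is a WFF
            ok, rest = parse(s[1:])
            if ok and rest and rest[0] in "&v⊃≡":
                ok, rest = parse(rest[1:])
                if ok and rest and rest[0] == ")":
                    return True, rest[1:]
        return False, s

    def is_wff(s):
        ok, rest = parse(s)
        return ok and rest == ""

    print(is_wff("((W&C)vT)"))   # True  (Example 2 above)
    print(is_wff("((A&B("))      # False (unbalanced parentheses)
    print(is_wff("(A&vB)"))      # False (two operators in a row)
    print(is_wff("C⊃M&D⊃B"))     # False (needs parentheses to be well formed)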
(p & q)
This is true for whatever claims stand in for p and q, no matter how
complex. For example, although each of the following three claims is more
complex than (p & q), each is still a conjunction. On the left you will see a
conjunction; on the right, we have underlined each conjunct and highlighted
the major operator in boldface type:
Here are two examples of complex claims with the conditional as the major operator:
(~A ⊃ B)
(((A ⊃ B) v C) ⊃ D)
A negation can also be a major operator. In a claim such as ~(((A ⊃ B) v C) ⊃ D), the negation operator at the beginning ultimately determines the truth value of the claim, and so it is its major operator. However complex the claim, if there is a negation on the outside of all the parentheses, the claim is defined as a negation; that is its major operator.
Basic Translation
In many cases, English lends itself to propositional translations. “And,”
“or,” and “not” commonly express what is meant by “&,” “v,” and “~.”
Commas are good indicators that one claim has ended, and another is about
to start. This is not always the case, as we will see in the next section. But
for now, consider these examples of how to translate English sentences into
claims of propositional logic. Use these to help you work through the
exercises in the “Getting familiar with … translation” box at the end of this
section.
Examples
1. If I ride with Tim, he will take the long route and stop for coffee. (T ⊃ (L & C))
2. Either he will do a good job and get a raise, or he will remain in middle-management forever. ((G & R) v M)
3. If it will either rain or snow, then I will either need an umbrella or a parka. ((R v S) ⊃ (U v P))
4. Jim was born with male genitalia if and only if he has a Y chromosome. (M ≡ Y)
5. Either I go to John's and she goes home, or I go to her house and she stays at John's. ((J & H) v (U & S))
but
p but q. (p & q)
She went to the station but she didn't take her dog. (S & ~T)

however
p, however, q. (p & q)
He went there after work. However, he drove the company truck. (A & D)
She didn't go to the store; however, she did stop to get the mail. (~S & M)

furthermore
p. Furthermore, q. (p & q)
The policy was fast-tracked through the committee. Furthermore, it was approved. (F & A)

moreover
p. Moreover, q. (p & q)
The door does not block sound; moreover, it doesn't keep out cold. (~B & ~K)

in addition
p. In addition, q. (p & q)
She wants the TV. In addition, she wants both chairs. (T & C)

yet
p, yet q. (p & q)

although
p although q. (p & q)
The door was locked although I told you not to lock it. (L & T)

unless
p unless q. (p v q)
You should stop doing that unless you want a knuckle sandwich. (S v K)

not
Not p. ~p

if
p if q. (q ⊃ p)

only if
p only if q. (p ⊃ q)

so
p, so q. (p ⊃ q)
I'm taller than six feet, so I can touch the door frame. (S ⊃ T)

necessary condition
p is necessary for q. (q ⊃ p)

sufficient for
p is sufficient for q. (p ⊃ q)

just in case
p just in case q. (p ≡ q)
A person is born with female genitalia just in case she has two X chromosomes. (W ≡ X)

just if
p just if q. (p ≡ q)
He's a lawyer just if he's been to law school and has passed the bar. (L ≡ (S & B))

necessary and sufficient
p is necessary and sufficient for q. (p ≡ q)
The cab's light's being on is necessary and sufficient for the cab's being available. (A ≡ L)

not both / both not
It's both not round and not green. (~R & ~G)
You cannot have both ice cream and a candy bar. ~(I & C)
It is not both cool and warm at the same time. ~(C & W)
Translating Arguments
What does learning how to translate do for us? It does two important things.
First, it helps us make our claims clear and precise. Second, it helps us
evaluate an important dimension of arguments, namely, the relationship
between the premises and the conclusion, without the meaning of English
words distracting or misleading us.
In
Chapter 2, we introduced some intuitive cases where a conclusion follows
from a premise or set of premises with certainty. For example:
1. If it is raining, then the sidewalk is wet.    1. Either it is the starter or the battery.
2. It is raining.                                 2. It is not the battery.
3. Therefore, the sidewalk is wet.                3. Therefore, it is the starter.

Translated, they look like this:

1. (R ⊃ S)    1. (S v B)
2. R          2. ~B
3. S          3. S

Both of these arguments are valid. Now compare them with the following four forms, in each of which the premises can be true while the conclusion is false:

1.            2.            3.            4.
1. (R ⊃ S)    1. (R ⊃ S)    1. (S v B)    1. (S v B)
2. S          2. ~R         2. S          2. B
3. R          3. ~S         3. ~B         3. ~S
Is an argument like these valid? It may be difficult to say if you are unfamiliar with logical forms, especially when the English version is wordy, say, an argument about whether a patient who took diphenhydramine or a similar drug will be drowsy or confused. Nevertheless, let us translate such an argument into our simple propositional language, call it example 5:
5.
1. ((D v H) ⊃ (J v C))
2. ~(D v H)
3. ~(J v C)
Even if you don’t know what “diphenhydramine” is, we can look at the
argument’s form and determine that it is not valid. Compare 5 with example
2 above. We told you that 2 is invalid (even though we haven’t explained
why yet). Notice that, when you look at the major operators, 5 has the same
form as 2:
5.                          2.
1. ((D v H) ⊃ (J v C))      1. (R ⊃ S)
2. ~(D v H)                 2. ~R
3. ~(J v C)                 3. ~S
To see this comparison more clearly, think of the argument on the right as a simplified
version of the one on the left. Let “R” be a replacement for “(D v H)” and let “S” be a
replacement for “(J v C).”
Since their forms are identical, if 2 is invalid, then 5 is invalid. This is why
argument form is important. It allows us to see clearly the relationship
between the premises and conclusion.
So, now the question is: How do we tell whether an argument form is
valid? There are a number of methods, and you’ve already learned how to
test simple categorical arguments for validity using the Venn diagram
method in the
previous chapter. Over the next two chapters, we will introduce two
methods for testing validity in propositional logic: the truth table method
and the proof method. In
Chapter 5, we will explain truth tables and the truth table method of testing
for validity. In
Chapter 6, we will explain the most common rules of inference, which you
will use to derive proofs in propositional logic.
Exercises
1. ((P ⊃ Q) & R)
2. ((S v L) ⊃ ~H)
3. ((A ⊃ B) v (C ⊃ D))
4. ((A & B) ⊃ ~(P v Q))
5. ((P ⊃ ~R) & (~R ⊃ (Q & S)))
Real-Life Examples
People who write legal documents attempt to capture all the formality
of formal language in a natural language. They attempt to remove any
and all ambiguities from the language by carefully qualifying every
term. The result is often a jumble of rigid verbiage with no rhythm or
intuitive meaning. Without some training it can be a maddening
translation exercise. Thankfully, with a little practice and some
propositional logic, you can breeze through many legal documents. And
what’s more, you can evaluate them for internal consistency.
1. If the employer acquires a new facility, but does not tell the union
its plan for maintenance until thirteen months after the
acquisition, does the union have a justifiable complaint?
2. If the employer advises the union that the facility will not be
maintained by the employer, but the employer does not dispose of
the property within six months, and the parties do not mutually
agree to an extension, will Article 5 (whatever that is) apply?
3. If the employer does not advise the union that the facility will not
be maintained by the employer, and does not dispose of the
property within six months, and the parties do not mutually
agree to an extension, will Article 5 apply?
4. If the facility is maintained by the employer for thirteen months,
and the employer then decides to sell the building, does the
employer have to advise the union of the sale?
1 Excerpted from Contracts.OneCLE.com,
http://contracts.onecle.com/liz/unite.labor.2003.06.01.shtml. Accessed 15 December 2014.
5
Truth tables
In this chapter, we explain how to construct truth tables and use them to test
arguments for validity. Truth tables express the possible truth values of a claim. They
help us understand how operators affect the truth values of claims and help us
understand the logical relationships among claims. We explain two truth table methods
(what we call the long method and the short method) for testing arguments for validity.
Consider a simple claim, C (<It is a cat>). In a truth table, there are columns, which are the vertical lines containing truth values (either T or F). And there are rows, which are horizontal lines containing truth values. The truth table for C alone has one column and two rows:
C
T
F
Imagine the rows are possible worlds: in world (row) 1, C is true; in world (row) 2, C is false. Simple enough. But now imagine that C is not alone.
Let’s conjoin C with the claim, “It is a mammal,” which we will
symbolize with “M,” so that we now have: (C & M). The truth table for (C
& M) shows every possible combination of truth values of C and M. The
possible truth values of each simple claim and their conjunction are listed
vertically, and the combinations are read horizontally. So we begin by
listing all the possible combinations of C and M vertically underneath them:
C & M
T T T
T F F
F F T
F F F
You might have noticed that, even though we only have two truth values
(T and F), the truth table for (C & M) has two more lines than the truth
table for C. Every time we add a new claim to a truth table you will need to
add lines. If we had added another C or another M, we would not need to
add lines. But if we added a new claim, let's say, "D," we would. Consider the following two truth tables:

1.
C & C
T T T
F F F

2.
C & D
T T T
T F F
F F T
F F F
Notice that, in 1, since we already know all the possible truth value
combinations of C, we only need to repeat them under the second C.
Remember, rows are like possible worlds; C could not be both true and
false at the same time in the same world. Therefore, if C is true in one place
on a row, it is true at every place on that row. If it is false in one place on a
row, it is false at every place on that row.
In 2, since we have added a constant, we need a new set of truth values to
make sure we cover all the options. This suggests we need a pattern that
will show us every possible combination of truth values. To construct this
pattern, we need to first know how many rows of truth values we need. To know how many rows to add for each new variable, use the formula 2^x, where x stands for the number of distinct simple, unrepeated claims (again, don't count repeated claims):
one distinct simple claim: 2^1 = 2 rows
two distinct simple claims: 2^2 = 4 rows
three distinct simple claims: 2^3 = 8 rows
So the truth table for a single, simple claim has just two rows:
1. T
2. F
The truth table for two distinct simple claims has four rows:
T T
T F
F T
F F
And the truth table for three distinct simple claims has eight rows:
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
TIP: An alternative method is to start with a column of alternating truth values (T, F, T,
F, etc.), and then double the Ts and Fs for each additional simple proposition (second
proposition: T, T, F, F, T, T, F, F; third proposition: T, T, T, T, F, F, F, F; etc.). Both
methods yield all possible combinations of truth values.
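If you would like to see the 2^x pattern generated mechanically, here is a short Python sketch of our own; itertools.product yields every combination of T and F for x distinct simple claims:

from itertools import product

def rows(num_claims: int):
    """Return all 2**num_claims rows, each a tuple of truth values."""
    return list(product([True, False], repeat=num_claims))

for row in rows(3):  # 2**3 = 8 rows, matching the table above
    print(" ".join("T" if value else "F" for value in row))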
1. (P v Q)
2. (P ⊃ R)
3. ((A v B) & A)
4. ((A v A) & A)
5. ((A ⊃ B) v C)
6. (Q ⊃ ((R v P) & R))
7. (Q ⊃ ((R v P) & S))
8. ((B ≡ C) ⊃ (D ≡ E))
9. (~(C & F) v ~(F & C))
10. ~((D v L) ≡ (M v L))
Truth Tables for Operators
Consider a simple claim, A, and its two possible truth values. If we apply the negation (~) operator to this simple claim, all of A's possible truth values change to their opposite:
~ A
F T
T F
This is obvious when you consider any claim (“It is a cat,” “Submarines are
boats,” “The sky is black”) and its negation (“It is not a cat,” “Submarines
are not boats,” “The sky is not black”). So, for any claim, no matter how
complex, if a negation is added to that claim, change all of its possible truth
values. For instance, consider a long claim whose major operator is the bi-conditional (≡), say ((A & B) ≡ (C v D)). Imagine that the truth values under the bi-conditional begin T, F, F, F, …. If we then add a negation to the claim, ~((A & B) ≡ (C v D)), the negation becomes the major operator, and we change all the truth values under the old major operator to their opposites (also called their contradictories): F, T, T, T, ….
TIP: For simple claims that are operated on, for example, with a negation, always list
the truth values for a simple claim first, then change it according to the operator. Don’t
try to guess ahead and list only the truth values for the negated claim, leaving out the
column for the simple claim. You’ll almost certainly lose track and get confused,
especially in longer claims and arguments.
We’ve seen several examples of the truth table for the conjunction (&),
but we have yet to explain why it has the truth values it has:
A & B
T T T
T F F
F F T
F F F
Conjunctions are true if and only if both conjuncts are true. If either, or
both, of the conjuncts is/are false, the conjunction is false. So, in a world
where both conjuncts are true (row 1), the conjunction is true. In any other
world (where at least one conjunct is false, rows 2–4), the conjunction is
false.
Disjunctions are much more forgiving than conjunctions. With a
disjunction, as long as one of the disjuncts is true, the disjunction is true:
A v B
T T T
T T F
F T T
F F F
This is most intuitive when one disjunct is true and one is false (lines 2 and
3 of the table). For instance, “Either the sky is blue or the moon is made of
green cheese,” or “Either pigs fly or you did not win the lottery.” Also,
“Either Texas is on the west coast or California is on the west coast.”
It is also pretty easy to see that a disjunction is false if both disjuncts are
false. For example, the disjunction, “Either the moon is made of green
cheese or Texas is on the west coast,” is clearly false because neither
disjunct is true. The same goes for: “Either pigs fly or grass is blue.”
Things are less clear when we consider a disjunction where both
disjuncts are true. For instance, "Either California is on the west coast or
Oregon is on the west coast." Some people might say this disjunction is
false, because true disjunctions require that one disjunct be false. But
whether this is true depends on whether you interpret the “or” (v) as
inclusive or exclusive.
An exclusive or requires that one disjunct is false. An inclusive or allows
that both disjuncts can be true. There are reasons for preferring the
exclusive or, for instance, because there are disjunctions where it is
impossible that both disjuncts are true. In the disjunction, “Either it is
raining or it isn’t,” both disjuncts cannot be true. The same goes for any
claim and its negation: “Either it is a cat or it isn’t,” “Either you will win
the lottery or you won’t,” “Either God exists or he doesn’t.”
But logicians prefer the inclusive or. In most cases, it is possible for both
disjuncts to be true: “He’s either a dad or he’s a soccer player, and he might
be both,” “Either she really likes cookies or she’s a surfer, and she might be
both.” In addition, the inclusive or can accommodate the cases where the
disjuncts are mutually exclusive. For instance, “Either it is raining or it
isn’t, and it can’t be both,” and “He’s either insane or a liar, but he isn’t
both," can be translated using the inclusive "or" as follows:
((R v ~R) & ~(R & ~R))
and
((I v L) & ~(I & L))
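The difference between the two "or"s can be made concrete in code. In this Python sketch of ours, the exclusive "or" is defined from the inclusive one, mirroring the translations above:

def v(p, q):
    """Inclusive disjunction: true when at least one disjunct is true."""
    return p or q

def exclusive_or(p, q):
    """Exclusive disjunction, built from v, &, and ~: ((p v q) & ~(p & q))."""
    return v(p, q) and not (p and q)

for p in (True, False):
    for q in (True, False):
        print(p, q, v(p, q), exclusive_or(p, q))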
The truth table for the conditional (⊃) reflects the fact that a conditional is false just in case its antecedent is true and its consequent is false:
A ⊃ B
T T T
T F F
F T T
F T F
Once you get the hang of conditionals, the bi-conditional (≡) will come naturally. A bi-conditional is a conditional that works in both directions; the antecedent implies the consequent and the consequent implies the antecedent. For instance: (A ≡ B) means the same as ((A ⊃ B) & (B ⊃ A)).
A bi-conditional is an “if and only if” claim. So, the bi-conditional: “An
animal is a mammal if and only if it is a vertebrate, warm-blooded, has hair,
and nourishes its young from mammary glands,” means the same as: “If an
animal is a mammal, then it is a vertebrate, warm-blooded, has hair, and
nourishes its young from mammary glands, and if an animal is a vertebrate,
warm-blooded, has hair, and nourishes its young from mammary glands,
then it is a mammal." In addition, the bi-conditional, "An object is an
electron if and only if it has –1.602 × 10^–19 Coulombs charge," means the
same as, "If an object is an electron, then it has –1.602 × 10^–19
Coulombs charge, and if an object has –1.602 × 10^–19 Coulombs charge,
then it is an electron."
The truth table for a bi-conditional looks like this:
A ≡ B
T T T
T F F
F F T
F T F
If both sides of a bi-conditional are true, the bi-conditional is true (row 1).
If both sides are false, the bi-conditional is also true (row 4). For instance,
there are exactly five chairs in the room if and only if the number of chairs
in the room equals the positive square root of 25. If there are ten chairs in
the room, then, though both sides of the bi-conditional are false, the bi-
conditional remains true.
If one side of a bi-conditional is true and the other false, the bi-conditional is false (rows 2 and 3). Since a bi-conditional is just two conditionals conjoined, it follows the rules of conditionals and conjunctions: when the two sides have different truth values, one of the two conjoined conditionals has a true antecedent and a false consequent, so that conditional is false, and therefore so is the conjunction.
Consider the following argument about a voter named Ichiro. To evaluate it using truth tables, first translate each of the simple claims into propositional logic, using capital letters:
1. If Ichiro is a liberal (L), then either he will vote for the Democratic
candidate (D) or he will not vote (V).
2. Ichiro is a liberal (L).
3. Ichiro will vote (V).
4. Therefore, Ichiro will vote for the Democratic candidate (D).
Dropping the natural language and adding the operators (remember, the "nots" are operators), we have:
1. (L ⊃ (D v ~V))
2. L
3. V
4. D
Now we have fully translated the argument. Next, list the premises side by side horizontally, separating them with a semicolon (;). Then list the conclusion at the far right after the conclusion sign (/.:):
(L ⊃ (D v ~V)) ; L ; V /.: D
Now, construct the truth table. There are only three unique simple claims,
L, D, and V. Therefore, you will need 2^3 rows, which is 8. Even though we
have more than one claim comprising our argument, we apply the pattern
for operators from the very first section in this chapter, just as we would
with individual claims. And that gives us the following basic structure:
T T T T T T
T T F T F T
T F T T T F
T F F T F F
F T T F T T
F T F F F T
F F T F T F
F F F F F F
Then, fill in the truth values for the complex claims, according to the
relationships set out in the second section (Truth Tables for Operators):
(L ⊃ (D v ~V)) ; L ; V /.: D
T T  T T FT      T   T     T
T T  T T TF      T   F     T
T F  F F FT      T   T     F
T T  F T TF      T   F     F
F T  T T FT      F   T     T
F T  T T TF      F   F     T
F T  F F FT      F   T     F
F T  F T TF      F   F     F
The truth values under each major operator (and under the simple claims standing alone as premises or conclusion) are the possible truth values for each of the claims in the argument.
The only truth values you need to compare are those of the major operators
of the premises and the conclusion. As we can see, there are no rows where
all the premises are true and the conclusion is false. That means this
argument is valid.
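Because the long method is completely mechanical, it can be automated. Here is a Python sketch of our own (the function and variable names are ours): it walks through every possible world and reports invalidity as soon as it finds a row with all true premises and a false conclusion. The premises encode the Ichiro argument above:

from itertools import product

def implies(p, q):
    """The conditional: false only when p is true and q is false."""
    return (not p) or q

def is_valid(premises, conclusion, num_letters):
    for row in product([True, False], repeat=num_letters):
        if all(premise(*row) for premise in premises) and not conclusion(*row):
            return False  # counterexample row found
    return True

premises = [
    lambda l, d, v: implies(l, d or (not v)),  # 1. (L > (D v ~V))
    lambda l, d, v: l,                         # 2. L
    lambda l, d, v: v,                         # 3. V
]
conclusion = lambda l, d, v: d                 # /.: D
print(is_valid(premises, conclusion, 3))       # True: no counterexample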
We call this the “long method” of testing for validity because we have
constructed every row of the truth table. This becomes unruly when an argument contains many distinct simple claims (an argument with five distinct simple claims needs thirty-two rows). We will explain a shorter method in the following section. But
before that, here are three more short arguments to help you understand
testing for validity with truth tables:
Example 1:
(A v C) ; (C v D) /.: D
T TT T TT T
T TT T TF F
T TF F TT T
T TF F FF F
F TT T TT T
F TT T TF F
F FF F TT T
F FF F FF F
We’ve placed boxes around the rows where the conclusion is false (rows 2,
4, 6, and 8). Looking left, we see that, in rows 2 and 6, all the premises are
true and the conclusion is false. There doesn’t have to be two rows like that.
If even one row has all true premises and a false conclusion, the argument is
invalid.
In a second argument, you might find that rows 2 and 3 have false conclusions and that row 2 also has all true premises; that one row is enough to render the argument invalid. In a third, rows 2 and 4 might have false conclusions while in neither case are all the premises true. Since there would then be no row (possible world) in which all the premises are true and the conclusion is false, that argument is valid.
The Short Method
Constructing a full truth table quickly becomes tedious, so there is a shortcut: try to build a single row in which the conclusion is false and all the premises are true. If you can, the argument is invalid; if you provably cannot, it is valid. Consider:

1. (A ⊃ ~B)
2. ~(B & A) /.: B

Start by making the conclusion, B, false, and carry that value to every instance of B:

(A ⊃ ~B) ; ~(B & A) /.: B
      F       F          F

Now, is it possible to make the premises true? Look at the second premise. If even one conjunct in a conjunction is false, the conjunction is false. This conjunction is negated, so no matter what value we assign to A, premise 2 is true. This means we are free to make A true or false on this row in some other premise in order to try to make that premise true. In this case, that's easy. Look at the first premise. Any conditional with a true consequent is true, and once we apply the negation to B, the consequent is true. This means that it doesn't matter whether A turns out to be true or false; we know there is at least one row where all the premises are true and the conclusion is false. Therefore, this argument is invalid.

Just to show you this really works and is not just a trick, let's look at the full truth table for this argument:

A ⊃ ~B ; ~ (B & A) /.: B
T F FT    F  TTT       T
T T TF    T  FFT       F
F T FT    T  TFF       T
F T TF    T  FFF       F

The columns under the major operators (the ⊃ of premise 1, the first ~ of premise 2, and the conclusion B) show the possible truth values of each claim. Using the long truth table method, we look at rows 2 and 4, where the conclusion is false, and then look left to see if all the premises are true. In fact, they are in both cases. And those are exactly the rows we constructed in the short method. If we had rendered A true, we would have constructed row 2. If we had rendered A false, we would have constructed row 4.
Now consider this longer argument:

1. ((A ⊃ (B v D)) & E) /.: D

The conclusion is, as in the previous example, a simple claim, D. So, begin constructing your truth table by stipulating that D is false. We are only concerned about the rows where the conclusion is false:

D
F

Since the truth value for D must be consistent throughout the row, go ahead and label the other instance of D, inside the premise, with an F. Now, try to make the premise true. In this case, there is only one premise, with a conjunction (&) as the major operator. In order to be true, a conjunction must have two true conjuncts. Since the second conjunct is a simple claim, E, label E as T.

The first conjunct is a bit trickier. The first conjunct is a conditional (⊃). We know that the only way a conditional can be false is if the antecedent is true and the consequent is false. So, all we have to do is find one combination of truth values that does not end up in this configuration. As it turns out, with D already false, any of the following assignments to A and B will do:

(A ⊃ (B v D)) & E /.: D
 T T  T T F   T T     F
 F T  T T F   T T     F
 F T  F F F   T T     F

We only needed one row where the left conjunct is true, but there are three. Since we have at least one row where all the premises are true and the conclusion is false, the argument is invalid.

TIP: Remember, when testing for validity, all you need to find is one row where the premises are true and the conclusion is false. Therefore, once you find one, you're done; the argument is invalid. We need not have constructed the last two lines once we constructed the first. Any of the three does the trick.
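The same ingredients give a sketch of the short method in Python (again, our own illustration): generate only the rows where the conclusion is false, and check whether all the premises can be true there:

from itertools import product

def counterexamples(premises, conclusion, num_letters):
    """Yield every row with all true premises and a false conclusion."""
    for row in product([True, False], repeat=num_letters):
        if not conclusion(*row) and all(premise(*row) for premise in premises):
            yield row

# Example: (A > B) /.: (B > A) is invalid, and the search finds the row.
premises = [lambda a, b: (not a) or b]
conclusion = lambda a, b: (not b) or a
print(list(counterexamples(premises, conclusion, 2)))  # [(False, True)]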
Now, if you cannot construct a line where the conclusion is false and the premises are true, then the argument is valid. Consider the following example:

1. ~(A v B)
2. (D ⊃ (A v B)) /.: ~D

Begin by making the conclusion, ~D, false. A negation is false just in case its operand is true, so write F under the ~ and T under D:

~D
FT

Since the truth value of D must remain consistent throughout the argument, label all other instances of D as T (don't accidentally label it F; it is only the negation operator that does that in the conclusion):

~(A v B) ; (D ⊃ (A v B)) /.: ~D
            T                FT

Now, attempt to make the rest of the premises true, beginning with the major operator of each. The major operator in the first premise is the negation (~). In order to make a negation true, the operand, in this case (A v B), must be false. The major operator of this claim is a disjunction (v). The only way to make a disjunction false is to have two false disjuncts. Since nothing is preventing us from labeling them both "false," go ahead and do so:

~(A v B) ; (D ⊃ (A v B)) /.: ~D
T F F F     T                FT

So, the first premise is true. Now attempt to make the second premise true. Since the second premise includes A and B, and since a claim's truth value must remain consistent throughout a line, we are forced to label A and B as F:

~(A v B) ; (D ⊃ (A v B)) /.: ~D
T F F F     T    F F F       FT

But now we have a conditional (the major operator of the second premise) with a true antecedent, D, and a false consequent, (A v B):

~(A v B) ; (D ⊃ (A v B)) /.: ~D
T F F F     T F  F F F       FT

So, we can see that there is no way to make the premises true and the conclusion false. This argument is valid. Try working this one again, but this time, start by trying to make the second premise true, then move on to the first premise.
Arguments with complex conclusions are trickier. In those cases, you
must construct every possible way the conclusion can be false, then see if
the premises could all be true on each line. If the premises cannot all be
made true on the first line, move to the next. If the premises can all be made
true on the first line, you can stop; the argument is invalid. We’ll work three
of these together, then we’ll list five more for you to work on your own.
With the short method, we always begin by making the conclusion false.
When the conclusion is a disjunction this is easy because there is only one
way a conclusion can be false, that is, when both disjuncts are false:
FFF
Remember, once you’ve stipulated that P and W are false, they must be
labeled false in every other instance on the row:
F F F F F FFF
Now check to see if it’s possible to make all the premises true. In this case,
since two premises are conjunctions (the first and third), and since one
conjunct of each is already false ((R&W) in the first and R in the third),
then at least two of our premises cannot be true. There is no way to make all
the premises true if two must be false. Therefore, this argument is valid.

In a second example, suppose the conclusion can be made false in three different ways, so that we must construct three candidate rows:
TTT FF
TTF FF
FTT FF
Now, label the other simple claims, letting the stipulations we made in the
conclusion determine the truth values of the simple claims in the premises:
T F T T TTT FF
T F F F TTF F F
F F T T FTT FF
Now see if it’s possible to make all the premises true. Since we already
know the second and third premises on row two are false, we only need to
consider lines 1 and 3:
Looking at row 1, we see that, since the truth values for P and Q were
determined by the conclusion, and since a bi-conditional (≡) is true if and
only if either both sides are true or both sides are false, we cannot make all
the premises on line 1 true:
Looking, then, at row 3, we see that the first premise is true and the second
premise is true:
Whether this argument is valid depends on whether we can make the third
premise true. The third premise is a conjunction (&), and the only way a
conjunction can be true is if both conjuncts are true. We already know the
right conjunct is true. Can we make the left conjunct true? Since the truth
value is not determined by the conclusion, we are free to stipulate whatever
truth value we wish. Therefore, in order to make the conjunction true we
only have to make R false, so that ~R will be true:
Since there is at least one line where all the premises are true and the
conclusion is false, the argument is invalid.
Exercises
1. ((P ⊃ Q) & P)
2. (L ⊃ ~L)
3. ((A v B) v C)
4. ((P ≡ Q) & Q)
5. (P v ~P)
6. ((A & B) ⊃ (B v C))
7. ((C ≡ C) ≡ D)
8. ((S ⊃ ~P) v ~R)
9. ((B & ~C) v C)
10. (~R v ~S)
1. ~A
2. (P & Q)
3. (R v S)
4. (X ⊃ Y)
5. (M ≡ N)
C. Test each of the following arguments for validity using the
long method:
1. (A ⊃ ~A) /.: A
2. (P ≡ Q) ; (R v ~Q) /.: (R ⊃ P)
3. (P ≡ ~Q) /.: (Q v P)
4. ~(P ≡ Q) ; ~(R v ~Q) /.: ~(R ⊃ P)
5. ~(~A ⊃ A) /.: ~(A v A)
6. H /.: (I ⊃ (H & K))
7. (A ⊃ B) /.: (B ⊃ A)
8. ~(A ⊃ B) /.: (~A & ~B)
9. ~(P v Q) /.: ~Q
10. ~P ; ~Q ; (P & Q) /.: (~P v R)
Real-Life Examples
No matter how abstract logic can be, we're never more than a few steps
away from the real world. Below, we've excerpted a public official's reply to charges based on four recorded conversations:
“The evidence is the four tapes. You heard those four tapes. I don’t have to tell you what they
say. You guys are in politics, you know what we have to do to go out and run and run
elections.
“There was no criminal activity on those four tapes. You can express things in a free country,
but those four tapes speak for themselves. Take those four tapes as they are and you will, I
believe, in fairness, recognize and acknowledge, those are conversations relating to the things
all of us in politics do in order to run campaigns and try to win elections.”1
6
Rules of inference
We explain how to use propositional logic to test arguments for validity. We introduce
rules of inference, rules of replacement, and proof strategies, and we explain how to
construct simple proofs for validity in propositional logic. At the end of the chapter, we
discuss three of the most common fallacies in deductive reasoning.
Deductive Inference
Propositional logic is a formal language, which means that, as long as we
reason according to a valid argument form, there is no loss of truth-value
when we draw inferences from true premises. But, even assuming all our
premises are true, how do we know when we have a valid argument form?
Thankfully, logicians have discovered a set of rules to guide us in
determining when a conclusion follows necessarily from a premise or set of
premises. Rules that preserve truth-value are called valid for the same
reasons that a good deductive argument is valid: It is impossible for the
premises to be true and the conclusion false. If the premises are true, that
truth is preserved through the inference to the conclusion. And so,
propositional logic, like mathematics, is a truth-preserving system.
In this chapter, we will introduce you to eight valid rules of inference,
eleven valid rules of replacement, and two valid proof strategies.
Remember, the term “validity” applies exclusively to deductive arguments.
As we will see in Part Three of this book, all inductive arguments are
invalid, that is, it is always possible for the premises to be true and the
conclusion false. Therefore, we will need a different set of criteria for
evaluating the relationship between the premises and conclusion in
inductive arguments. We’ll begin with four basic rules of inference, then
increase the complexity as we go through the chapter.
Simplification
Conjunction
Modus ponens
Modus tollens
Simplification
Recall from
Chapter 5 that the truth table for a conjunction looks like this:
P & Q
T T T
T F F
F F T
F F F
A conjunction is true if and only if both conjuncts are true. In order to test
whether an argument is valid using rules of inference, we assume the
premises are true and then try to derive (using the rules you will learn in
this chapter) a certain conclusion. This is backward from the way we tested
for validity with truth tables (assuming the conclusion is false and looking
to see if all the premises are true). Here, we will assume all the premises are
true, and then apply our rules to determine whether certain conclusions
follow.
Because we are assuming the premises are true, if we come across a
conjunction, we can assume both conjuncts are true. We can do this because
we know that, in a true conjunction, both conjuncts are true. Therefore,
from a conjunction such as (P & Q), we are permitted to derive either of its
conjuncts:
1.
1. (P & Q)
2. P 1 simplification

or

2.
1. (P & Q)
2. Q 1 simplification
The rule that permits this inference is called simplification. When you
apply a rule, cite the premise number to which you applied the rule and the
name of the rule. This allows you (and others) to see, keep track of, and
check your reasoning. It may help to see a couple of English examples:
3.
1. It is snowing and it is cold.
2. It is snowing. 1 simplification

4.
1. He's a lawyer and he's a jerk.
2. He's a jerk. 1 simplification

5.
1. ((A ⊃ B) & C)
2. (A ⊃ B) 1 simplification

6.
1. (C & (D & E))
2. C 1 simplification
3. (D & E) 1 simplification
4. E 3 simplification

7.
1. ((P ⊃ Q) & A)
2. (B ⊃ C)
3. (P ⊃ Q) 1 simplification

8.
1. ((A v B) & C)
2. (~B & D)
3. (A v B) 1 simplification
4. ~B 2 simplification
Caution: Notice that, in example 7, you cannot derive B or C from these premises. In
premise 2, the major operator is a conditional, not a conjunction. You would need a
different rule to break up the conditional before you could apply simplification.
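If you want independent reassurance that simplification is truth-preserving, a brute-force check will do; this small Python sketch of ours confirms that ((P & Q) > P) comes out true on every row:

from itertools import product

# ((P & Q) > P): a conditional is false only with a true antecedent
# and a false consequent, which never happens here.
print(all((not (p and q)) or p
          for p, q in product([True, False], repeat=2)))  # True: a tautology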
Conjunction
Since a true conjunction is just a conjunction with true conjuncts, we can
also construct conjunctions from true claims. For instance, if we already
know that some claim S is true and that some claim B is true, it follows that
(S & B) is true. Therefore, we can reason as follows:
1.
1. A
2. B
3. (A & B) 1, 2 conjunction

2.
1. P
2. Q
3. (P & Q) 1, 2 conjunction
The rule that permits this is, therefore, aptly named conjunction. When
you apply conjunction to a set of premises, note off to the side both or all
the premises you use to derive the conjunction. Here are two English
examples:
3.
1. It is cloudy.
2. It is cold.
3. Hence, it is cloudy and it is cold. 1, 2 conjunction

4.
1. The president is a man.
2. The president lives in the capital city.
3. The president is a man and lives in the capital city. 1, 2 conjunction

5.
1. A
2. (P v Q)
3. (A & (P v Q)) 1, 2 conjunction

6.
1. X
2. Y
3. Z
4. (X & Z) 1, 3 conjunction
Modus Ponens
Recall the truth table for the conditional (⊃):
P ⊃ Q
T T T
T F F
F T T
F T F
A conditional is false just in case the antecedent is true and the consequent
is false. Remember that when we find a conditional in a set of premises, we
assume it is true for the sake of evaluating the argument. Therefore, for
whatever conditional we find, if we know its antecedent is true, we also
know its consequent is true. So, if we assume a conditional (P ⊃ Q) is true
and we also discover that its antecedent P is true, the consequent Q is also
true. Here it is in standard form:
1.
1. (P ⊃ Q)
2. P
3. Q 1, 2 modus ponens

2.
1. R
2. (R ⊃ S)
3. S 1, 2 modus ponens
The rule that permits us to draw this inference is called modus ponens,
which is short for modus ponendo ponens, and is Latin for “mode (or
method) that affirms by affirming.” Again, cite all the premises you use to
apply the rule. Here are some English examples:
3.
1. If the kettle is whistling, the water is boiling.
2. The kettle is whistling.
3. The water is boiling. 1, 2 modus ponens

4.
1. If she misses the bus, she walks to work.
2. She missed the bus.
3. She walks to work. 1, 2 modus ponens

5.
1. A
2. (C v D)
3. (A ⊃ (E & F))
4. (E & F) 1, 3 modus ponens

6.
1. (B & C)
2. D
3. (D ⊃ E)
4. E 2, 3 modus ponens

TIP: Notice that it doesn't matter what order the premises are in. In example 5, we find the conditional on line 3 and the simple claim on line 1. Nevertheless, we can still infer the consequent of line 3 using modus ponens.

7.
1. ~(X v Y)
2. ~Z
3. (~(X v Y) ⊃ (Z v W))
4. (Z v W) 1, 3 modus ponens

8.
1. (D & B)
2. (B v ~H)
3. A
4. (A ⊃ ~E)
5. ~E 3, 4 modus ponens
Modus Tollens
Again, look at our truth table for the conditional:
P ⊃ Q
T T T
T F F
F T T
F T F
Notice that there are only two lines where the consequent is false. Notice
also that only one of these two expresses a true conditional. If the
consequent of a conditional is denied, that is, if we discover that the
consequent is false (expressed by adding a negation: ~Q), there is only one
line that expresses a true conditional, and that is line 4. If Q is false and P is
true, the conditional is false. But if Q is false (~Q is true) and P is false (~P
is true), the conditional, (P ⊃ Q), is true.
Therefore, if we come across a conditional (assuming, as we have been,
that it is true) and discover also that its consequent is false, we can conclude
that its antecedent is false, too. The inference looks like this:
1.
1. (P ⊃ Q)
2. ~Q
3. ~P 1, 2 modus tollens
The rule that allows us to draw this inference is called modus tollens, which
is short for the Latin, modus tollendo tollens, which means, “mode
(method) of denying by denying.” An English example helps make this
inference clear:
2.
1. If you are caught stealing, you are going to jail.
2. You are not going to jail.
3. Therefore, you are not caught stealing. 1, 2 modus tollens

3.
1. We'll call the police only if there's a riot.
2. But there won't be a riot.
3. Therefore, we won't call the police. 1, 2 modus tollens
Notice that there may be many reasons for going to jail and stealing is just
one. So, if you’re not going to jail, then you are definitely not caught
stealing. Similarly, we may do many things if there is a riot, and the first
premise tells us that calling the police is one. But if there won’t be a riot, we
won’t call the police. Here are a few more examples:
4.
1. (A ⊃ B)
2. (C v D)
3. ~B
4. ~A 1, 3 modus tollens

5.
1. P
2. ~Q
3. (R ⊃ Q)
4. ~R 2, 3 modus tollens

6.
1. (T ⊃ (R v S))
2. (A & B)
3. ~(R v S)
4. ~T 1, 3 modus tollens

7.
1. (~P & Q)
2. (S ⊃ P)
3. ~P 1 simplification
4. ~S 2, 3 modus tollens
TIP: Notice in example 7 that applying simplification makes it possible for us to use
modus tollens.
Disjunctive Syllogism
Addition
Hypothetical Syllogism
Constructive and Destructive Dilemmas
Disjunctive Syllogism
As we saw in
Chapter 3, a syllogism is a valid deductive argument with two premises.
The syllogisms we evaluated in that chapter were “categorical” meaning
they had a quantifier such as “all,” “none,” or “some.” A disjunctive
syllogism is not necessarily categorical, and in this chapter, because we are
not yet ready for propositional quantifiers, none will be explicitly
categorical.
As you might expect, a disjunctive syllogism includes at least one
premise that is a disjunction. We also know that, for a disjunction to be true,
at least one disjunct must be true. Therefore, if we find a disjunction in the
premises, we can assume that at least one disjunct is true. If, in addition, we
discover that one of the disjuncts is false, we can conclude the other
disjunct must be true. For example:
1.
1. (P v Q)
2. ~P
3. Q 1, 2 disjunctive syllogism

2.
1. (P v Q)
2. ~Q
3. P 1, 2 disjunctive syllogism

If, instead, we merely affirm one of the disjuncts, nothing follows about the other:

1. (P v Q)      1. (P v Q)
2. P            2. Q
3. ?            3. ?
3.
1. (P v ~P)
2. ~P
3. ~P 1, 2 disjunctive syllogism

4.
1. (Q v ~Q)
2. ~~Q
3. Q 1, 2 disjunctive syllogism
Notice that, in both 3 and 4, premise 2 means exactly the same thing as the
conclusion. This is a consequence of denying one side in an exclusive
disjunction; the result is simply the only other possibility, namely, the other
disjunct. So, if it’s either raining or it’s not, and you learn that it is not not
raining (~~R), then you learn, by definition, that it is raining. Therefore,
treating all disjunctions inclusively works perfectly well even when the
disjunction turns out to be exclusive.
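A quick brute-force check, sketched here in Python as our own illustration, confirms that disjunctive syllogism never fails even when the "or" is read inclusively:

from itertools import product

# On every row where (P v Q) and ~P are both true, Q is true as well.
print(all(q for p, q in product([True, False], repeat=2)
          if (p or q) and not p))  # True: the rule is valid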
Here are two English examples and two more symbolic examples:

5.
1. Your aunt is either married to your uncle or she's married to someone else.
2. She is not married to your uncle.
3. So, she's married to someone else. 1, 2 disjunctive syllogism

6.
1. Either the battery is dead or the starter is broken.
2. The battery is not dead.
3. So, the starter is broken. 1, 2 disjunctive syllogism

7.
1. ((A v B) v D)
2. ~D
3. (A v B) 1, 2 disjunctive syllogism

8.
1. ((A v B) v D)
2. ~D
3. ~A
4. (A v B) 1, 2 disjunctive syllogism
5. B 3, 4 disjunctive syllogism
Addition
This next rule may be one of the easiest to perform, though it is often one of
the most difficult to understand—at least, when you first see it. Recall that
the truth table for disjunction shows that a disjunction is true just in case
one disjunct is true, that is, the only false disjunction is one where both
disjuncts are false:
P v Q
T T T
T T F
F T T
F F F
Given this truth table, as long as we know one disjunct is true, then
regardless of what claim is disjoined to it, we know the disjunction will be
true. Therefore, if we have a claim we know to be true (or that we are
assuming is true), then we can disjoin anything to it and the resulting
disjunction will be true. For instance, if we know it is true that “Grass is
green,” then we also know it is true that “Either grass is green or the moon
is made of green cheese.” Similarly, if we know that “Humans need oxygen
to survive,” then we know that “Either humans need oxygen to survive or
Martians control my camera.”
So, for any claim you know (or assume) to be true, you may legitimately
disjoin any other claim to it by a rule called addition (not to be confused
with conjunction).
1.
1. A
2. (A v B) 1 addition

2.
1. (A & B)
2. ((A & B) v C) 1 addition

Addition often supplies the last step of a longer derivation. Consider:

3.
1. (P & Y)
2. ~R
3. (W v R) /.: ((P & W) v Y)

You can derive this conclusion using only the rules we've covered so far. Start by using simplification to isolate P, then use disjunctive syllogism to isolate W. And then use conjunction to conjoin P and W:

4. P 1 simplification
5. W 2, 3 disjunctive syllogism
6. (P & W) 4, 5 conjunction

The conclusion we want, however, is not (P & W) but ((P & W) v Y). Since we know (P & W) is true, addition permits us to disjoin anything we like to it, including Y:

7. ((P & W) v Y) 6 addition
Since this rule is a more technical tool of logic, English examples do not
tend to make the rule clearer. But just in case, here are two English
examples and three more symbolic examples:
4.
1. Grass is green.
2. Either grass is green or the moon is made of green cheese. 1 addition

5.
1. Humans need oxygen to survive.
2. Either humans need oxygen to survive or Martians control my camera. 1 addition

6.
1. P
2. ((P v Q) ⊃ Y)
3. (P v Q) 1 addition
4. Y 2, 3 modus ponens

7.
1. ((A v B) ⊃ C)
2. (D ⊃ E)
3. A
4. (A v B) 3 addition
5. C 1, 4 modus ponens
6. (C v (D ⊃ E)) 5 addition

8.
1. P
2. (P v Q) 1 addition
Hypothetical Syllogism
Our next rule allows us to derive a conditional claim from premises that are
conditionals. This one is fairly intuitive. Consider the following two
premises:
1.
1. (P ⊃ S)
2. (S ⊃ R)
3. (P ⊃ R) 1, 2 hypothetical syllogism

2.
1. (H ⊃ B)
2. (B ⊃ L)
3. (H ⊃ L) 1, 2 hypothetical syllogism
3.
1. If the levees break, the town floods.
2. If the town floods, we will have to evacuate.
3. If the levees break, we will have to evacuate. 1, 2 hypothetical syllogism

4.
1. If the dog barks, there is something in the yard.
2. If there is something in the yard, I need to get out of bed.
3. If the dog barks, I need to get out of bed. 1, 2 hypothetical syllogism
5.
1. ((A v B) ⊃ (D & E))
2. ((D & E) ⊃ B)
3. ((A v B) ⊃ B) 1, 2 hypothetical syllogism

6.
1. (A ⊃ (D v F))
2. (B ⊃ A)
3. (B ⊃ (D v F)) 1, 2 hypothetical syllogism

7.
1. (P v (R ⊃ S))
2. ~P
3. (R ⊃ S) 1, 2 disjunctive syllogism

8.
1. ((W v Q) ⊃ Z)
2. (Z ⊃ (P v Q))
3. ((W v Q) ⊃ (P v Q)) 1, 2 hypothetical syllogism
Constructive Dilemma
A constructive dilemma combines two conditionals with the disjunction of their antecedents to yield the disjunction of their consequents:

1.
1. ((P ⊃ Q) & (R ⊃ S))
2. (P v R)
3. (Q v S) 1, 2 constructive dilemma

2.
1. (P ⊃ Q)
2. (R ⊃ S)
3. (P v R)
4. (Q v S) 1, 2, 3 constructive dilemma
3.
1. My choices are either to take this job or to go to college.
2. If I take this job, I will have extra spending money now, but if I go to college, I will have
an opportunity to make more money in the long run.
3. So, I will either have extra spending money now or the opportunity to make more money in
the long run.
The constructive dilemma preserves truth for the same reason that modus
ponens does. In fact, some logic texts describe the constructive dilemma as
a combination of two modus ponens operations. If P implies Q, then if P is
true, Q is true. And if R implies S, then if R is true, S is true. So, if we also
know that either P is true or R is true, since at least one of them must be
true, we also know that either Q is true or S is true. Here are two more
examples:
4.
1. ((A ⊃ P) & (B ⊃ Q))
2. (A v B)
3. (P v Q) 1, 2 constructive dilemma

5.
1. ((A ⊃ ~R) & (B ⊃ ~S))
2. (A v B)
3. (~R v ~S) 1, 2 constructive dilemma
Destructive Dilemma
A destructive dilemma runs the inference in the other direction, the way modus tollens does: from the same two conditionals and the disjunction of the negations of their consequents, derive the disjunction of the negations of their antecedents:

1.
1. ((P ⊃ Q) & (R ⊃ S))
2. (~Q v ~S)
3. (~P v ~R) 1, 2 destructive dilemma

2.
1. (P ⊃ Q)
2. (R ⊃ S)
3. (~Q v ~S)
4. (~P v ~R) 1, 2, 3 destructive dilemma
3.
1. If the cronies get their way, you'll vote for Thatcher's policy, and if the bleeding hearts get their way, you will vote for Joren's policy.
2. Either you won't vote for Thatcher's policy or you won't vote for Joren's.
3. Either way, the cronies are not getting their way or the bleeding hearts are not getting theirs. 1, 2 destructive dilemma
4.
1. ((B ⊃ C) & (R ⊃ S))
2. (~C v ~S)
3. (~B v ~R) 1, 2 destructive dilemma

5.
1. (A ⊃ D)
2. (B ⊃ E)
3. (~D v ~E)
4. (~A v ~B) 1, 2, 3 destructive dilemma
Exercises

1.
1. ((A v B) ⊃ C)
2. (F & D)
3. (C ⊃ (E v H)) /.: ((A v B) ⊃ (E v H))

2.
1. (H v P)
2. ((H v A) ⊃ C)
3. ~P /.: (H v A)

3.
1. (~P v (D v Z))
2. (~(D v Z) v B)
3. ~B /.: ~P

4.
1. (~S ⊃ R)
2. (R ⊃ B)
3. (B ⊃ ~Q) /.: (~S ⊃ ~Q)

5.
1. ((P v Q) v (~R v ~S))
2. ~(P v Q)
3. ~~S /.: ~R

6.
1. (M v N)
2. (A & R)
3. (B v S) /.: (((M v N) v (O & P)) v (Q v R))

7.
1. (((P v Q) & (R & S)) & (T v U))
2. (A & B) /.: (B v P)

8.
1. (R ⊃ Q)
2. (Q ⊃ (W v X)) /.: ((R ⊃ (W v X)) v P)

9.
1. A
2. ((A v B) ⊃ ~C)
3. (~C ⊃ F) /.: ((A v B) ⊃ F)

10.
1. ((X v Y) v ~Z)
2. (W v ~~Z)
3. ~Y
4. ~W /.: X

11.
1. (~S ⊃ Q)
2. (R ⊃ ~T)
3. (~S v R) /.: (Q v ~T)

12.
1. ((F v E) ⊃ (G & H))
2. (G v ~F)
3. (~R & ~J) /.: (U & ~T)

13.
1. ((H ⊃ B) & (O ⊃ C))
2. (Q ⊃ (H v O))
3. Q /.: (B v C)

14.
1. (H ⊃ (D & W))
2. (D ⊃ K)
3. H /.: (H & K)

15.
1. (B ⊃ (A v C))
2. (B & ~A) /.: C

16.
1. ((A v B) ⊃ C)
2. ((C v D) ⊃ (G v F))
3. (A & ~G) /.: F

17.
1. ((A & B) ⊃ ~C)
2. (C v ~D)
3. (A ⊃ B)
4. (E & A) /.: ~D

18.
1. ((~A v ~L) ⊃ ~G)
2. (~A ⊃ (F ⊃ G))
3. (A ⊃ D)
4. ~D /.: (~F v H)

19.
1. (F ⊃ (G ⊃ ~H))
2. ((F & ~W) ⊃ (G v T))
3. (F & ~T)
4. (W ⊃ T) /.: ~H

20.
1. (P ⊃ (Q ⊃ (R v S)))
2. (P & Q)
3. (S ⊃ T)
4. (~T v ~W)
5. W /.: (R v W)
Reiteration
The simplest replacement rule, a rule called reiteration, doesn’t even
require that we rearrange anything: For any claim, P, if it is a premise or
derived validly, it can be reiterated at any point in the argument without loss
of truth value:
1.
1. P
2. P 1 reiteration

2.
1. (P v Q)
2. (P v Q) 1 reiteration

3.
1. (A ⊃ (B v C))
2. (A ⊃ (B v C)) 1 reiteration
In the case of reiteration, it is obvious why truth value is preserved; it’s the
same claim! So, the rule of reiteration is stated like this:
Reiteration: P ≡ P
There are, however, alternative ways of expressing claims that are not as obviously equivalent. After you see how replacement works, the further examples will
make sense. So, rather than explaining how each rule of replacement works,
we will explain three more replacement rules and then provide ample
examples of the rest. Once you understand what a rule of replacement does,
all that is left is to memorize and practice using them. You can find the
complete list of these rules on the inside cover of this book.
Note: Some logic textbooks treat reiteration as a rule of inference rather than as a rule
of replacement: From any true claim, you may validly infer that claim. This makes no
difference in the practice of constructing proofs. We place it among rules of
replacement for teaching purposes; we find that it makes the role of replacement rules
clearer.
Double Negation
Something you may have already guessed from working with truth tables is
that another way of expressing P, apart from simply restating it, is to add
two negation operators: ~~P. This makes sense on a truth table: if P is true,
and ~P changes the truth value to F, then ~~P changes it back to true:
1.
~ ~ P
T F T
F T F

2.
~ ~ (A v B)
T F  T T T
T F  T T F
T F  F T T
F T  F F F
3.
1. P
2. ~~P 1 double negation

4.
1. ~~~~P
2. ~~P 1 double negation
3. P 2 double negation
Transposition (Contraposition)
The rule of replacement known as transposition (also called contraposition)
preserves truth for the same reason that modus tollens does. For any
conditional, if you know the negation of its consequent, you can derive the
negation of its antecedent:
modus tollens
1. (P ⊃ Q)
2. ~Q
3. ~P
Transposition says, if you know (P ⊃ Q), then, even if you don't know ~Q, you do know that if ~Q is true, then ~P is true: (~Q ⊃ ~P). So, any conditional is truth-functionally equivalent to a conditional in which its antecedent and consequent are flipped and negated. The rule looks like this:
Transposition: (P ⊃ Q) ≡ (~Q ⊃ ~P)
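As a quick check, this Python sketch of ours verifies the equivalence row by row:

from itertools import product

# (P > Q) and (~Q > ~P) agree on all four rows.
print(all(((not p) or q) == ((not (not q)) or (not p))
          for p, q in product([True, False], repeat=2)))  # True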
DeMorgan’s Laws
Logician Augustus DeMorgan (1806–1871) identified the next set of replacement rules, which are now named for him. If it is true that P is not
true and Q is not true (~P & ~Q) then the disjunction of P and Q (P v Q)
cannot be true, since neither conjunct is true: ~(P v Q). This means that (~P
& ~Q) is truth-functionally equivalent to ~(P v Q).
Similarly, if it is true that either P is not true or Q is not true (~P v ~Q), then the conjunction of P and Q (P & Q) cannot be true, because at least one conjunct is false: ~(P & Q). This leaves us with two very handy versions of the same replacement rule called DeMorgan's Laws:
~(P v Q) ≡ (~P & ~Q)
~(P & Q) ≡ (~P v ~Q)
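Both laws can be verified by brute force as well; here is a short Python sketch of ours:

from itertools import product

for p, q in product([True, False], repeat=2):
    assert (not (p or q)) == ((not p) and (not q))   # ~(P v Q) == (~P & ~Q)
    assert (not (p and q)) == ((not p) or (not q))   # ~(P & Q) == (~P v ~Q)
print("Both laws hold in all four possible worlds.")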
Commutativity
(P v Q) ≡ (Q v P)
(P & Q) ≡ (Q & P)
(P ≡ Q) ≡ (Q ≡ P)

Associativity
((P v Q) v R) ≡ (P v (Q v R))
((P & Q) & R) ≡ (P & (Q & R))

Tautology
P ≡ (P v P)
P ≡ (P & P)
Exercises

5. ((P & Q) ⊃ R)
6. (P v T)
7. (R & Z)
8. ((P ⊃ Q) ⊃ (~P v Q))
9. (Q v (R & S))
10. (P ≡ P)
Conditional Proof
A conditional proof allows us to construct a conditional from a set of
premises that may or may not include a conditional. Here’s the trick: Since
one sufficient condition for a true conditional is that it has a true consequent
(every conditional with a true consequent is true), then, for any claim we
know to be true, we can construct a conditional with that claim as the
consequent. We can see this more clearly by recalling the truth table for a conditional:
A ⊃ B
T T T
T F F
F T T
F T F
This means that, if we know B is true, we can construct the following true conditional: (A ⊃ B). Regardless of whether A is true or false, the conditional is true. Now, consider the following argument:
1.
1. (A & B)
2. C /.: (D ⊃ A)
3. D assumption (for conditional proof)
4. A 1 simplification
To complete this proof, close off the derivation line. Write your conclusion, then indicate to the right all the lines involved in the proof:
5. (D ⊃ A) 3–4 conditional proof
2.
1. ((X & Z) ⊃ ~P)
2. ~~Z
3. (W v P) /.: (X ⊃ W)
Consider, first, whether there is any way to derive the conditional using only your rules of inference. In this case, there is not. X is in premise 1, but it is not obviously connected with W. Therefore, you will need to construct a conditional proof, assuming X is true:
4. X assumption (for conditional proof)
Now consider whether you can derive W from the premises given. As it turns out, we can:
5. Z 2 double negation
6. (X & Z) 4, 5 conjunction
7. ~P 1, 6 modus ponens
8. W 3, 7 disjunctive syllogism
Now we see that, on the assumption that X, we can derive W. We are ready to close off our derivation line and draw our conclusion:
9. (X ⊃ W) 4–8 conditional proof
You can use the same strategy whenever the conclusion you need is a conditional.
Indirect Proof/reductio ad absurdum
For some arguments, our first eight rules of inference will be inconclusive.
You may apply rule after rule and see no clear way to derive a conclusion.
When you run into this problem, it is time to try our next proof method,
Indirect Proof, also called reductio ad absurdum (reduction to absurdity).
You may have heard the phrase, “Let’s assume, for the sake of argument,
that X,” only to discover that the arguer is really trying to prove that X is
false. One way to do this is to show that X entails some sort of absurdity,
like a contradiction. This is precisely what a reductio ad absurdum attempts
to show. If some proposition, P, entails a contradiction, P is false. For
example, consider the following argument for the claim, “There are no
married bachelors":
1. By definition, a bachelor is an unmarried man.
2. Assume there is at least one married bachelor.
3. Then that person is both married and unmarried. 1, 2
4. That is a contradiction, so the assumption is false: there are no married bachelors. 3, indirect proof
From the definition of “bachelor” and the assumption that there is at least
one person who is married and a bachelor, we can derive the contradiction
that someone is both married and unmarried. Since our assumption entails a
contradiction, we must conclude that it is false.
How do we know that a claim that entails a contradiction is false? There
are two reasons; one that is generally intuitive and one that is more
rigorous. First, logically we may derive the truth of any claim from a
contradiction. For instance, from the contradiction, “God exists and doesn’t
exist” (G & ~G), we can derive anything we want, including farfetched
claims like “Satan rocks” (S) and “Def Leppard is the greatest band of all
time” (D). Here’s how:
2.
1. (G & ~G)
2. G 1 simplification
3. (G v S) 2 addition
4. ~G 1 simplification
5. S 3, 4 disjunctive syllogism
So, if God both exists and doesn’t exist, we may derive that Satan rocks.
And we may continue:
6. (G v D) 2 addition
7. D 4, 6 disjunctive syllogism
And we could just keep going. Contradictions entail the truth of every
claim. But surely this is wrong. Every claim cannot be true. Therefore, any
claim that entails a contradiction must be false.
The second, more rigorous, reason to think claims that entail
contradictions are false is that our definition of truth entails that
contradictions are false. A claim is true if and only if its negation is false.
This is evident from our truth table for the conjunction. If one or both sides
of a conjunction are false, the conjunction is false:
P & Q        P & ~P
T T T        T F FT
T F F        F F TF
F F T
F F F
The truth table shows that any claim of the form (Q & ~Q) is false in every possible world. So, if an assumption P entails such a contradiction, we can reason as follows:
1. (P ⊃ (Q & ~Q))
2. ~(Q & ~Q) premise (definition of truth)
3. ~P 1, 2 modus tollens
Exercises

1.
1. ((A v B) ⊃ C)
2. (F & D)
3. A
4. (C ⊃ (E v H)) /.: ((A v B) ⊃ (E v H))

2.
1. (H & P)
2. ((H v A) ⊃ C)
3. D /.: (C & D)

3.
1. (~P v D)
2. (~D & B)

4.
1. (~S ⊃ R)
2. (~R v Q)

5.
2. ~(A v B)

6.
2. (O & P)

11.
1. B
2. ((B v D) ⊃ ~H)

12.
1. (~P & Q)
2. (Q ⊃ R)

13.
1. ((M ⊃ O) v S)
3. M

14.
1. (A ⊃ (D v F))
3. ((A ⊃ H) ⊃ B) /.: (F ⊃ B)

15.
1. (X v Y)
4. Z /.: R

16.
1. (B & C)

17.
1. X

18.
1. (P & (Q v R))

19.
2. ~F /.: ~A

20.
2. ((P ⊃ D) ⊃ H)
Even after you get the hang of applying valid rules of inference, you may be
tempted to draw inappropriate inferences. The most efficient way to avoid
this is to be aware of the most common mistakes. A mistake in reasoning is
called a fallacy. We will discuss fallacies in more detail in
Chapter 10, but in this chapter, we will explain three very common mistakes
in deductive reasoning.
Affirming the Consequent
Suppose you know that a conditional is true, and you also know that its consequent is true. What can you conclude about its antecedent?
1. (P ⊃ Q)
2. Q
3. ?
Nothing follows. Compare the valid modus ponens on the left with the invalid form on the right, called affirming the consequent:

modus ponens     affirming the consequent
1. (P ⊃ Q)       1. (P ⊃ Q)
2. P             2. Q
3. Q             3. P
1.
1. If it is raining, the sidewalks are wet.
2. The sidewalks are wet.
3. Therefore, it is raining.
Now, what might we conclude about the claim, “It is raining”? That the
sidewalks are wet makes it more likely that it is raining than no wet-
sidewalk-making event. But we cannot rule out other wet-sidewalk-making
events, such as a sprinkler system, someone with a water hose, melting
snow, a busted water pipe, a water balloon fight, and so on. Since “It is
raining” does not follow necessarily from the premises, the argument is
invalid.
It is important to remember that an invalid argument is not necessarily a
bad argument. All inductive arguments are invalid and we rely heavily on
those in many fields of great importance, including every scientific
discipline. Therefore, affirming the consequent is a formal fallacy in that its form does not preserve truth from premises to conclusion, and so it must not be employed in deductive arguments. Nevertheless, it is a useful tool in
scientific reasoning, as we will see in
Chapter 8.
Here are two more English examples:
2.
1. If it rained before midnight, the bridge is icy this morning.
2. The bridge is icy this morning.
3. Therefore, it rained before midnight.
[Not necessarily. Even if the premises are true, it might have rained after midnight to the same effect. Or perhaps it snowed before midnight, melted, and then became icy in the early morning hours.]

3.
1. If he's coming with us, I'll get sick.
2. I'm getting sick.
3. Therefore, he's coming with us.
[Not necessarily. Even if the premises are true, he might not be coming with us. This is because there are many other reasons to be sick: bad sushi, a Leonardo DiCaprio film, the flu, etc.]
Denying the Antecedent
A related mistake runs modus tollens backward: from a conditional and the denial of its antecedent, it concludes the denial of the consequent. Compare the valid modus tollens on the left with the invalid form on the right, called denying the antecedent:

modus tollens     denying the antecedent
1. (P ⊃ Q)        1. (P ⊃ Q)
2. ~Q             2. ~P
3. ~P             3. ~Q
1.
1. If she's going to the party, Kwan will not be there.
2. She's not going to the party.
3. Thus, Kwan will be there.
[Not necessarily. Even if she's not going, Kwan might have gotten sick or crashed his car, etc. There are lots of reasons Kwan might not be there even if the premises are true.]

2.
1. If the bulbs are lit, they are getting electricity.
2. The bulbs aren't lit.
3. Hence, they are not getting electricity.
[Not necessarily. The bulbs might be burned out. Even though they are getting electricity, they would not be lit.]
Affirming the Disjunct
Our first two fallacies highlight ways that reasoning from conditionals can
be tricky. Our third common fallacy highlights a way that reasoning from a
disjunction can be tricky. Disjunctive syllogism tells us that, if we know
that a disjunction is true, (P v Q), and we know that one of the disjuncts is
true, say ~Q, we can infer the other disjunct, P. But there are times when,
instead of knowing the negation of one of the disjuncts, we know the
affirmation of one of the disjuncts:
1. (P v Q)
2. Q
3. ?
In this case, we cannot infer anything about the other disjunct. This is
because, in a true disjunction, at least one disjunct is true and both might be
true. So, knowing that one disjunct is true isn’t sufficient for inferring the
truth or falsity of the other disjunct.
Because of disjunctive syllogism and because we often think of
disjunctions as expressing an exclusive “or” (either P or Q, and not both),
we might be tempted to infer the negation of the other disjunct:
disjunctive syllogism     affirming the disjunct
1. (P v Q)                1. (P v Q)
2. ~Q                     2. Q
3. P                      3. ~P
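A brute-force search, sketched here in Python as our own illustration, turns up the counterexample immediately:

from itertools import product

# Look for a row where (P v Q) and Q are true but ~P is false.
for p, q in product([True, False], repeat=2):
    if (p or q) and q and not (not p):
        print(f"P={p}, Q={q}: premises true, conclusion ~P false")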
Replacing P and Q with English sentences helps to show why affirming the
disjunct is invalid:
1.
1. Either London is the capital of England or Madrid is the capital of Spain.
2. London is the capital of England.
3. Therefore, Madrid is not the capital of Spain.
[Premise 1 is true because at least one disjunct is true. Remember, in logic, we interpret disjunctive claims "inclusively"; both sides can be true. Therefore, learning that London is, in fact, the capital of England does not provide reasons to conclude anything about whether Madrid is the capital of Spain.]
2.
1. Toyota is a Japanese company or the earth is not flat.
2. The earth is certainly not flat.
3. Hence, Toyota is not a Japanese company.
[Again, both disjuncts in premise 1 are true, so it is impossible to conclusively derive any claim about Toyota.]
Here are five more English examples; each commits one of the fallacies we have just discussed:

1.
1. He’s the president of the company or I’m a monkey’s uncle.
2. Here is the memo announcing that he is president.
3. So, I’m obviously not a monkey’s uncle.
2.
1. If I flip this switch, the light will come on.
2. I’m not flipping the switch.
3. Therefore, the light will not come on.
3.
1. It is either raining or storming.
2. It is certainly raining.
3. Thus, it is not storming.
4.
1. If you flip this switch, the light will come on.
2. The light is coming on.
3. Hence, you must be flipping the switch.
5.
1. If it drops below 0°C, either the roads will become icy or the
water line will freeze.
2. It is −5°C (below 0°).
3. So, either the roads will become icy or the water line will
freeze.
4. The roads are icy.
5. Therefore, the water line is probably not frozen.
Exercises
3.
1. ((S & E) & (A v G)) /.: (S v G)

4.
1. ((A & B) ⊃ C) /.: ((A & B) ⊃ ((A & B) & C))

7.
1. (P & (Q v R))

8.
1. (A ≡ B)

9.
2. (H v (D & A))
3. ~H
4. (T ⊃ C) /.: (T ⊃ X)

10.
2. ~Z
3. ((T v P) ⊃ X)
4. Y /.: (T ⊃ P)

1.
2. ~O /.: ~P

5.
2. (T ⊃ X)

6.
2. ((A v B) ⊃ T) /.: T

7.
2. L /.: ~J

8.
2. (F ⊃ H) /.: (G v H)

15.
1. ((A v B) ⊃ (C v ~D))
2. (A ≡ ~D)

16.
1. ((V ⊃ R) ⊃ (~F v V))
2. (F ⊃ G)

19.
1. (D ⊃ B)

20.
1. (S ⊃ L)
4. ~(P & X)
1.
1. If the stars are out, you can either see Orion (a winter
constellation) or Aquarius (a summer constellation).
2. You can see either Orion or Aquarius (because you can
see Orion).
3. Therefore, the stars are out.
2.
1. We’ll see you this Christmas if God’s willing and the creek
don’t rise.
2. It is not the case that God is willing and the creek don’t rise
(because of the huge flood this year).
3. Therefore, we won’t see you at Christmas.
3.
1. There will be a labor strike unless the company changes its
benefits plan.
2. I just saw all the factory workers on strike.
3. Hence, the company will change its benefits plan.
4.
1. If things don’t change, it will either be bad for us or good for
your family.
2. But things seem to be changing.
3. Thus, it will either be bad for us or good for your family.
4. These changes aren’t looking good for your family.
5. Therefore, they aren’t looking good for us.
5.
1. Unless Nintendo does something fast, the Wii will be
outdated or at least cease to be competitive.
2. Nintendo has announced no immediate plans.
3. So, either the Wii will be outdated or cease to be
competitive.
4. No one else can compete with the Wii’s design.
5. So, it will be outdated.
Real-Life Examples
Consider the case of a checkout clerk at Walmart who puts her hands in the till and walks off
with a couple of hundred bucks of the company’s money. That clerk could expect to face
prosecution and jail.
Now consider her boss, who cheats her of hundreds of dollars of pay by failing to accurately
record the time she clocked in, or the overtime she worked. Maybe, just maybe, after the
worker risks her job to complain, she might get back wages. In rare cases, the company might
even pay a fine.
Every year, workers are cheated out of tens of billions of dollars of pay—more than larceny,
robbery and burglary combined. …
Even so, no boss ever goes to jail. But tell me, what’s the difference between a clerk putting
her hands in the till and the boss putting his hands on her paycheck? …
What’s the difference between stealing from the company and stealing from the worker? …
Until people are held personally and criminally accountable, banker fraud, like payroll fraud,
will continue.
1. If a checkout clerk steals money from a cash register, the clerk could
face prosecution and prison.
…
C: Therefore, America should send more people to prison.
2. What Is Real?
The following is a difficult passage from the Irish philosopher George
Berkeley. Berkeley thought it strange that we think there are objects
that exist outside of our minds, independently of whether they are
perceived in thought. In the passage below, Berkeley argues that what
it means to perceive something implies that whatever we are perceiving
is essentially bound up with that perception; precisely because of how
we understand perceiving, the things we perceive cannot exist
independently of perception.
I. Do your best to reconstruct Berkeley’s argument using
propositional logic. Even if you get stuck, this exercise will help
you focus on the important parts of the argument and to sift them
from the extraneous bits. It will also help you understand the
argument better.
II. Write a brief essay responding to Berkeley. Assume that you
disagree with his conclusion, and challenge the argument. Which
premise would you deny? Or which part of his reasoning
strategy?
*****
In
Chapters 7–
10, we will explain inductive reasoning in some depth. In
Chapter 7 we will explain the notion of inductive strength by
introducing some basic concepts of probabilistic reasoning. We will
also discuss a puzzle that plagues all inductive reasoning called the
problem of induction. In
Chapter 8 we will look at three common types of inductive argument
—generalization, analogy, and causation—and discuss their
strengths and weaknesses. In
Chapter 9 you will learn how scientific reasoning works and how it
helps to mitigate many of the weaknesses of basic inductive
arguments. And finally, in
Chapter 10 you will learn many of the ways inductive reasoning can
go wrong by learning a number of informal fallacies.
7
Probability and inductive reasoning
In this chapter, you will learn the basics of reasoning about probability claims. We will
start by expanding on our earlier discussion of inductive strength (
Chapter 1) by introducing strength-indicating words and the concept of
probability. We will distinguish probability from statistics, explain three types of
probability, and then explain some of the ways probability is used in inductive
reasoning.
Inductive Arguments
As with deductive arguments, inductive arguments are defined by their
structure. Unlike deductive arguments, no inductive arguments are valid.
This means that the conclusion will only follow from the premises with
some degree of probability between zero and 99.9999… percent. One way
to say it is that an inductive argument is ampliative (think, “amplify” in the
sense of “to add to”), that is, there is information in the conclusion that is
not found in the premises. In contrast, deductive arguments are non-
ampliative; their conclusions do not include any information that is not
already in the premises. To be sure, deductive conclusions may be surprising and lead to new discoveries, but the point is that whatever we discover was already guaranteed by the truth of the premises. In inductive arguments, our information is, in some sense, incomplete.
Consider these two examples:

1.
1. All men are mortal.
2. Socrates is a man.
3. Therefore, Socrates is mortal.

2.
1. Most men are mortal.
2. Socrates is a man.
3. Therefore, Socrates is probably mortal.
In the first argument, the conclusion does not include any information
beyond what can be understood from the premises. We can use a Venn
diagram to see this more precisely (see
Chapter 3 for more on Venn diagrams):
All the things that are men are within the scope of (inside the circle of) all
the things that are mortal (recall from
Chapter 3 the blacked-out space means there’s nothing there). And all the
things that are Socrates are within the scope of (inside the circle of) things
that are men. Because of these premises, we know exactly where Socrates
fits. In order to draw this conclusion, we do not have to add anything to the
premises. Therefore, we call this a non-ampliative inference.
In the second argument, on the other hand, there is more information
in the conclusion than is strictly expressed in the premises. A Venn diagram
also makes this clear:
Inductive Strength
As we saw with the inductive argument about Socrates, there are some
imprecise quantifiers that suggest whether an inductive argument is strong,
such as “some,” “many,” and “most.” Consider the following two examples:
1.
1. Most Americans are communists.
2. Osama Bin Laden is an American.
3. Osama Bin Laden is probably a communist.

2.
1. Some Americans are communists.
2. Osama Bin Laden is an American.
3. Osama Bin Laden is probably a communist.
Types of Probability
Probability is a measure of the plausibility or likelihood that some event
will occur given some set of conditions. The three primary sets of
conditions that probabilities express are (1) the way the world is (i.e., sets
of facts), (2) the evidence we have that the event in question will occur (i.e.,
sets of data), or (3) our personal assessment of the event’s occurrence given
the conditions we are aware of and our best judgment. Our goal in the
remainder of this chapter is not to explain in any depth how to calculate
probabilities (we leave that work for a math class), but to simply explain
what sorts of events probabilities allow us to talk about and how to use and
evaluate probabilities in arguments.
If we are interested in the probabilities associated with (1), we are
interested in the chances or objective probability of an event. For instance,
the way the world is determines the chances that a normal quarter will land
on heads when flipped. If the quarter has a rounded edge, is fairly weighted
on both sides, and is landing on a flat surface and gravity remains constant
(all of which prevent it from landing on its edge, favoring one side, or not
landing at all), then we say the chances of it landing on heads are one out of
two, or 50 percent. This doesn’t mean that every other time I flip a fair coin
it will land on heads. The objective probability is simply the chance the
coin will land on heads for any single toss. Similarly, if we draw a card
from a regular playing deck of fifty-two cards, the distribution of
denominations and suits objectively determines that the chances of drawing
an ace are four out of fifty-two, or 7.69 percent.
Alternatively, if we are dealt a card from a regular playing deck of fifty-
two cards, the world determines the chances the card dealt to us is an ace,
namely, 1 in 1 or 0 in 1, 100 percent or 0 percent. This is because the card
either is an ace or it isn’t. Once the card is in our hand, the world dictates
that we either have the card or we don’t. You may now wonder: if this is the
case, why do we bet on the chances of a card turned face down making a
good hand (as when we are playing a game like Texas Hold ‘Em)? Frankly
put, that’s not what we’re betting on. In that case, we bet, not on objective
chances of the card in our hand (we already know it either is or isn’t the
card we want), but on the evidence we have about the cards already in play.
We call this type of probability epistemic probability.
Epistemic probability corresponds to condition (2), the evidence we have
that an event will occur. It is a measure of the likelihood of an event given
our current evidence. Let’s begin simply, by imagining drawing a playing
card from a deck of fifty-two cards. Imagine you draw an ace:
What is the likelihood that the second card you draw will be an ace?
Once you’ve drawn, the objective probability that it is an ace is either 1 or
0. The order of the cards in the deck has already been determined by
shuffling the cards, so the card you choose is either an ace, P(1), or it isn’t,
P(0). But, since you do not know the order of the cards, you have to make a
decision about the likelihood the next card is an ace on the evidence you
have about the deck of cards and the card in your hand.
You know there are four aces in the deck (an ace for every suit), so, given
that you have one in your hand, three are left. You also know that there are
fifty-two cards in the deck, which, minus the one in your hand, leaves fifty-
one. So, the epistemic probability that the next card is an ace is three out of
fifty-one, also represented as 3/51, or roughly 5.9 percent.
Let’s try a more complicated example. Take, for example, a game of
Texas Hold ‘Em, where you have two cards in your hand, three cards face
up on the table, and two cards face down:
The goal of Texas Hold ‘Em is to make the best five card hand you can
using the two cards in your hand (which can be used only by you) and three
of the five cards on the table (which are shared by everyone playing the
game).
To keep things simple, let’s say that, in your hand you have two 9s:
In dealing the cards, the world (the order of the cards in the deck) has
already determined the number and suit of the face-down cards. But, in
order to survive the game without going broke, you need to figure out the
likelihood that the cards on the table will give you the hand you need to
beat the other players.
Now, one way to draw a really good hand given the cards on the table
and that you were dealt these 9s is to draw another 9 for three-of-a-kind.
What is the likelihood that there is another 9 lurking in the two cards that
are face down? The probability that any one of those cards is a 9 is
precisely two out of forty-seven, or around 4.2 percent. This is because you
have two in your hand, so only two 9s remain in the deck, there are no 9s
face up on the table, and there are forty-seven cards left in the deck (or in
play).
In fact, the epistemic probability is two out of forty-seven even if there
are other players with cards in their hands. The world has determined the
objective probability that one of the next two cards is a 9, whether those
cards are on top, further down in the deck, or in someone else’s hand. As far
as your evidence goes, you only know the values of five cards.
To calculate the probability that one or the other of those cards is a 9, you take the likelihood that any one card is a 9, given that there are two left (2/47, or about 4.2 percent), and multiply that by two (since the two cards were drawn at random, this approximates the probability that a random pair of cards from the deck contains a 9). So, if any one draw is 4.2 percent likely to be a 9, having two draws increases your chances to roughly 8.4 percent.
Remember, 8.4 percent is not the objective probability that one of those
two cards is a 9. The objective probability has been settled by the world
independently of what we know about it. Each of those cards is a 9 or it
isn’t. In this case, 8.4 percent expresses an epistemic probability. And how
you bet depends on it.
So 8.4 percent is the probability that one of the two cards on the table is a
9. You should treat them just like random draws from the deck (since, if
you’re at a fair casino, that’s what they were). So, given this probability,
should you bet? Whether you should bet on an epistemic probability
depends partially on our third type of probability: credence. We will discuss
this in detail shortly. For the moment, imagine you do decide to bet and stay
in the game.
As it turns out, the first round reveals a 3 of diamonds:
Now there are only forty-six unseen cards, and two of them are 9s. And since you know the revealed card came from the deck, the epistemic probability that the remaining face-down card is a 9 drops to the probability of drawing a 9 randomly from a deck of forty-six cards with two 9s, that is, 2/46 or about 4.3 percent.
Should you bet on this round? Again, whether you should depends on the
third major type of probability: credence, or subjective probability, which corresponds to condition (3). A subjective probability is a measure of how
likely you believe or feel the event will happen—it is similar to a gut-level
judgment. For instance, you can determine the objective probability of the
cards on the table by looking at them (don’t try this during a game). You
can determine the epistemic probability of the cards on the table by drawing
an inference from the number of cards on the table and those in your hand.
But neither piece of information can tell you whether to bet. Your belief that
the card you need is the one face down on the table is determined by
epistemic probabilities plus a variety of other factors, including the
likelihood that three 9s is a better hand than your opponents’, the amount of
money you will lose if you are wrong compared to the amount of money
you will win if you are right, and your willingness to take risks. Subjective
probability involves calculating the personal costs to you of betting.
For example, if after the 3 of diamonds is turned over, one of your opponents at the
table bets a very large sum of money, this might affect your judgment about
the likelihood of winning, even though nothing about the objective or
epistemic probabilities have changed. She may be bluffing, but she may
have a good hand. For instance, she might have two jacks in her hand. Or
she might have one of the other 9s and is hoping one of the two cards is a 7
for a straight. If one of them is a 7, then even if the other is a 9, you still
lose (because a straight beats 3-of-a-kind). Obviously, weighing the
subjective probabilities gets harder as the number of players increases.
There is little evidence on which to calculate subjective probabilities
(unless your opponent has an obvious tell), but they are distinct from
epistemic probabilities. Nonetheless, credences are still usually expressed
as probabilities. If the weather person says there is a 70 percent chance of
rain tomorrow, and you trust them, then your credence will be that there is a
70 percent chance of rain. Note that you might trust the weather person
even if you have no other information about the weather—for example,
what the radar looks like, the science of weather forecasting, the objective
reliability of the weather person. This suggests that your belief that rain is
70 percent likely is, at best, a credence.
There are, to be sure, ways of forming rational probabilistic judgments
under this sort of uncertainty. You may rely on your beliefs about the
tendency of an opponent to bluff (which may involve some epistemic
probabilities), or about your own desire not to lose the money you have put
in (if you’re betting with pennies, you may be much more willing to take
risks than if you’re playing with hundreds of dollars). Other examples of
credences are less formal, and include beliefs like: “Based on my
experience (or my trick knee), I believe it will be a cold winter,” “Feels like
rain is coming,” and “You know those two, and it is highly likely that
they’ll get married.”
It is worth noting that, with betting games, there is at least one widely
accepted systematic method of calculating credences. Simply compare what
are called the “pot odds” (the amount you might win versus the amount you
have to bet) with the “card odds” (the likelihood that you will not get the
card you need versus the likelihood that you will). If the pot odds are higher
than the card odds, you have a reason to bet. The idea is this: If the payoff
odds are better than the odds you will lose, it is better to play out the hand
than to fold. (Some say that using this method “makes you more likely to
win,” but the “likely” here cannot refer to either objective or epistemic
probabilities for reasons that will soon become clear.)
So, let’s say the pot odds are 10 to 1, written: 10:1 (e.g., you have to pay
$1 to win $10). In addition, let’s say that your cards are the cards from
above. In this case, the card odds are 22:1 (2/46 reduces to 1/23; there is
one chance in 23 that you will get your card and 22 that you won’t, thus 22
to 1 against). Since the pot odds (10:1) are lower than the card odds (22:1),
you shouldn’t bet.
On the other hand, there are other ways to get a winning hand from this
set of cards. Since you have three hearts, then, instead of trying for three 9s,
you might instead try for two more hearts to make a flush. What is the
likelihood that the two face-down cards are hearts? With three hearts in
your hand, ten are left in the deck. The likelihood of getting one heart is
10/47, or 21.2 percent (37:10 odds, roughly, 4:1, 37 chances you won’t get a
heart, 10 that you will). Things are different now than before with the 9s.
With the 9s, you only needed to know the probability that one of two
randomly drawn cards is a 9. In this case, you need to know the likelihood
that both of two randomly drawn cards are hearts. Therefore, instead of
multiplying 21.2 percent by 2 (for two chances at one card), we would need
to multiply the likelihood of drawing one heart (10/47) by the likelihood of
then drawing a second heart (9/46), which is 90/2162, which reduces to
about 1/24. 1/24 translates to 23:1 odds against. Again, you should probably
fold this hand. Of course, things are much more complicated at a real card table, since you must calculate (and pretty quickly) the chance of getting either the third 9 or the two hearts (about 12.6 percent, or roughly 7:1 against, by the way), because either will increase your chances of winning.
You might ask why this method of calculating probability is categorized
as a credence instead of an epistemic probability. The answer is simple:
This calculation tells you nothing more about your chances of winning than
the card odds alone. The likelihood that you will get a certain set of cards
remains fixed given the organization of the deck regardless of how much
money is at stake. Nevertheless, whether this method tells you to bet
changes dramatically depending on how much money is at stake. With the
card odds, you have already calculated the epistemic probability. To see this
more clearly, imagine trying to roll one side of a twelve-sided die (11:1
against). Whether I give you 10:1 pot odds (you shouldn't bet) or 1000:1
pot odds (you should bet), your chance of rolling your number is still 1/12,
or 8.3 percent. Comparing card odds with pot odds has proven useful to
many gamblers, but it ultimately does not affect your likelihood of winning.
For the purposes of constructing and evaluating arguments, and unless
you find yourself in a casino, it is helpful to focus only on epistemic
probabilities. Credences, as important as they are, are difficult to assess
rationally. You may bet me $1 that the next flip of this coin won’t be tails. If
I reason that, because the last five flips have been heads, I will take your
bet, my reasoning would be fallacious, though it “seems” right “in my gut.”
To consider this a worthy bet, my credence in the coin’s likelihood of being
tails must be different from yours. In this case, my credence can be traced to
poor reasoning (called the gambler’s fallacy, see
Chapter 8). But what accounts for your credence? Credences are
notoriously difficult to evaluate and are often fallacious, so, where we can,
our best bet (literally) is to reason as closely as possible with our epistemic
probabilities while keeping acutely aware of how much we value certain
outcomes. In other words, do the math, but keep in mind what's at stake for us.
Similarly, when scientists evaluate the laws and events of nature, they are
investigating what they hope are objective probabilities—they want to know
what the world is like. Unfortunately, even scientists are limited to their
evidence. Scientists depend on evidence to learn about nature, so
almost every probability they express depends wholly on the evidence they
have for that claim. We would love to have objective probabilities, but we
are stuck, most of the time, with epistemic probabilities and credences.
For more fun, mathematician Andrew Critch has a web page on how you
can improve your credence judgments. Check it out at:
https://acritch.com/credence/.
Conditional Probabilities
In one sense, all probabilities are conditional; the truth of a probabilistic
claim depends on its relationship to a set of conditions (we’ve described
those conditions as objective, epistemic, or subjective). But the phrase
conditional probability is reserved for probabilities that are conditional on
other probabilities. Probabilities that depend on other probabilities are
called, intuitively, dependent probabilities—a probability that depends on
(is affected by) the probability of another event. These are distinct from
independent probabilities, where a probability is not affected by the
probability of another event. The latter are easier to understand than the
former. If you roll a single die twice, the probability of rolling a 3 the second time is exactly the same as the first, no matter what you rolled
first. The probability of drawing an ace from a deck of cards on the second
draw (after you put the first back and reshuffle) is the same as the first.
These are independent probabilities.
But imagine drawing a second card but not replacing the first before you
do. The probability that you will draw an ace on the second draw, as we saw
in some of the examples above, depends on what you drew the first time. If
you are looking for aces and you drew an ace first, then the probability of
drawing a second ace is 3 out of 51. If you drew something else first, then
your probability of drawing an ace second is 4 out of 51.
Dependent or conditional probabilities need not depend on completely
different events (drawing twice); they can depend on a variety of
conditions, too. For instance, imagine if someone across the room rolled a
die and told you the number is an even number. What is the probability that
she rolled a 6 given that you know the roll is an even number? Now, these
cases can be worked intuitively. There are three evens, only one of which is
a 6; so, given that the number is even, the conditional probability that she
rolled a 6 is 1/3, or P(0.333).
But what we really want to know is how to calculate such outcomes when the examples are more complicated. We need a formula. Consider three probabilistic events:

1. P(A & B): the probability of drawing an ace from a full deck and then drawing a second ace.
2. P(B|A): the probability of drawing a second ace, given that the first card drawn was an ace.
3. P(B|A) for independent events: the probability that a fair coin lands heads, given that its previous flip landed heads.
The vertical line “|” stands for the English phrase “given that”—it is not a
mathematical function like multiplication or division. The probability of
drawing an ace is 4/52. The probability of drawing an ace after we’ve
drawn an ace (subtract one ace from the deck and subtract one card from
the whole deck) is 3/51. So, the value of the claim in 1 is (4/52 × 3/51), or (1/221).
In the second case, things are more complicated. You’ve drawn one ace.
Now you want to know the probability of drawing a second. Now, on one
hand, we already know there are only three aces left and only fifty-one
cards to choose from. So, we can just calculate 3/51 to get P(0.059). But
what if we wanted to calculate it? We would need to take our answer to 1
and then show that we actually drew the first ace. To do that, we divide the
probability we got in 1 by the probability that we drew the first ace. So,
what we want is not P(A&B), but the probability of B given A: P(B|A). And the formula for that is:

P(B|A) = P(A & B) / P(A)

For the aces, that is (1/221) divided by (4/52), which equals 3/51, or P(0.059), just as we reasoned above. Now apply the same formula to the third event, two flips of a fair coin. The probability of two heads in a row, P(A & B), is (1/2 × 1/2), or P(0.25), and the probability that the first flip is heads, P(A), is P(0.5). So P(B|A) = P(0.25)/P(0.5). And this is just equal to P(0.5), the same as flipping any fair coin at any time. Actually, this is a good test for whether the events are dependent or independent. If P(B|A) = P(B), then the events are independent and so are the probabilities.
Reasoning about probabilities is not easy, especially as the numbers get
larger. But we hope this primer will at least help you to avoid elementary
mistakes when thinking about probabilities. In the
next chapter we will take a look at some common fallacies that can occur
when reasoning with probabilities.
Choice 1: P(0.17) × ($10) = 1.7
Choice 2: P(0.5) × ($8) = 4
[The probability here is 0.5 because 1, 3, and 5 are half the options on a die. The win is only $8 because you paid $2 to play.]
Even though the cost to play is higher, the probability of rolling the needed
number is higher, so the expected value is higher. The second choice is the
better option.
Now, let’s look at a less precise example. Imagine you are trying to
decide whether to get a puppy. You love dogs, and you know how to take
care of them, and you’re really excited to have a puppy in your life again.
However, you also work long hours and you like to travel, both of which mean you will need help getting the dog the exercise and support it needs.
Do you cave and go to the shelter, or do you refrain?
To calculate the expected values of indulging and refraining, first identify
the relevant values associated with those options. In this case, a puppy will
be a long-term companion, and you will have a lot of fun going to the park
and the lake and having it snuggle with you at night. But there’s a downside
to a puppy. They’re a lot of work to house-train, you’ll have to clean up
after it, and it will likely shed. The first year’s vet bills are expensive, and
you’ll have to pay a dog walker when you have to work late. You also want
the freedom to go out with friends after work and travel at a moment’s
notice, and kennels can significantly increase the price of travel. The values
are not strictly monetary, so you have to be a bit arbitrary with the numbers.
Let’s set the value of getting a puppy even given all the costs pretty high,
let’s say +10. Now, if you don’t get a puppy, you’ll likely have all the
benefits of freedom and travel, so let’s set that number pretty high, too: +9.
We can represent the relationships like this:
Expected values:
Looking at this, you realize that if you could get a puppy and keep all the
freedom and travel, that would be even better, so you set that value at +15.
And if you couldn’t do either—that is, you couldn’t get a puppy and you
lost out on freedom and travel, that would be pretty terrible. You set that at
–10. Filling in our graph with these numbers, we get this:
Expected values:
1.
2.
3. The laws of nature guarantee that the future will look like the past.
The conclusion of 2 follows validly from the premises, but we need some
reason to think premise 3 is true. We might do this with the following
argument:
3.
3. Therefore, the laws of nature guarantee that the future will look like the past.
Like 1, the conclusion of argument 3 does not follow from the premises
without assuming there is some reason to think there is a connection
between the way the laws have behaved and the way they will behave. We
might make this assumption explicit with another premise:
4.
3. Therefore, the laws of nature guarantee that the future will look like the past.
5.
And of course, an argument like this leaves us no better off than where we
began.
Taking stock, here is the problem: You seem clearly justified in believing
the sun will rise tomorrow. You should be as sure of this as almost anything
(really!). Yet, we do not directly experience that the sun will rise tomorrow
(we are not yet there), and any deductive argument for the claim that the
sun will rise tomorrow is circular (the past does not entail anything about
the future; and adding any premise to that effect assumes what we are trying
to prove). So, where do we go from here?
The type of circularity involved in the above deductive arguments is
called premise-circularity, which means that there will always be a
premise that explicitly restates the conclusion. A simpler example of a
premise-circular argument would be a child attempting to justify the claim,
“I need candy,” by simply stating “Because I need it.”
But what if we could construct an argument that does not require an
explicit premise about the past’s connection to the future? Philosopher of
science Wesley Salmon (1925–2001) offers an insightful example of this
sort of argument:
A crystal gazer claims that his method is the appropriate method for
making predictions. When we question his claim he says, “Wait a
moment; I will find out whether the method of crystal gazing is the
best method for making predictions.” He looks into the crystal ball
and announces that future cases of crystal ball gazing will yield
predictive success. If we protest [arguing from induction] that his
method has not been especially successful in the past, he might well
make certain remarks about parity of reasoning. “Since you have used
your method to justify your method, why shouldn’t I use my method
to justify my method? … By the way, I note by gazing into my crystal
ball that the scientific method is now in for a very bad run of luck.”2
6.
1. Crystal-ball-gazing tells me crystal-ball-gazing is reliable.
2. Therefore, crystal-ball-gazing is reliable.

7.
1. Induction tells me induction is reliable.
2. Therefore, induction is reliable.
Taking stock once again, we find that direct experience does not justify
induction, deductive arguments for induction are premise-circular, and
inductive arguments for induction are rule-circular. Therefore, induction is
an unjustified form of reasoning. If this argument is sound, science and all
other inductive reasoning processes aimed at justifying the truth of claims
(as opposed to, say, their usefulness) are in serious trouble.
Philosophers have offered a variety of solutions to the problem of
induction, but none are widely accepted. For example, American
philosopher Laurence BonJour (1943–) argues that there is a way of
justifying induction that is not considered by Hume, namely, inference to
the best explanation. We will take a closer look at inference to the best
explanation in
Chapter 9, but for now it is enough to note that BonJour argues that, among
competing explanations, the best explanation for induction’s past success is
that nature is uniform. Inference to the best explanation is a complicated
form of reasoning, and BonJour argues that it is justified a priori, that is,
independently of experiential evidence. We need no experience, either
directly of the future, or of the past, or of any connection between the past
and the future to justify induction. We may rely on our a priori evidence
that inferences to the best explanation are justified. And from this, we may
justify our belief in induction. To motivate this conclusion, we would need
to explain BonJour’s account of a priori justification, and how inference to
the best explanation is justified on non-experiential evidence. Do not
quickly dismiss it on account of our brief discussion. Many philosophers
(including one of the authors—Jamie) argue that our beliefs about the rules
of mathematics and logic are similarly justified.
American philosopher John Hospers (1918–2011) offers a defense of
induction very different from BonJour’s. Hospers concedes that arguments
for induction are circular, but argues that, in this case, circularity does not
constitute sufficient reason to reject induction. In fact, he argues, most of
our deductive rules suffer the same fate:
8.
1. If snow is white, modus ponens is valid.
2. Snow is white.
3. Therefore, modus ponens is valid.

9.
1. Either modus ponens is valid or 2 + 2 = 5.
2. It is not the case that 2 + 2 = 5.
3. Therefore, modus ponens is valid.
Exercises
1.
1. Many Muslims are Sunni.
2. Janet is a Muslim.
3. Therefore, Janet is Sunni.
2.
1. Most violent criminals are recidivists.
2. Joe is a violent criminal.
3. Therefore, Joe is likely a recidivist.
3.
1. At least a few lawyers are honest.
2. Nayan is a lawyer.
3. So, Nayan might be honest.
4.
1. Aces can be played high or low in blackjack.
2. In this game of blackjack, you were just dealt the ace of
spades.
3. Therefore, you can play it high or low.
5.
1. In most cases, weddings are nauseating affairs.
2. This wedding we're invited to will likely be a nauseating affair.
6.
1. Some politicians are trustworthy.
2. Ayush is trustworthy.
3. Hence, he might be a politician.
7.
1. The islands are nice this time of year.
2. You’re going to Fiji?
3. It should be great!
8.
1. This restaurant has a reputation for bad service.
2. I don’t suspect this visit will be any different.
9.
1. All the watches I’ve worn break easily.
2. I can’t imagine this one will be different.
10.
1. Every swan scientists have studied has been white.
2. Scientists are on their way to study swans in Australia.
3. They will probably find only white swans there, too.
Real-Life Examples
*****
At this point, we should pause to note two features of this argument. First,
the argument does not say that the fine-tuning evidence proves that the
universe was designed, or even that it is likely that the universe was
designed. In order to justify these sorts of claims, we would have to look at
the full range of evidence both for and against the design hypothesis,
something we are not doing in this chapter. Rather, the argument merely
concludes that the fine-tuning strongly supports theism over the atheistic
single-universe hypothesis.
In this way, the evidence of fine-tuning argument is much like
fingerprints found on the gun: although they can provide strong evidence
that the defendant committed the murder, one could not conclude merely
from them alone that the defendant is guilty; one would also have to look at
all the other evidence offered. Perhaps, for instance, ten reliable witnesses
claimed to see the defendant at a party at the time of the shooting. In this
case, the fingerprints would still count as significant evidence of guilt, but
this evidence would be counterbalanced by the testimony of the witnesses.
Similarly the evidence of fine-tuning strongly supports theism over the
atheistic single-universe hypothesis, though it does not itself show that, everything considered, theism is the most plausible explanation of the world.
Nonetheless, as I argue in the conclusion of this chapter, the evidence of
fine-tuning provides a much stronger and more objective argument for
theism (over the atheistic single-universe hypothesis) than the strongest
atheistic argument does against theism.
The second feature of the argument we should note is that, given the truth
of the prime principle of confirmation, the conclusion of the argument
follows from the premises. Specifically, if the premises of the argument are
true, then we are guaranteed that the conclusion is true—that is, the
argument is what philosophers call valid. Thus, insofar as we can show that
the premises of the argument are true, we will have shown that the
conclusion is true. Our next task, therefore, is to attempt to show that the
premises are true, or at least that we have strong reasons to believe them.
*****
Forecasts issued by the National Weather Service routinely include a “PoP”
(probability of precipitation) statement, which is often expressed as the
“chance of rain” or “chance of precipitation.”
EXAMPLE
ZONE FORECASTS FOR NORTH AND CENTRAL GEORGIA
NATIONAL WEATHER SERVICE PEACHTREE CITY GA
119 PM EDT THU MAY 8 2008
…
THIS AFTERNOON … MOSTLY CLOUDY WITH A 40 PERCENT
CHANCE OF
SHOWERS AND THUNDERSTORMS. WINDY. HIGHS IN THE
LOWER 80S. NEAR
STEADY TEMPERATURE IN THE LOWER 80S. SOUTH WINDS 15
TO 25 MPH.
.TONIGHT … MOSTLY CLOUDY WITH A CHANCE OF SHOWERS
AND
THUNDERSTORMS IN THE EVENING … THEN A SLIGHT CHANCE
OF SHOWERS
AND THUNDERSTORMS AFTER MIDNIGHT. LOWS IN THE MID
60S. SOUTHWEST
WINDS 5 TO 15 MPH. CHANCE OF RAIN 40 PERCENT.
What does this “40 percent” mean? … will it rain 40 percent of the time? …
will it rain over 40 percent of the area?
The “Probability of Precipitation” (PoP) describes the chance of
precipitation occurring at any point you select in the area. How do
forecasters arrive at this value? Mathematically, PoP is defined as follows:
PoP = C × A, where “C” = the confidence that precipitation will occur
somewhere in the forecast area and “A” = the percent of the area that will
receive measurable precipitation, if it occurs at all.
So … in the case of the forecast above, if the forecaster knows
precipitation is sure to occur (confidence is 100 percent), he/she is
expressing how much of the area will receive measurable rain. (PoP = “C”
× “A” or “1” times “0.4” which equals 0.4 or 40 percent.)
But, most of the time, the forecaster is expressing a combination of
degree of confidence and areal coverage. If the forecaster is only 50 percent
sure that precipitation will occur, and expects that, if it does occur, it will
produce measurable rain over about 80 percent of the area, the PoP (chance
of rain) is 40 percent. (PoP = 0.5 × 0.8 which equals 0.4 or 40 percent.)
In either event, the correct way to interpret the forecast is: there is a 40
percent chance that rain will occur at any given point in the area.
In this chapter, we introduce three types of inductive argument. You will see how
reasoners generalize from samples to populations and from past events to future ones.
You will see how comparisons between objects and events can be used to draw
inferences about new objects and events, and how causal relationships are either
appealed to or inferred inductively. In addition, we will explain some common mistakes,
or fallacies, committed when reasoning inductively.
Inductive Generalization
We often hear reports of new experiments about medical treatments, or
behaviors that will make our lives better:
“CRESTOR can lower LDL cholesterol up to 52% (at the 10-mg dose versus 7% with
placebo)” (www.crestor.com).
“Research has shown that regular aspirin use is associated with a marked reduction
from death due to all causes, particularly among the elderly, people with heart disease,
and people who are physically unfit” (“Heart Disease and Aspirin Therapy,”
www.webmd.com).
“A U.S. Department of Transportation study released today estimates that 1,652 lives
could be saved and 22,372 serious injuries avoided each year on America’s roadways if
seat belt use rates rose to 90 percent in every state” (www.nhtsa.gov).
1.
1. Most of the 35-year-old test subjects responded well to the drug.
2. Therefore, most 35-year-old men would respond well to the drug.

2.
1. Almost all the sixth-graders we interviewed love Harry Potter.
2. Therefore, almost all sixth-graders love Harry Potter.

3.
1. 75% of 300 Labour Party members we interviewed said they approve of Bill X.
2. Therefore, 75% of Labour Party members approve of Bill X.
In this case, when the survey participant gets to question 2, they have just
considered the case in question 1. Because of this, they might answer
question 2 differently than if they had not considered the case. They might
not reject abortion in every conceivable case. The point is that, while the
questions really are testing for opinions about abortion, and so the survey is
valid, the method of gathering the information yields different results based
on an irrelevant factor, namely, the order of the questions. This means the
survey is unreliable.
To overcome ordering bias, researchers construct a series of different
surveys, where the questions are presented in different orders. If the sample
is large enough, the ordering bias will cancel out.
Other threats to reliability include confirmation bias (unintentionally
choosing data that favors the result you prefer) and the availability heuristic
(using terms that prompt particular types of responses, such as “Is the Gulf
War just another Vietnam?” and “Is the economy better or worse than when
we only paid $1.50 per gallon for gasoline?”). Try to construct testing
instruments that mitigate these biases. Otherwise, you will not know
whether your results are reliable.
One last word of caution: Notice in examples 1–3 above that the
conclusions do not claim more for their populations than the premises do
for their samples. For example, the premise “most 35-year-old test subjects”
implies something about “most 35-year-old men,” not about “all 35-year-
old men.” Similarly, the premise “75 percent of 300 Labour Party
members” implies something about “75 percent of all Labour Party
members,” not about “all Labour Party members.” Inferences drawn from
incomplete information are still beholden to the limits of the quantifiers in
the premises. A conclusion that generalizes beyond what is permitted in the
premises also commits the fallacy of hasty generalization (see
Chapter 10 for more on hasty generalization).
1.
1. Almost all politicians lie.
2. Blake is a politician.
3. Hence, …
2.
1. We surveyed 80% of the school, and all of them agreed
with the class president.
2. John goes to our school.
3. Therefore, …
3.
1. You haven’t liked anything you’ve tried at that restaurant.
2. Therefore, this time…
4.
1. Most of the philosophers we met at the conference were
arrogant.
2. And those two we met at the bar weren’t any better.
3. Oh no, here comes another. I bet…
5.
1. In the first experiment, sugar turned black when heated.
2. In experiments two through fifty, sugar turned black when
heated.
3. Therefore, probably…
6.
1. We surveyed 90% of the city, and only 35% approve of the
mayor’s proposal.
2. Thus, most people probably…
7.
1. Every time you have pulled the lever on the slot machine,
you’ve lost.
2. So, …
8.
1. 65% of 10% of citizens get married.
2. Terri is a citizen.
3. Hence, …
9.
1. Every girl I met at the bar last night snubbed me.
2. Every girl I met at the bar so far tonight has snubbed me.
3. Therefore, …
10.
1. All the politicians at the convention last year were jerks.
2. All the politicians we’ve met at the convention this year
were jerks.
3. Hence, probably, …
Sample size    Margin of error
2,401          ±2%
1,067          ±3%
600            ±4%
384            ±5%
96             ±10%
Survey Methods
Results are based on telephone interviews with 1,033 national adults, aged 18 and older,
conducted March 26–28, 2010. For results based on the total sample of national adults, one
can say with 95% confidence that the maximum margin of sampling error is ±4 percentage
points.
Interviews are conducted with respondents on landline telephones (for respondents with a
landline telephone) and cellular phones (for respondents who are cell phone only).
In this Gallup poll, researchers remind readers that some biasing factors are
always possible. “Question wording” may refer to the possibility of
cultural, framing, or ordering bias, and “practical difficulties” may refer to
time constraints, self-selection (the sort of people who would answer an
interviewer’s questions), difficulties guaranteeing randomness, and
idiosyncrasies among members of the population. This reminder is helpful to us all, even those of us who teach this material, since it is easy to take statistics at face value.
How does all this help critical thinkers? In order to effectively evaluate
statistical data, we should first determine the degree to which they meet the
conditions of a good sample (random, proportionate, valid, reliable). We
should believe inferences drawn from this data only to the degree they meet
these conditions.
Statistical Fallacies
In addition to all the factors that can undermine representativeness, there
are a number of ways that we misuse statistics. Three very common
mistakes made when reasoning about stats are the regression fallacy, base
rate neglect, and the gambler’s fallacy.
4. John just found out that most car accidents happen within
two miles of where a person lives. Because of this, he has
decided to move at least two miles away from his current
apartment.
5. After watching Kareem win three tennis sets in a row, the
recruiter is sure that Kareem will be a good fit for his all-
star team.
6. “I touch my front door ten times every morning, and
nothing too bad ever happens to me. Therefore, if you
want your life to go well, you should start touching your
front door ten times before you leave.”
7. After acing her first exam, Simone was shocked to discover that she only got an 85 on the second. She concludes that she must be getting dumber.
8. “I am bracing for bad luck. My life has been going much
too well lately to keep this up. Something bad will have to
even it out at some point.”
9. “Ever since I bought this mosquito repellent, I haven’t been
bitten by one mosquito. It must really work.”
10. Almost every athlete who has appeared on the cover of
Sports Illustrated experienced a significant decline in
performance just after the issue is released. Fariq decides
to turn down an offer to appear on the magazine’s cover,
fearing that his stats will fall. (Example adapted from
Thomas Gilovich, How We Know What Isn’t So (New York:
The Free Press, 1991), pp. 26–7.)
Imagine, also, that you have had five other patients with the same
symptoms this week and each of these other patients had strep throat. What
can you conclude? Given the similarities in the symptoms, it seems safe to
conclude this patient also has strep throat.
How do doctors distinguish one disease from another? Imagine that you just diagnosed this patient with strep throat, when another patient comes in with all these symptoms, minus the white or yellow spots on the back of the throat. Do you conclude that the patient does not have strep throat based on
this one dissimilarity? It can be difficult to tell because the symptom that
really sets strep throat apart from a cold or the flu is the spots on the throat.
In this case, you may want to perform a strep test, which is another
indicator of the disease.
Arguments from analogy typically have the following form:

1. Object A has features a, b, c, and z.
2. Object B has features a, b, and c.
3. Therefore, object B probably has feature z, too.

For example:
1.
1. Watches are highly organized machines with many interrelated parts that all work together
for a specific purpose, and they were designed by an intelligent person.
2. The mammalian eye is an organized machine with many interrelated parts that all work
together for a specific purpose.
3. Therefore, it is likely that the mammalian eye was also designed by an intelligent person.
2.
1. This instrument fits well in a human hand, has a sharp blade, a hilt to protect the hand, and
was designed by humans to cut things.
2. This rock fits well in a human hand, has what appears to be a blade, and what appears to be
a hilt to protect the hand.
3. Therefore, this rock was probably also designed by humans to cut things.
3.
1. My old Ford had a 4.6 liter, V8 engine, four-wheel drive, a towing package, and ran well
for many years.
2. This new Ford has a 4.6 liter, V8 engine, four-wheel drive, and a towing package.
3. Therefore, this new Ford will probably run well for many years.
Consider example 1 above, comparing watches and mammalian eyes. There are many more dissimilarities between watches and eyes than similarities. For example, watches are made of metal and glass, are designed to tell time, have hands or digital displays, have buttons or winding stems, have bands, are synthetic. Eyes have none of these features. In fact, many things are
often different in as many ways as they are similar. Consider two baseballs
(Figure 8.1). Even though they share all the same physical properties
(shape, size, weight, color, brand, thread, etc.), they may be dissimilar in
dozens of other ways:
Figure 8.1 Baseball Analogy
Consider, again, the analogy between watches and eyes: perhaps having a "purpose" or a "goal" more strongly implies an intelligent designer than having interrelated parts does. If this is right, then noting that both watches and eyes have a goal makes the argument for design stronger than noting that they have interrelated parts. In fact, having interrelated parts may be irrelevant to design.
How can we determine whether a feature of an object or event is relevant
to the feature we are interested in? There is no widely accepted answer to
this question. One approach is to ask whether the feature in question is
“what you would expect” even if you weren’t trying to build an argument
from analogy. For example, you wouldn't expect, of any particular computer, that it was owned by your brother. So that's clearly an irrelevant
feature. But you would expect complicated, interrelated parts to exhibit
purpose. So, complicated, interrelated parts are not irrelevant to the purpose
of a watch.
Of course, this is a pretty weak strategy. If you find a rock in the
wilderness with a handle-shaped end, it’s hard to know what to say. While
you wouldn’t necessarily expect to find a handle-shaped end, it is certainly
not unusual to find rocks with funny-shaped protuberances. And one such
protuberance is a handle-shaped end. Does this mean that the rock is more
or less likely to have been carved into a tool? We can’t say. Thus, this
approach isn’t very strong.
A more promising strategy is to gather independent evidence that there is
a causal relationship between certain features of an object or event and the
feature being inferred.
Consider example 3 from above:
3.
1. My old Ford had a 4.6 liter, V8 engine, four-wheel drive, a towing package, and ran well
for many years.
2. This new Ford has a 4.6 liter, V8 engine, four-wheel drive, and a towing package.
3. Therefore, this new Ford will probably run well for many years.
That a vehicle has a 4.6 liter, V8 engine may be irrelevant to whether that
vehicle will run for many years. We can imagine a wide variety of 4.6 liter,
V8s of very poor quality. It doesn’t seem plausible to find independent
evidence linking the features “4.6 liter, V8” with “will run for many years.”
But it does seem plausible to think that certain 4.6 liter, V8s, for instance
those made by Ford, could run longer than others. We can imagine
gathering evidence that the 4.6 liter, V8s made by Ford during the years
1980–1996 have an excellent track record of running for many years. From
this evidence we could draw an analogy between the old Ford and the new
Ford based on their relevant similarities, reformulating 3 as 3*:
3*.
1. My old Ford had a 4.6 liter, V8 engine, was made in 1986, and ran well for many years.
2. This new Ford has a 4.6 liter, V8 engine, and was made in 1996.
3. Therefore, this new Ford will probably run well for many years.
1.
1. Bear paw prints have five toe marks, five claw marks, and
an oblong-shaped pad mark.
2. This paw print has five toe marks, five claw marks, and an
oblong-shaped pad mark.
3. Thus, …
2.
1. My mug is red with “Starbucks” written on it, and a chip on
the handle.
2. This mug is red with “Starbucks” written on it, and has a
chip on the handle.
3. So, this is…
3.
1. The jeans I’m wearing are Gap brand, they are “classic fit,”
they were made in Indonesia, and they fit great.
2. This pair of jeans are Gap brand, “classic fit,” and were
made in Indonesia.
3. Therefore, …
4.
1. Alright, we have the same team and coach we won with
last year against the same team.
2. We’re even wearing the same uniforms.
3. It is likely that…
5.
1. At the first crime scene, the door was kicked in and a
playing card was left on the victim.
2. At this crime scene, the door was kicked in and there is a
playing card on the victim.
3. Therefore, …
6. There are several stones here. Each stone has a broad
head, sharp on one side. These stones are similar to tools
used by a tribe in another region of this area. Therefore,
these stones….
7. Everything I have read about this pickup truck tells me it is
reliable and comfortable. There is a red one at the dealer. I
want it because….
8. This bottle of wine is the same brand, grape, and vintage
as the one we had for New Year’s Eve. And that bottle was
great, so, ….
9. This plant looks just like the edible plant in this guidebook.
Therefore, ….
10. All the desserts I love have ice cream, chocolate syrup, and sprinkles. This dessert has ice cream, chocolate syrup, and sprinkles. Therefore, ….
Causal Arguments
A causal argument is an inductive argument whose premises are intended
to support a causal claim. A causal claim is a claim that expresses a cause-
and-effect relationship between two events. Some examples include:

1. Smoking causes lung cancer.
2. The storm caused the power outage.
3. Texans elected the governor.
4. Penicillin cured the infection.
Notice that not all causal claims include the word “cause” in them. As long
as a claim implies that one object or event brings about another, it is a
causal claim. In examples 3 and 4, the words “elected” and “cured” imply
that the subjects of the claims (respectively: Texans, penicillin) did
something to bring about the objects of the claims (the governor’s
appointment, the elimination of the infection).
The concepts of cause and effect are the source of much philosophical
strife. Because of this, reasoning about causes is one of the most difficult
things philosophers and scientists do. For our purposes, we will use a fairly
common-sense notion of a cause.
It is important to note that just because a claim has the word “because” in
it does not make it a causal claim. For example, none of the following
because claims are causal claims:
Positive Correlation
In this example, the data points represent people who write songs. They are
placed on the graph according to the number of songs they wrote (vertical
axis) and the number of years they lived (horizontal axis). If the data points
fall roughly along a diagonal line that extends from the bottom left of the
graph to the top right, there is a strong positive correlation between the
events—as instances of one increase, so do instances of the other. The more
tightly the points fall along this line, the stronger the correlation. Of course,
even when you discover a strong correlation, there may be outliers.
Outliers are data points that do not conform to the correlation line. If there
are too many outliers, the correlation is not strong.
In a negative correlation (Figure 8.3), the frequency of one event
decreases as the frequency of another event increases. Consider the case of
increased coffee sales and reduced allergy attacks. If we compare how
much coffee is sold each month with how many allergy attacks are reported
during those months, we might get the following graph:
Negative Correlation
In this example, the data points represent months of the year. They are
placed on the graph according to how many pounds of coffee are sold
(vertical axis) and how many allergy attacks are reported (horizontal axis).
If the data points fall roughly along a diagonal line slanting from the top left
of the graph to the bottom right, there is a strong negative correlation
between how much coffee was sold and how many allergy attacks
happened. Just as with positive correlations, the more tightly the points fall
along this line, the stronger the correlation.
Now, the question is: Does correlation indicate anything with respect to
causation, that is, does it help us answer the question about whether one of
these events caused the other? Without further evidence, it would seem not.
There is no obvious causal relationship between writing songs and long life
or drinking coffee and allergic reactions. It certainly seems strange to think
that allergic reactions might reduce coffee drinking. Nevertheless, it is possible, and a strong correlation can be useful for suggesting that a causal relationship exists somewhere to explain it. For example, maybe the effect is indirect: during bad allergy seasons, people go out for coffee less frequently.
Or, perhaps causation runs the other way, and coffee has some antihistamine
effects we were unaware of. Perhaps writing songs greatly reduces stress,
and therefore, the chances of heart disease. We would need to do more work
(namely, scientific work) to find out if any of these hypotheses is plausible.
The most that a strong correlation implies (absent any additional
experimental conditions) is that the events are not coincidentally related. If
a correlation is strong, it is unlikely that it is a function of mere chance
(though possible). When we reason about causal events, we rely on a
certain regularity in nature, attributed most often to natural laws (gravity,
inertia, etc.). That regularity helps us predict and manipulate our reality: we
avoid running near cliffs, walking in front of buses, we press the brake
pedal when we want to stop, we turn the steering wheel when we want to
avoid a deer in the road, and so on. The fact that these relationships
between our behavior and the world seem to hold regularly often leads us to
believe that two events paired in increasing or decreasing intervals under
similar conditions are related causally. But even these strong correlations do
not tell us where the causal relationship is located.
The cautionary point here is to resist believing that two events that
regularly occur together stand in any particular causal relationship to one
another. This is because two events may occur regularly together, yet not
imply any sort of causal relationship. The rising of the sun is paired
regularly with my heart’s beating at that time, waking up is often paired
with eating, certain lights turning green is often followed by moving cars,
and so on. But none of these events (that we know of) are causally related.
My heart could stop without the slightest change in the sun’s schedule; I
could wake without eating and sit still at a green light. Thus, causal
relationships are more likely in cases of strong positive or negative
correlations than simple regularities, but they aren’t guaranteed.
It is also true that the problem of induction (see
Chapter 7) raises an important problem for assuming the regularity of
nature. But the assumption that nature is consistently regular is so
fundamental to the way we reason, it is difficult to ignore. For this reason,
when we discover a strong correlation, like that stipulated in our two
examples above, we tend to believe that chance or coincidence (option (4)
in each example above) is less plausible than a causal relationship of some
sort. Therefore, a strong correlation simply indicates that we should
investigate further the apparent causal relationship between A and B.
Temporal order is the order events occur in time. An event at two o’clock
precedes an event at three o’clock. It is true that a cause cannot occur after
its effect in time. Rain today doesn’t help crops last month, and a homerun
next Saturday won’t win last week’s game. It is also true that not all causes
precede their effects; some are simultaneous with their effects. For instance,
a baseball’s hitting a window causes it to break, but the hitting and the
breaking are simultaneous. However, it is sometimes tempting to think that
because one event precedes another, the first event causes the second,
especially if those events are paired often enough.
Consider classic superstitions: walking under a ladder or letting a black
cat cross your path gives you bad luck; wearing your lucky socks will help
your team win; not forwarding those emails about love or friendship will
give you bad luck. These superstitions often arise out of our desire to
control reality. If you win a game, you might begin looking for some cause
that you can use to help you win the next game. “It just so happens that I
wore my red socks today; that might be the reason we won!” If you win the
next game wearing the same socks, you might be tempted to think the
superstition is confirmed and that wearing your red socks causes you to
play better.
But it should be clear that just because one event regularly precedes
another doesn’t mean the first event causes the second. This is a fallacious
inference known as post hoc, ergo propter hoc (after this, therefore because of this). For example, in typing this book, we regularly place
consonants just before vowels. We do it quite often; perhaps more often
than not. But even if we do, that is no indication that typing consonants is
causally related to typing vowels. Similarly, traffic signals turning red are
often followed by stopping cars. But surely no one would believe that red
lights cause cars to stop. At best, the reason people stop at red lights is that
there are laws requiring them to, and people do not want to be fined for
refusing to stop. Therefore, that event A occurs before event B does not
imply that A caused B. To determine whether A caused B, we need more
information, which we will talk about in the
next chapter.
We just noted that we should not think that simply because one event
precedes another that the first causes the second. But what about events that
seem to occur together in clusters? Many of us have learned a new word or
have begun thinking about a vacation only to suddenly begin hearing the
word or the destination very frequently. It seems an amazing coincidence
that, after learning the word “kibosh,” Jamie began hearing it practically
everywhere: on television, in a book, in a conversation at the next table in a
restaurant. Surely, learning the word doesn’t cause all these instances. And
there’s no reason to think the “universe” (or some other mysterious force) is
talking to us. In fact, there are some good explanations for why these
coincidences happen that are not mysterious at all.
Of course, some causal force must still be at work. And as it turns out,
there is. The phenomenon of encountering something repeatedly after
learning it or recognizing it for the first time is known in psychology as the
Baader-Meinhof Phenomenon or the frequency illusion. These are versions
of a tendency to ascribe meaning to an otherwise coincidental set of events,
a tendency called synchronicity. For example, if you are planning a big life-
changing move across country, you will likely start to notice references to
that place on billboards or on television or in songs. The explanation is that,
because the move is significant for you, your mind is primed to recognize
anything associated with that place in a way that it wouldn’t normally be.
Synchronicity is a causal process, but this is not the sort of cause we
typically attribute in cases like these. We are often tempted to think there is
something intentional behind them, that fate or God is trying to tell us
something, or that someone is speaking from beyond the grave.
But coincidences are a regular part of all our lives, and there is little
reason to believe that any particular causal force is at work to bring them
about. The mistake is in thinking that because some events are improbable,
then some independent causal force must be at work to bring them about.
But it is not obvious that this is the case. Consider this: Probabilities are
calculated either by counting past occurrences of events (on average, boys
have performed better in college than in high school) or by identifying the
disposition of an object to act a certain way (a fair, two-sided coin will land
heads about 50 percent of the time). These probabilities help us reason
about objects, but they do not dictate precisely how an object will act.
For example, imagine flipping a quarter fifty times and writing down the
result each time. You might discover that one segment of your results looks
like this:
HTHHTTHTHTHHHTTTTTTTTTTTTTTTTTTHHT
Notice the long string of tails in the middle. Since the probability of a coin
landing heads is around 50 percent every time you flip it, this result is
unexpected and the probability that it would happen if we flipped another
fifty times is very low. Nevertheless, there is no reason to believe there are
any causal forces at work beyond the act of flipping the coin. For any string
of flips, any particular pattern could occur, each fairly improbable when
compared with the tendency of a coin to land heads about 50 percent of the
time. The same goes for drawing letters out of a bag of Scrabble letters.
Drawing any particular long string, like this one

aoekpenldzenrbdwpdiutheqn

is highly improbable. But is it special? Not likely: it contains no words of any particular significance, so its low probability by itself is not very interesting.
Consider, on the other hand, a different string of letters drawn from a
Scrabble bag:
fourscoreandsevenyearsago
This particular string is no more or less probable than the above string (it
has the same number of letters), but there is something unique about this
string. It appears to have a purpose that the former string does not, namely
to communicate in English the first six words of the Gettysburg Address. If
we drew these letters in this order from the Scrabble bag, we should
probably suspect that something fishy has happened. Why? It is not simply
because the probability is low, but because there is something peculiar
about this low probability order—it means something in a way that few
other low probability orders could. This sort of special meaning is called
specificity. So, coincidence or low probability alone is not enough to make
a causal judgment. When low probability is combined with specificity, you
have at least a stronger reason to believe there is a particular causal force at
work.
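A quick simulation, offered here as a hypothetical illustration rather than anything from the text, makes the point vivid. The Python sketch below flips a fair coin fifty times, repeats that experiment ten thousand times, and records the longest run of identical faces in each sequence:

import random

def longest_run(flips):
    # Length of the longest streak of consecutive identical results.
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

trials = [[random.choice("HT") for _ in range(50)] for _ in range(10_000)]
runs = [longest_run(t) for t in trials]
print(sum(r >= 5 for r in runs) / len(runs))  # typically above 0.9

Runs of five or more turn up in the vast majority of sequences, even though each particular fifty-flip sequence has probability (1/2)^50. Low probability alone, without specificity, signals nothing.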
Mistaking coincidence for causation has motivated a host of new-age religious ideas that seek to explain coincidence in terms of supernatural
causal forces. We found the following paragraph from
www.crystalinks.com using a random online search for “synchronicity”:
We have all heard the expression, “There are no accidents.” This is
true. All that we experience is by design, and what we attract to our
physical world. There are no accidents just synchronicity wheels, the
wheels of time or karma, wheels within wheels, sacred geometry, the
evolution of consciousness in the alchemy of time.
1.
1. I have let go of this pen 750 times.
2. Every time, it has fallen to the floor.
3. Thus, …
2.
1. As I press the gas pedal on my truck, it accelerates.
2. As I release the gas pedal, the truck decelerates.
3. Therefore, …
3.
1. In the past, when I thought about raising my arm, it raised.
2. So, …
4.
1. On the mornings I drink coffee, I am more awake than on
the mornings I drink nothing.
2. Similarly, on the mornings I drink black tea, I am more awake than on the mornings I drink nothing.
3. But on the mornings I drink milk, I am not more awake than on the mornings I drink nothing.
4. Therefore, …
5.
1. The label says gingko biloba increases energy.
2. I have taken gingko biloba every day for two months.
3. I notice no increase in energy.
4. Thus, …
Exercises
1.
1. We interviewed 1% of Londoners, and the vast majority
approve of the prime minister’s job performance.
2. Therefore, probably all England approves.
2.
1. Both students I asked said they would rather have a
different lunch menu.
2. I agree with them.
3. Therefore, probably the whole school would agree.
3.
1. Almost everyone I know believes smoking cigarettes is
unhealthy.
2. Also, the editors of this magazine say it is unhealthy.
3. Thus, almost everyone nowadays believes smoking is
unhealthy.
4.
1. We gave our product to members of 150 fraternity houses
across the nation.
2. 85% of fraternity men said they like it and would use it
again.
3. Therefore, probably 85% of people would like our product.
5.
1. This whole grove of ponderosa pine trees has developed
disease X.
2. Probably, all ponderosas are susceptible.
1.
1. Our college has a basketball team, a sports arena, and two
head coaches, and we’re number 1 in the nation.
2. Your college has a basketball team, a sports arena, and
two head coaches.
3. Therefore, your college is probably also number 1 in the
nation.
2.
1. That guy is 6’1” tall, has brown hair, was born in
Tennessee, and has cancer.
2. I am 6’1” tall, have brown hair, and was born in Tennessee.
3. So, I probably have cancer.
3.
1. That object is round, inflatable, white, and used for
volleyball.
2. This object is round, inflatable, and striped. (weather
balloon)
3. It follows that it is probably used for volleyball.
4.
1. Last semester I took a philosophy course with Dr. Arp in
room 208 and it was super easy.
2. The philosophy class Dr. Arp is offering next semester is
also in room 208.
3. This implies that that class will be super easy, as well.
5.
1. The last book I read by that author had a male protagonist.
2. That book was over 600 pages and terribly boring.
3. Her new work also has a male protagonist and is at least
as long.
4. Therefore, it will probably be terribly boring.
1.
1. Every time I have worn this ring, my choir performances
are excellent.
2. Therefore, this ring is responsible for my excellent
performances.
(So, I’m definitely wearing this ring during the next
performance.)
2.
1. I always get nervous just before I go on stage.
2. So, nervousness causes me to perform publicly.
(In that case, I should definitely stop getting nervous, so I
won’t have to perform.)
3.
1. That girl is here at the library every time I come in.
2. She must be interested in me.
4.
1. The last three times I played golf, my knee hurt.
2. And today, while playing golf, my knee is hurting.
3. Golf must cause knee problems.
5.
1. As it turns out, people who write a will tend to live longer.
2. Hence, writing a will leads to long life.
(So, if you want to live longer, you should probably write a
will.)
6.
1. Acne increases significantly from age 10 to age 15.
2. So, aging must cause acne.
7.
1. Every time Dr. Watson teaches a class, someone fails.
2. Therefore, Dr. Watson’s teaching leads to failures.
8.
1. I see the same woman at the traffic light on my way to work
every morning.
2. Therefore, fate has determined that we should be together.
9.
1. Every time I let the dog out, he uses the bathroom.
2. So, maybe if I stop letting him out, he’ll stop using the
bathroom.
10.
1. Interestingly, religious belief seems to decline with the
number of academic degrees a person has.
2. Thus, education causes atheism and agnosticism.
Real-Life Examples
1 The formula for calculating this is complicated. It requires applying a formula known as Bayes's Theorem to the probabilities noted. For those interested, it looks like this:

P(disease | positive) = [P(positive | disease) × P(disease)] / P(positive)

Substituting, we get:

P(disease | positive) = (0.99 × 0.00000001) / 0.0100000098 ≈ 0.00000099, or about one chance in a million.
The probability of a positive result if you have the disease is 99 percent and the probability of having the disease is 1 in 100,000,000 or 0.00000001. What's the probability of a positive test
result, period? We can treat it as the frequency with which a positive result will come up in a
population of 300,000,000, which includes those who have the disease and those who don’t. The
test will catch 99 percent of disease instances in the three with the disease or 2.97 cases. In the
remaining 299,999,997 people, it will produce false positives 1 percent of the time, or
2,999,999.97. Add those together and you get 3,000,002.94 out of 300,000,000, which is a
shade over 1 percent (0.0100000098).
Bottom line: If the test says you have the disease, it is very likely that you don’t. Unless you
calibrate your risk-taking to a finer grain than about one chance in a million, the test provides no
useful information. (Thanks to Robert Bass for this example.)
2 Steven Novella, “Airborne Settles Case on False Advertising,” March 26, 2008, Science-Based
Medicine.org, http://www.sciencebasedmedicine.org/airborne-admits-false-advertising/.
9
Scientific experiments and
inference to the best explanation
We explain the basic structure of scientific reasoning and how explanatory arguments
work. We introduce the concepts of observation, hypothesis, and test implication,
explain how to distinguish control and experimental groups, and discuss standard
types of formal and informal experiments along with their strengths and weaknesses.
We also discuss an argument strategy known as inference to the best explanation that
can help us identify the most plausible explanation among competing explanations.
Confirmation:
1. If H, then I.
2. I.
3. Therefore, H.

Disconfirmation:
1. If H, then I.
2. Not I.
3. Therefore, not H.
Santa Test
Suppose our hypothesis (H) is that Santa brought the gifts, and our test implication (I) is that we will hear reindeer hoofs on the roof overnight. We hear no hoofs, so we conclude that Santa did not bring the gifts. The same disconfirmation pattern can mislead us elsewhere:
1. If (H) Tylenol relieves pain, then (I) taking Tylenol will relieve
my headache.
2. (Not I) Taking Tylenol did not relieve my headache.
3. Therefore, (not H) Tylenol does not relieve pain.
Both premises seem true and the argument is valid, and yet the conclusion
seems false. What has happened?
The problem is that reality is more complicated than our original
hypothesis (or even our seventh or fifteenth). In any experiment, there are
sometimes hidden variables, or features of reality we weren’t originally
aware of, that are relevant to our hypothesis. For instance, there may be
conditions under which Tylenol just doesn't work, whether for a specific kind of pain (there are many) or because of a person's body chemistry. This, however, doesn't mean that it doesn't relieve pain in most instances; there is just a limited range of cases where it doesn't (neurologic pain, for instance). Sometimes we can identify these cases and make our hypothesis more
precise (e.g., If H under conditions C1, C2, and C3, then I). We can also
make I more specific: “Tylenol will relieve the pain from a small cut.”
Other times we must rest content with the probability that it will relieve
pain given the vast number of times it has in the past. The problem of
hidden variables explains why even disconfirmation must be treated
inductively and why many experiments must be conducted before we are
justified in claiming that a hypothesis is confirmed or disconfirmed with
high probability.
A further problem is that we might simply have chosen the wrong test
implication. It may not be that H is false, but that H doesn’t imply I.
Consider our Santa case. If Santa has magic dust that dampens the noise
from reindeer hoofs, then Santa could have brought the gifts even though
we didn’t hear hoofs during the night.
Because of these problems, we cannot evaluate models of confirmation
and disconfirmation in deductive terms. We must treat them
probabilistically. We call those cases where Tylenol does not relieve pain
disconfirming evidence—a reason to believe it doesn’t relieve pain. But one
test is not the final word on the matter. Once we’ve run several dozen tests,
if the number of times it relieves pain significantly outnumbers the times it
doesn’t, then our total evidence is confirming: we have a reason to believe it
is highly likely that Tylenol relieves pain.
We can now state our definitions of confirmation and disconfirmation clearly, where AH stands for our auxiliary hypotheses and IC for the initial conditions of the test:

Confirmation:
1. If (H & AH & IC), then I.
2. I.
3. Therefore, probably (H & AH & IC).

Disconfirmation:
1. If (H & AH & IC), then I.
2. Not I.
3. Therefore, probably not (H & AH & IC).
Of course, including these complicating features has an important effect
on the probability of H. We cannot be sure whether our test has revealed
results about H as opposed to AH or IC. In order to rule them out and
increase the likelihood that we have really tested H, we need to devise some
experimental models that include certain types of controls. An experimental
control is a way of holding our assumptions (AH and IC) fixed while we
test H. They are not foolproof, and we often cannot control for very many
of them. But science’s powerful track record suggests that we do a fairly
good job of testing the relevant hypothesis. To see how these work, we will
look at a set of formal and informal experiments.
Formal Experiments
Because of their subject matter (and because they often receive more
research money than philosophers!), scientists have the luxury of
conducting formal experiments. Formal experiments are highly structured
experimental procedures that help us control for as many assumptions as
possible. When you think of a laboratory experiment, you are thinking of a
formal experiment. But formal experiments can include careful observation
in the field, surveys and polls, and comparing data from a set of
experiments. They are typically constructed on one of three models: the randomized study, the prospective study, and the retrospective study.

Randomized Study
In a randomized study, researchers randomly assign subjects to an experimental group (which receives the treatment) and a control group (which does not), then check the test implication. Suppose our hypothesis (H) is that the diet drug Emaci-Great causes weight loss. The test implication (I) is that subjects between the ages of twenty-five and thirty-five, who are at least 20 lbs overweight, will lose at least 10 lbs over a two-month period. The results are evaluated as follows:

Either I:
2. Subjects between the ages of twenty-five and thirty-five, who are at least 20 lbs overweight, lose at least 10 lbs over a two-month period.
3. Therefore, H is confirmed: Emaci-Great causes weight loss.

Or not I:
2. Subjects between the ages of twenty-five and thirty-five, who are at least 20 lbs overweight, do not lose at least 10 lbs over a two-month period.
3. Therefore, H is disconfirmed: Emaci-Great does not cause weight loss.
Prospective Study
Let’s say that Emaci-Great includes an ingredient that is controversial in the
researchers’ country, so the government agency in charge of such things
would not approve tests on Emaci-Great. There is another country where
the drug is legal and used regularly, but it would cost far too much to
conduct a randomized study in that country. So, for practical reasons,
researchers choose a prospective study.
Researchers begin by gathering a group of people that meet the control
conditions for weight, diet, age, and so on, and who also already use the
main ingredient in Emaci-Great. This is the experimental group. They then
match the experimental group with a group that is similar in control
conditions (weight, diet, age, etc.), but that does not use the main ingredient
in Emaci-Great. This is the control group. Researchers then watch the
participants over a specified period of time and evaluate whether there is
greater weight loss in the experimental group than in the control group.
In this type of experiment, participants obviously know whether they are
taking the ingredient, so the placebo effect cannot be controlled for.
However, researchers can let their assistants choose the groups and record
the data so that the researchers can evaluate the results without knowing
which participant was in which group. This is known as a single-blind
experiment. The researchers are blind, even though the participants are not.
In some cases, even single-blind experiments are impractical. For instance, some psychologists use prospective studies to track the effects of abortion on anxiety. In such cases, it may be difficult to identify relevant control groups under single-blind conditions.
The hypothesis is the same as in the randomized study: Emaci-Great
causes weight loss. The test implication is also the same: subjects between
the ages of twenty-five and thirty-five, and who are at least 20 lbs
overweight, will lose at least 10 lbs over a two-month period. If the
experimental group loses significantly more weight than the control group,
we have strong evidence that Emaci-Great is an effective diet drug. If the
experimental group does not lose much weight, or the results are
inconclusive, researchers either start over with a new test population, or
move on to another drug.
So, in a prospective experimental study, researchers (1) choose an
experimental group with a relevant set of control factors (A, B, C, and D)
plus the hypothesized cause, X; (2) match this group with a control group
that has the relevant set of control factors (A, B, C, and D), but not X; (3)
observe the groups over a specified time; and (4) compare the results with
respect to the relevant effect, Y.
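As a toy illustration of steps (1) through (4), the following Python sketch compares average weight loss in a hypothetical experimental and control group. All figures are invented, and a real study would also test whether the difference is statistically significant:

# Toy prospective-study comparison; every number is invented.
experimental = [12, 9, 14, 11, 8, 13, 10, 12]  # lbs lost by ingredient users
control = [4, 6, 3, 5, 7, 4, 5, 6]             # lbs lost by matched non-users

def mean(xs):
    return sum(xs) / len(xs)

print(mean(experimental) - mean(control))  # a large positive gap is (defeasible) evidence for H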
Retrospective Study
Imagine, now, that there are significant moral concerns about diet pills in
general. Dozens of people taking a variety of diet pills, including
participants in Emaci-Great studies, have developed severe stomach ulcers.
Developers of Emaci-Great, concerned that the main ingredient of their
product might be harming people, commission a study to determine the
extent to which Emaci-Great is linked to the ulcers.
Researchers are commissioned to look back (hence “retrospective”) over
the lifestyles of people who developed ulcers and try to identify something
they all have in common. In this type of experiment, the idea is to start from
an effect instead of a cause, and, by controlling for a variety of factors,
accurately identify the cause.
In this case, researchers choose an experimental group of subjects who
both used to take diet pills and developed ulcers, controlling for various
other factors, especially type of diet pill (along with weight, diet, age, etc.).
They then match this group with a control group with similar control factors
(type of diet pill, weight, diet, age, etc.) but without the ulcers. They then
attempt to identify something present in the ulcer group not present in the
non-ulcer group.
In this case, researchers begin with a test implication: many people who
take diet pills develop ulcers. And instead of using their imaginations to
develop a hypothesis, researchers begin looking for differences between the
experimental group and the control group. If they discover an additional,
non-diet-pill-related feature in the ulcer group (for instance, the
experimental group also took large doses of ibuprofen), then diet pills,
including Emaci-Great, would no longer be a moral concern. On the other
hand, if researchers discovered a higher ulcer rate in participants who took
diet pills with the main ingredient in Emaci-Great, then developers of
Emaci-Great may need to pursue a different product line.
So, in a retrospective experimental study, researchers (1) choose an
experimental group with a relevant set of control factors (A, B, C, and D)
plus an effect, Y, that needs an explanation; (2) match this group with a
control group that has the relevant control factors (A, B, C, and D), but not
Y; and (3) look for something, X, that appears in the experimental group but
not the control group that might explain Y.
Informal Experiments
When formal experiments are not practical for identifying causes and
evaluating causal arguments (because non-scientists rarely receive funding
for experiments), there are a handful of informal experiments at our
disposal. Philosopher John Stuart Mill (1806–1873) discovered five simple informal tests for causes. Because of their widespread influence, they have become known as Mill's Methods:

1. The Method of Agreement
2. The Method of Difference
3. The Joint Method of Agreement and Difference
4. The Method of Concomitant Variation
5. The Method of Residues

1. The Method of Agreement
One way to explain some event, E, is to identify a set of cases in which E occurs and look for a single feature those cases share; if there is exactly one, that feature is likely the cause. Consider Jan, who begins sneezing at each of four houses:
1. Brad’s house: Jan ate hot dogs, sat on shag carpet, pet Brad’s cat,
then began sneezing.
2. Dean’s house: Jan ate hamburgers, sat on mohair couch, pet
Dean’s cat, then began sneezing.
3. Rachel’s house: Jan pet Rachel’s cat, ate quiche, sat on a wooden
chair, then began sneezing.
4. Brit’s house: Jan ate a soufflé, pet Brit’s cat, then began sneezing.
5. All cases of sneezing were preceded by Jan’s petting a cat.
6. Therefore, Jan is probably allergic to cats.
In this case, there is only one feature common to all the cases, that is, there
is only one event on which all cases agree (hence, the method of
“agreement”), and that is Jan’s petting the cat.
The method of agreement has the following general form, though the number of cases may vary:

1. Case 1: factors a, b, and c precede E.
2. Case 2: factors a, d, and e precede E.
3. Case 3: factors a, f, and g precede E.
4. Factor a is the only factor common to all cases of E.
5. Therefore, a is probably the cause of E.
The method of agreement is limited to cases where there is only one feature
that agrees among all the cases. If more than one feature agrees, you will
need to use method number 3, The Joint Method of Agreement and
Difference, to identify the cause.
2. The Method of Difference
Another way to explain some event, E, is to identify a set of conditions or
events that preceded E and a similar set of events that did not precede E,
and if there is only one feature present in the case where E occurs that is not
present in the case where it doesn’t, that feature is likely to be the cause.
The Method of Difference is also similar to the retrospective study; we
begin with an implication and look back over cases to identify a cause.
Consider Jan’s sneezing, again. Imagine Brad had set up the experiment
in the following way:
1. Brad’s house on Monday: Jan ate hot dogs, sat on the shag carpet,
pet Brad’s cat, then began sneezing.
2. Brad’s house on Friday: Jan ate hot dogs, sat on the shag carpet,
but did not begin sneezing.
3. Jan began sneezing only after she pet the cat.
4. Therefore, it is likely that Jan is allergic to cats.
In this case, there is only one feature that differs between the days Jan visited Brad, that is, there is only one feature on which the cases differ (hence, the method of “difference”), and that is her petting his cat. Therefore, petting the cat is probably the cause.
The method of difference has the following general form, though, again, the number of cases may vary:

1. Case 1: factors a, b, and c precede E.
2. Case 2: factors b and c are present, a is absent, and E does not occur.
3. Therefore, a is probably the cause of E.

3. The Joint Method of Agreement and Difference
When more than one feature is common to the cases where E occurs, we can combine the two methods: look for the one factor that is present whenever E occurs and absent whenever E does not occur. Consider Jan's sneezing once more:
1. Brad’s house: Jan ate hot dogs, sat on shag carpet, pet Brad’s cat,
then began sneezing.
2. Dean’s house: Jan ate hot dogs, sat on mohair couch, pet Dean’s
cat, then began sneezing.
3. Rachel’s house: Jan ate quiche, sat on a wooden chair, and didn’t
sneeze.
4. Brit’s house: Jan ate hot dogs, sat on shag carpet, and didn’t
sneeze.
5. All cases of sneezing were preceded by Jan’s petting a cat.
6. In cases where Jan didn’t pet a cat, Jan didn’t sneeze.
7. Therefore, Jan is probably allergic to cats.
This example is slightly trickier. Notice that we need both premises 5 and 6
in order to conclude that petting the cat caused the sneezing. This is because
eating hot dogs was also present in both cases of sneezing. So, to conclude
the relevant feature is the cat and not the hot dog, we need a premise that
eliminates hot dogs. Premise 6 does this because it was also present in
premise 4, but did not precede sneezing. Similarly, sitting on shag carpet
preceded sneezing in one case, but not in another. The only event that
occurred when sneezing was present and did not occur when sneezing was
absent was Jan’s petting the cat.
The Joint Method of Agreement and Difference has the following general form, though the number of cases may vary:

1. Case 1: factors a, b, and c precede E.
2. Case 2: factors a, d, and e precede E.
3. Case 3: factors b and d are present, a is absent, and E does not occur.
4. Factor a is present in every case of E and absent whenever E is absent.
5. Therefore, a is probably the cause of E.

4. The Method of Concomitant Variation
Sometimes a suspected cause cannot be removed entirely; it can only be increased or decreased. If the frequency or intensity of E rises and falls as the frequency or intensity of one factor rises and falls, while changes in the other factors make no difference to E, that varying factor is probably the cause. Imagine tracking Jan's sneezing over several days:
1. Brad’s house on Monday: Jan eats one hot dog, sits on the shag
carpet for fifteen minutes, pets the cat twice, and sneezes four
times.
2. Brad’s house on Tuesday: Jan eats four hot dogs, sits on the shag
carpet for thirty minutes, pets the cat twice, and sneezes four
times.
3. Brad’s house on Wednesday: Jan eats one hot dog, sits on the
shag carpet for twenty minutes, pets the cat four times, and
sneezes ten times.
4. Brad’s house on Thursday: Jan eats four hot dogs, sits on the shag
carpet for thirty minutes, doesn’t pet the cat, and doesn’t sneeze.
5. As the frequency of eating hot dogs or sitting on the carpet
changes, the frequency of E remains constant.
6. As the frequency of petting the cat increases, the frequency of the
sneezes increases.
7. As the frequency of petting the cat decreases, the frequency of the
sneezes decreases.
8. Therefore, the changes in the frequency of sneezing are caused by the changes in the frequency of petting the cat.

This example is more complicated, but if you read through it closely, you will see that, even though the frequency of eating hot dogs and sitting on the carpet varies, the sneezing does not track those changes.
However, the frequency of the sneezing goes up or down as the frequency
of petting the cat goes up or down. This allows us to identify the cat as the
cause, even though both of the other events were present in all cases.
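For those who like to see the logic mechanically, Mill's methods can be modeled as simple set operations over the features present in each case. The Python sketch below is a loose encoding of the Jan example, not anything from the text; it applies the joint method by intersecting the features of the sneezing cases and then subtracting any feature that also appears in a non-sneezing case:

# The joint method of agreement and difference as set operations.
sneezing_cases = [
    {"hot dogs", "shag carpet", "cat"},   # Brad's house
    {"hot dogs", "mohair couch", "cat"},  # Dean's house
]
non_sneezing_cases = [
    {"quiche", "wooden chair"},           # Rachel's house
    {"hot dogs", "shag carpet"},          # Brit's house
]

# Agreement: features present in every case where the effect occurred.
common = set.intersection(*sneezing_cases)
# Difference: eliminate features that also occur without the effect.
candidates = common - set.union(*non_sneezing_cases)
print(candidates)  # {'cat'}: the likely cause, just as the joint method suggests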
1. You get sick after eating lobster for the first time and
conclude that it probably was the lobster.
2. “Why in the heck have there been dead worms on the front porch each of the past three Monday mornings when I've taken the trash out for collection?” you think to yourself as you step over a dead worm to place the trash in the bin
located in your driveway. Then, you remember that it had
rained heavily for the past three Sunday nights, which you
figure brought out many worms, that subsequently died on
the porch because they could not get back to the soil.
3. Susan has to weigh her cat at the vet, but the cat won’t sit
still on the scale by herself. So, the nurse records Susan’s
weight first, which is 120 pounds. Then she has Susan and
her cat step on the scale, notes that the scale now reads
130 pounds, and records the cat’s weight as 10 pounds.
Which of Mill’s methods did the nurse utilize?
4. Al, Ben, and Courtney go out to eat for dinner and have
the following:
Al and Courtney both get sick and vomit all night long. Why do
you think so? And which of Mill’s methods did you use to arrive at
the conclusion?
5. Zoe sneezed every time she went into the basement. Her
parents tried to figure out what was causing it by
vacuuming, dusting, and scrubbing the floors, in various
combinations, and having her go in the basement
afterward. Zoe still sneezed, no matter if the basement
was: vacuumed, but not dusted or scrubbed; dusted, but
not vacuumed or scrubbed; scrubbed but not vacuumed or
dusted; vacuumed and dusted, but not scrubbed;
vacuumed and scrubbed, but not dusted; dusted and
scrubbed, but not vacuumed; vacuumed, dusted, and
scrubbed. One thing that stayed the same throughout the
vacuuming, dusting, and scrubbing events, however, was
that the fabric softener sheets (which gave off a strong lilac
smell) were present every time Zoe went into the
basement. Zoe’s parents then removed the fabric softener
sheets and sent Zoe into the basement. Finally, she
stopped sneezing! They put the fabric softener sheets
back, and guess what happened? She sneezed again.
They have since stopped using the fabric softener sheets
and Zoe no longer sneezes when she goes into the
basement. So, from this whole ordeal, Zoe and her parents
reasoned that the fabric softener sheets were what caused
the sneezing.
Two competing hypotheses can imply, and so be confirmed by, the very same test implication:

1. If H1, then I.
2. I.
3. Therefore, probably H1.

1. If H2, then I.
2. I.
3. Therefore, probably H2.
A classic case from the history of science illustrates this problem well. In
the 1700s, experiments on the nature of heat led to a wide array of
explanations. One powerful explanation was the Caloric Theory of Heat,
which states that heat is an invisible, liquid-like substance that moves in and
out of objects much like water, moving from areas of high density to low
density, following a path of least resistance. The Caloric Theory was
incredibly useful, allowing scientists to explain why air expands when heated, why warm drinks cool when left on a cool table in cool air, and how heat radiates; from the theory we can also deduce almost all of our contemporary gas laws.
A competing powerful explanation was the Kinetic Theory of Heat (or
“Kinetic-Molecular Theory”), according to which solids and gases are
composed of tiny molecules or atoms in motion colliding with one another.
The faster the molecules collide with one another, the more energy that is
expended. Heat is simply the expended energy of molecular motion. This
theory was also incredibly useful in explaining the phenomena the Caloric
Theory explains.
So, which is the better explanation? For years, researchers didn’t know.
Eventually, the debate was settled in the Kinetic Theory’s favor, but until
then, researchers were stuck evaluating the virtues of the theories
themselves.
Many were skeptical of the Kinetic Theory because it involved
introducing atoms or molecules as scientific objects. Since we cannot see
molecules and they do not intuitively act the way we experience heat acting
(flowing from one object into another, radiating from objects, etc.), the
Kinetic Theory requires a big change in our previous beliefs about the
nature of reality.
On the other side, the material “caloric” was no more directly observable
than molecules. And the view must be combined with a few extra physical
laws in order to explain the motion of some gases and the speed of sound.
Nevertheless, these additions made the theory quite precise, and it was used to make predictions even after the Kinetic Theory made it obsolete.
Now we find ourselves with an interesting philosophical puzzle. We have
two inductive arguments, both apparently strong and both consistent with
all available evidence. In cases like this, we say that the theories are
underdetermined by the data, that is, the data we have is not sufficient for
choosing one theory over the other. To resolve this problem of
underdetermination, rather than looking for additional evidence, we can
turn to some of the features of the explanations themselves.
Explanatory Virtues
The actual number of explanatory virtues is unsettled, but there are six that
philosophers widely agree on:
1. Independent Testability
2. Simplicity
3. Conservatism
4. Fecundity (Fruitfulness)
5. Explanatory Scope
6. Explanatory Depth
Independent Testability
As we have seen, a hypothesis formulated to explain an observation does
not become plausible simply because we were able to think of it. There
must be some implication of the hypothesis that will allow us to confirm or
disconfirm that hypothesis. But we can’t choose something that’s already
built into the hypothesis. Imagine trying to test the hypothesis that seatbelts
save lives. One way to test this would be to see how many people die in
crashes who were wearing seatbelts. But this number alone wouldn’t do it.
It is dependent on the hypothesis you’re trying to prove. From the fact that,
say, very few people who wear seatbelts die in car crashes, you cannot
conclude anything about the effectiveness of seatbelts. It could be that cars
are just really, really safe and seatbelts don’t really play a causal role. To
know whether seatbelts save lives, you would need an independent test. For
example, you would need to compare the number of people who wear
seatbelts and die in car crashes with the number of people who don’t wear
seatbelts and die in crashes. This hypothesis can be independently tested.
Hypotheses that famously cannot be independently tested are things like
“only empirical claims are trustworthy” and “some cancers are cured by
miracle.” A dependent test tells you what you should expect if your hypothesis is true (e.g., crash deaths with seatbelts are X percent of non-deaths with seatbelts), but it doesn't tell you whether your hypothesis is true. An independent test, by contrast, can provide genuine evidence for or against your hypothesis.
Simplicity
Simplicity is a very old theoretical virtue that was expressed famously by
William of Ockham (1288–1348) in the phrase, “Plurality must never be
posited without necessity” (from his commentary on the Sentences of Peter Lombard). Ockham took certain arguments to establish conclusively that something supernatural, namely the Christian God, was responsible for the creation of the universe. He offered this phrase in response to those who asked how he knew there was only one supernatural being and not two or dozens.
is good enough, it is more likely to be true than two or a dozen; the others
are superfluous. It is a principle of economy, and it is as widely accepted
and used today as it was in the Middle Ages. The fewer mechanisms or
laws a hypothesis needs in order to explain some observation, the simpler
that hypothesis is.
The idea is that, if two hypotheses explain the same observation, but one
invokes two laws while the other only invokes one, the hypothesis with
fewer laws is more likely to be true. Simplicity motivated much of the
Newtonian revolution in physics. Newton’s theory could explain most
observed motion with only three laws compared to the dozens of laws and
conditions required in earlier theories.
Conservatism
A theoretical virtue that keeps investigation stable is conservatism. An
explanation is conservative if accepting it requires that we change very little
about our previous beliefs. Often, new and exciting theories will challenge
some of our previous beliefs. This virtue tells us that the fewer beliefs we
have to change, the better. In their famous The Web of Belief (1978),
philosophers W. V. Quine and J. S. Ullian give an excellent example of how
conservatism works:
There could be … a case when our friend the amateur magician tells
us what card we have drawn. How did he do it? Perhaps by luck, one
chance in fifty-two; but this conflicts with our reasonable belief, if all
unstated, that he would not have volunteered a performance that
depended on that kind of luck. Perhaps the cards were marked; but
this conflicts with our belief that he had no access to them, they being
ours. Perhaps he peeked or pushed, with help of a sleight-of-hand; but
this conflicts with our belief in our perceptiveness. Perhaps he
resorted to telepathy or clairvoyance; but this would wreak havoc
with our whole web of belief.1
Notice that, in this case, we are forced to change at least one of our previous
beliefs. But which one? The least significant is the most plausible weak
spot. Quine and Ullian conclude, “The counsel of conservatism is the
sleight-of-hand.”
Fecundity (Fruitfulness)
An explanation is fecund, or fruitful, if it provides opportunities for new
research. Science and philosophy make progress by taking newly successful
explanations and testing them against new implications and applying them
in new circumstances. If an explanation limits how much further we can investigate the hypothesis, it is to that extent less preferable than an explanation that does not limit investigation. A classic example of a hypothesis thought to limit our
research capabilities comes from the field of Philosophy of Mind.
“Substance dualism” is the view that mental states (beliefs, desires,
reasoning) are products of a non-physical substance called a “mind” or
“soul.” “Materialism” is the view that mental states are simply products of
physical human brains. Many have argued that materialism is much more
fecund than substance dualism. Since brains are subject to neurological
research and souls, for example, are not subject to any further research,
many researchers conclude that dualism’s explanation of mental states is
“impotent” compared with materialism’s. If this is correct, materialism is a
better explanation than dualism because it has the virtue of fecundity,
whereas dualism does not.
Explanatory Scope
An explanation’s explanatory scope, also called its generality, is the
number of observations it can explain. The more observations a hypothesis
can explain, the broader its explanatory scope. The hypothesis that metal
conducts electricity to explain our observation that a small bulb lights when
connected to a battery by a metal wire, also explains why lightning is
attracted to lightning rods, why electricity travels through natural water that
is laden with metallic minerals (but not purified water), and why the
temperature of wire rises when connected to a power source. The
explanatory scope of this hypothesis is limited, however. It cannot explain
why smaller wires become warmer than larger wires when attached to the
same current. It also cannot explain why some materials do not conduct
electricity, like glass and purified water. Nevertheless, if one hypothesis has
more explanatory scope than another, the one with the broader scope is a
better explanation.
Explanatory Depth
An explanation’s explanatory depth is the amount of detail it can offer about
the observations it explains. The more detailed the explanation, the richer
its explanatory depth. For example, pre-Darwinian evolutionary biologist Jean-Baptiste Lamarck hypothesized that variation among biological species
(a giraffe’s long neck, a zebra’s stripes) could be explained in terms of two
mechanisms operating on organisms through events that happened to a
particular generation of organisms while they were alive. This generation
then passed this new feature on to the next generation. So, a giraffe’s long
neck can be explained by the fact that each generation of giraffe had to
reach higher and higher to eat leaves off certain trees. Each generation
stretched its neck slightly and then passed on this stretched feature to the
next generation. Unfortunately, except for suggesting some principles of
alchemy, Lamarck offered few details of how this could occur.
Charles Darwin, on the other hand, argued that each generation passed on
some variation to its offspring, but these traits were not a result of any
particular events that happened to that generation. Instead, he suggested that
the variations were random and offered a single mechanism, natural
selection, to explain how these variations were preserved or eliminated
from generation to generation. The introduction of a single, detailed
explanatory mechanism gave Darwin’s theory much more explanatory
depth than Lamarck’s.
Exercises
C. Set up your own Mill’s Method to test the next five causal
claims.
Real-Life Examples
1. A Murder Mystery
In their famous book Web of Belief (1978: 17), philosophers W. V. O.
Quine (1908–2000) and J. S. Ullian (1930- ) offer the following example
of evidence in need of explanation:
“Let Abbott, Babbitt, and Cabot be suspects in a murder case. Abbott has an alibi, in the
register of a respectable hotel in Albany. Babbitt also has an alibi, for his brother-in-law
testified that Babbitt was visiting him in Brooklyn at the time. Cabot pleads alibi, too,
claiming to have been watching a ski meet in the Catskills, but we only have his word for
that. So we believe.
But presently Cabot documents his alibi—he had the good luck to have been caught by
television in the sidelines at the ski meet. A new belief is thus thrust upon us:
2. Experimental Models
Read the following two excerpts and answer the questions that follow.
• In the 1700s, scurvy plagued many British seamen, and various cures
were tried, with unclear results, from vinegar, to sulphuric acid, to sea
water, nutmeg, cider, and citrus fruit. On the 1747 voyage of the
warship Salisbury, an exasperated naval surgeon named James Lind
decided to find a cure that worked:
Lind chose a dozen sailors out of the three dozen that were then
suffering from scurvy. To make his test as fair as he could, he tried to
pick men whose illness seemed to be at about the same stage. Then
he divided them into six pairs and gave each pair a different
treatment. The pair being given oranges and lemons made a good
recovery; those taking cider, acid or brine did not fare so well. It was
not a perfect randomized clinical trial by today’s standards, but it did
the job. Scurvy, we know now, is caused by lack of vitamin C, so
oranges and lemons are a sensible treatment.
Let us take out of the Hospitals, out of the Camps, or from elsewhere,
200 or 500 poor People, that have Fevers, Pleurisies, etc. Let us
divide them in halfes, let us cast lots, that one half of them may fall to
my share, and the other to yours; I will cure them without
bloodletting … we shall see how many Funerals both of us shall
have.
Passages excerpted from Tim Harford’s Adapt: Why Success Always Starts
with Failure (Farrar, Straus and Giroux, 2011), pp. 122 and 121,
respectively.
1 W. V. Quine and J. S. Ullian, The Web of Belief (New York: McGraw-Hill, 1978), p. 67.
10
Informal fallacies
Understanding the variety of ways arguments can go wrong helps us construct better
arguments and respond more effectively to the arguments we encounter. Here, we
distinguish informal fallacies from formal fallacies and then explain eighteen informal
fallacies.
1. You win the lottery (antecedent) only if you play the lottery
(consequent).
2. You play the lottery (affirming the consequent).
3. Hence, you win the lottery (concluding to the antecedent).
Not necessarily! This conclusion does not follow; just try playing
the lottery and you’ll see how much money you waste not
winning the lottery!
We also saw that generalizing from a sample that is too small does not
support the conclusion strongly enough for a good inductive argument—the
generalization is hasty:
Formal fallacies are mistakes in the form of an argument whereby one of the rules associated
with that argument form has been violated, and a conclusion has been drawn inappropriately.
Another way to say this is that, in formal fallacies, a conclusion does not follow from a
premise or premises because the argument’s form or structure is wrong. The argument’s
content (whether its claims are true or false, vague, etc.) is irrelevant.
Informal fallacies are mistakes in the content of the claims of an argument—that is, mistakes
in either the meanings of the terms involved (e.g., ambiguity, vagueness, presumption) or the
relevance of the premises to the conclusion—and a conclusion has been drawn
inappropriately. Another way to say this is that, in informal fallacies, a conclusion does not
follow from a premise or premises because there is a problem with the argument’s content,
not its structure or form.
Formal fallacies can be detected regardless of what terms are substituted for
the variables. For instance, no matter what terms are substituted for the
variables in a case of affirming the consequent—a distortion of the valid
form called modus ponens—the form is fallacious. Consider these formally
fallacious arguments:
1. If P, then Q.
2. Q.
3. Therefore, P.

1. If it's a cat, then it's a mammal.
2. It's a mammal.
3. Therefore, it's a cat.

1. If it's icy, the mail is late.
2. The mail is late.
3. Therefore, it's icy.
Not necessarily! These conclusions do not follow. You can see this clearly when the Ps and
Qs are filled in. It’s a mammal; therefore, it’s a cat? No way! The mail is late; so, it’s icy. Not
necessarily; the mail truck could have a flat, or the regular mail person could have been
replaced by a slower substitute.
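You can verify that affirming the consequent is invalid by brute force over truth values. This short Python sketch, added here as an illustration, checks every assignment to P and Q and prints any row where both premises are true and the conclusion false, which is all it takes to show a form invalid:

from itertools import product

# A form is invalid if some row makes every premise true and the conclusion false.
for p, q in product([True, False], repeat=2):
    premise_1 = (not p) or q   # "If P, then Q" as a material conditional
    premise_2 = q              # affirming the consequent
    conclusion = p
    if premise_1 and premise_2 and not conclusion:
        print(f"Counterexample: P={p}, Q={q}")  # P=False, Q=True

By contrast, running the same check on modus ponens (with P as the second premise and Q as the conclusion) prints nothing: no counterexample exists.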
1. A.
2. B.
3. C.
4. Therefore, D.
Clearly, the conclusion doesn’t follow deductively from the premises. The
form is invalid. But it still might be a good argument if the argument is
intended to be inductive. With inductive arguments, you can’t tell one way
or the other if the conclusion follows strongly from the premises by looking
only at the form of the argument. But once you look at the content, you can
evaluate the argument. This argument actually might be:
1. Drug X lowered cholesterol in 1,000 studies in the UK.
2. Drug X lowered cholesterol in 1,000 studies in Germany.
3. Drug X lowered cholesterol in 1,000 studies in the United States.
4. Therefore, Drug X will likely lower your cholesterol if you take it.
The conclusion would seem to follow with a high degree of probability and,
if the premises were true, this would be a cogent argument (recall from
Chapter 2 that a cogent argument is a good inductive argument which is strong and has all true premises). Now consider this argument:

1. Drug X lowered cholesterol in one study conducted in Europe.
2. Therefore, Drug X will likely lower your cholesterol if you take it.
Remember from
Chapter 8 that the argument above is a generalization because it draws an
inference from a sample of a population to the whole population: the sample is the one study performed in Europe; the population is anyone taking
the drug. In this case, the generalization is hasty, because the sample size is
too small. There would need to be hundreds of trials, if not thousands,
before we could draw a qualified conclusion that drug X lowers cholesterol.
But, we could not have known whether it was fallacious by merely looking
at the argument’s form.
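A standard statistical rule of thumb, which we add here as an approximation rather than something from the chapter, shows why small samples mislead: for a simple random sample of size n, the 95 percent margin of error on an estimated proportion is roughly 1 divided by the square root of n. A short Python sketch:

from math import sqrt

# Rough 95% margin of error for a proportion from a simple random sample of size n.
for n in (10, 100, 1_000, 10_000):
    print(n, round(1 / sqrt(n), 3))
# n=10 gives about 0.316 (nearly useless); n=10,000 gives about 0.01 (quite precise)

This is why a single study, or a couple of interviewed students, cannot support a generalization about a whole population.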
Remember, just because an argument has a false premise, or false premises, does not mean it is fallacious. Consider the following inductive argument:

1. Most Republicans are liberals.
2. Jones is a Republican.
3. Therefore, Jones is probably a liberal.
Since the conclusion receives strong support from the premises (“most”),
the argument is not fallacious. The premises are relevant to the conclusion,
and they do not mislead you about what the terms mean. You might not
know what a “Republican” or a “liberal” is, but there is nothing misleading
about the words or the structure of the claims—they have clear meanings.
However, it is not a good argument, since the first premise is false: It turns
out that most Republicans are not liberal.
Informal fallacies can happen for a lot of reasons, so they are not as easy
to detect as formal fallacies. With formal fallacies, again, all you need to
learn are a set of rules about what makes a form correct; then, if an
argument does not follow those rules, the arguer has committed a formal
fallacy.
However, there are numerous ways that arguments can go wrong
informally. We can’t learn them all, but there are a handful that pop up on a
regular basis. If we get a good grasp of these, we will be less likely to
commit them and less likely to fall for them. We will be in a much better
position to reason well.
In this chapter, we will spend time looking at some very common
fallacies and some of their variations. As a refresher, always start evaluating
an argument by asking:
• “I had a lousy steak at Floyd’s Diner. I’m never going back again.”
Even good chefs have bad nights. One bad meal does not make a
bad restaurant.
• “Look, there’s no way I’m going to the West Side of town at night.
Remember last time that your phone was stolen. And I value my
phone.”
Unless there are other reasons for thinking the West Side has a
crime problem, there is no reason to suspect that someone is
any more likely to have something stolen there than anywhere
else.
We don’t know enough about this study. But even if it were large
and representative of its target population, it is only one study.
Many scientific claims have been retracted after subsequent
studies overturned early findings.
• “I noticed that when it’s warm outside, there are more criminal acts in
the city. Therefore, the warm weather causes crime!”
• “Senator Edwards has cheated on his wife. So, I don’t believe a word
that comes out of his mouth.”
• “That doctor is an arrogant man. There is no way he knows what he is
talking about.”
• “Dr. Wilson believes all sorts of crazy things. So, all of his arguments
must be taken with a grain of salt.”
Mr. Chauvinist: “The evidence just does not support the claim that women are paid less than men for the same jobs.”
Ms. Feminist: “As a man, Mr. Chauvinist cannot possibly know what he is talking about.”
• “It is unlikely that Senator Wilkins can really help the poor because
he comes from an Ivy League, oil family.”
• “Dr. Perkins cannot possibly know anything about the origins of the
universe since he is a dyed-in-the-wool atheist.”
• “Mr. Obama’s complaint about the rising price of arugula shows just
how disconnected he is from his working-class constituency. His
high-brow, idealist, Ivy-League values are so far from blue-collar
struggles that it is implausible to think he will do anything to help
low-income families.”
• Officer: “Did you know you were going 20 miles over the speed limit?”
Driver: “Yes, but officer, you had to go at least that fast to catch me. So, since you would be a hypocrite to give me a ticket, can I assume I'm off the hook?”
• Counselor: “I had an abortion and I have regretted it. I don't think you should do it.”
Appeal to Snobbery/Vanity
Someone who appeals to a group of people to justify a claim need not
appeal to all people or even a very large group. She may appeal only to a
select group. It might be a group you want to be a part of or a group you
don’t want to be a part of. For instance, if an advertisement appeals to
“luxury” or “elegance” or something “fine,” it is trying to convince you that
you need to be a part of a special group, that you are elite or a VIP. The
arguer is appealing to your vanity to get you to do something or believe
something. For example, “All university professors believe that progressive
policies are better for society.” This claim implies that you should believe
that progressive policies are better for society, since an envied class
(intelligent, informed, dapper university professors) believes they are (well,
maybe not because they’re “dapper”).
Similarly, if an arguer says something like, “Only an amateur beer-
drinker would drink Pabst Blue Ribbon” or “No one of any intelligence
would think that gay marriage is morally permissible,” he is appealing to
your sense of superiority over amateur beer-drinkers or unintelligent people.
These arguments all commit the fallacy of appeal to snobbery or
vanity. If an arguer associates a belief or action with an attractive or
unattractive group in order to persuade you to accept or reject the belief or
perform or not perform the action, the arguer has committed an appeal to
snobbery/vanity. A good argument does not depend on whether a group is
well regarded or ill regarded, whether you feel superior to that group, or
whether something strokes your ego. None of these are good reasons to
accept or reject a belief. Be careful that you are not wooed by these tactics
—some people will tell you anything you want to hear to get you to join
their bandwagon.
Now, this fallacy is easy to mistake for the ad hominem circumstantial fallacy. Remember, that fallacy associates a person (the arguer) with an
unsavory character or set of circumstances or group in order to convince
you to reject his or her claims. The appeal to snobbery/vanity associates an
idea or action with a savory or unsavory group in order to convince you to
accept or reject it or its claims.
Some textbooks include “appeals to celebrity” in this category. The idea
is that, if a celebrity tries to get you to believe something—for example, if
George Clooney tries to convince you that Global Warming is true—you are
more likely to believe it. We take it that an appeal to celebrity is not an
appeal to snobbery or vanity, but an appeal to inappropriate authority
(we’ll talk about this soon). This is because appeals to celebrity are
typically about individual celebrities, and not the category of celebrities.
This is a minor point, though, and secondary to your recognizing that
appealing to someone’s fame or celebrity as support for a belief is
fallacious, regardless of how you categorize that fallacy.
Now consider some examples of two closely related fallacies, the appeal to force and the appeal to pity, which threaten or plead rather than flatter:
• “Your answer on the exam was wrong. Why? I’m the professor,
you’re the student, and I’m the one handing out grades here.”
• “I’d hate to see your career come to an end because you showed the
President of sales what the Vice President of sales is up to; so, adjust
the numbers.”
• “I will let you go. But first I want you to say, ‘I love crepes.’ ” [In
other words: If you don’t say, “I love crepes,” I will break your arm.
From the film, Talladega Nights.]
• “You’ll support what we’re saying as being true; you wouldn’t want
us to lose our land, would you?”
• Student: Please re-think my grade.
Professor: But your response did not answer the question.
Student: You have to change my grade. My mom will kill me if I
flunk English.
• “Don’t fire Jim, even if he is incompetent. He has a huge family to
provide for.”
It still could be the case that Joe committed the crime, but that he was good
at getting rid of any evidence, or others were lousy at exposing or finding
the evidence. We’ll qualify this kind of reasoning in a minute, though.
• You don’t have any evidence he isn’t cheating you. But you’d better
start looking.
• Student: Ronald Reagan was a socialist.
Professor: How did you arrive at that conclusion?
Student: Show me the evidence that he wasn’t one.
• I’m not trying that new restaurant. I don’t have any reason to think I
will like it.
A. Short answer.
What supports the fact that we can trust Sally to be honest? Sally’s honesty.
And what does Sally’s honesty demonstrate? That we can trust her to be
honest. Round and round and round it goes, forming (metaphorically) a
circle.
Now, you might wonder: why in the world would anyone construct such
a foolish argument? It’s actually much easier to commit this fallacy than
you might think. Consider a subtler version of this fallacy. Imagine
someone attempts to convince you that God exists by giving you the
following argument:

1. The Bible says that God exists.
2. God wrote the Bible.
3. Therefore, God exists.
The arguer is attempting to prove that God exists. But if God wrote the
Bible, then God exists. So, God’s existence is implied in the second
premise, the very claim the arguer is trying to support in the conclusion.
Straw Person
Imagine that two boxers enter a ring to fight and just before the fight
begins, one of the boxers pulls out a dummy stuffed with straw that’s
dressed just like his opponent. The boxer pummels the dummy to pieces,
and then struts around as if he were victorious and leaves. Clearly no one
would accept this as a legitimate victory.
Now imagine that two politicians are debating in front of a large crowd.
One politician puts forward her argument X. The other politician responds
by subtly switching X to a much-easier-to-reject X* (which is a superficial
caricature of X) and proceeds to offer a compelling argument against X*.
Everyone in the crowd cheers and agrees that X* is a bad argument and
should be rejected. Has the second politician given good reasons to doubt
X? Surely not. He has simply committed a fallacy known as the straw
person.
A straw person fallacy is an argument in which an arguer responds to a
different, superficially similar argument (X*) than the original one
presented (X), though he treats it (X*) as the one presented (X). The
superficially similar—and hence irrelevant—argument is then shown to be
a bad one in some way (one or more of its premises are shown to be false, or a fallacy is pointed out). It often looks something like this:

1. Person A offers an argument, X.
2. Person B responds to X*, a superficially similar but much weaker version of X, treating X* as if it were X.
3. B shows that X* is a bad argument.
4. B concludes that X has been refuted.
Rajesh has turned Nikki’s argument into something very different from what it was originally—something easy to object to.
If the alternate argument is subtle enough, it is easy to be deceived by a
straw person. Politicians often use straw person arguments to rally voters
against their opponents. For instance, during the American invasion of Iraq,
members of the Republican Party in the United States disagreed with
members of the Democratic Party in the United States about the duration of
the American occupation. Democrats accused Republicans of supporting an
unqualified “stay-the-course” campaign, which would lead to great
financial and political losses. Republicans accused Democrats of supporting
a “cut-and-run” campaign, which would leave Iraq in political, social, and
religious chaos. Notice that an extreme position is easy to “knock down”—so to speak—since no reasonable person really ever goes in for extreme positions. Interestingly, neither side actually advocated the extreme position it was accused of holding. Each side was simply painting the other’s argument as a superficially similar, much-easier-to-reject caricature of what the other was really advocating. Each committed the straw person fallacy against the other.
Consider a comment American political pundit Rush Limbaugh made on
his radio show on January 15, 2009: “Joe Biden also said that if you don’t
pay your taxes you’re unpatriotic. You’re paying taxes, you’re patriotic. It’s
a patriotic thing to do. Paying increased taxes is a patriotic thing to do, so I
guess we’re to assume under Biden’s terminology that Tim Geithner was
unpatriotic.”
To be sure, Tim Geithner might have been unpatriotic to ignore several
thousand dollars of tax debt, but this is not what Biden said. Biden was
referring to the higher taxes for wealthy citizens proposed by then-presidential hopeful Barack Obama. The online news site MSNBC reported
Biden’s actual words in an article from September 18, 2008: “Noting that
wealthier Americans would indeed pay more, Biden said ‘It’s time to be
patriotic … time to jump in, time to be part of the deal, time to help get
America out of the rut.’ ” This is quite different from, “If you don’t pay
your taxes you’re unpatriotic.” Limbaugh gets it right the second time:
“Paying increased taxes is a patriotic thing to do.” But then he conflates
paying taxes with paying increased taxes: “I guess we’re to assume under
Biden’s terminology that Tim Geithner was unpatriotic.” Geithner didn’t
pay at all, and Biden didn’t comment on whether this was unpatriotic
(though Biden supported Geithner’s appointment as US Secretary of the
Treasury). Therefore, Limbaugh has set up a straw person against Biden.
Red Herring
Google Chewbacca Defense and you’ll be directed to “Chef Aid,” a classic
South Park episode with its cartoon Johnnie Cochran’s “Chewbacca
Defense,” a satire of attorney Cochran’s closing arguments in the O. J.
Simpson case. In the episode, Alanis Morissette comes out with a hit song
“Stinky Britches,” which, it turns out, Chef had written some twenty years
ago. Chef produces a tape where he’s performing the song, and takes the
record company to court, asking only that he be credited for writing the hit.
The record company executives then hire Cochran. In his defense of the
record company, Cochran shows the jury a picture of Chewbacca and
claims that, because Chewbacca is from Kashyyyk and lives on Endor with
the Ewoks, “It does not make sense.” Cochran continues: “Why would a
Wookie, an eight-foot-tall Wookie, want to live on Endor with a bunch of
two-foot tall Ewoks? That does not make sense … If Chewbacca lives on
Endor, you must acquit! The defense rests.”
We laugh at Cochran’s defense because it has absolutely nothing to do
with the actual case. It is satire because it parallels Cochran’s real defense
regarding a bloody glove found at the scene of Simpson’s ex-wife’s murder.
The glove didn’t fit O. J., so Cochran concluded that the jury must acquit
him … ignoring many other relevant pieces of information. The glove and
the Chewbacca Defense are examples of the red herring fallacy, which gets
its name from a hunting-dog training exercise in which hunters, trying to discern which dogs were best at following a trail, would drag strong-smelling red herring (a fish) across the trail in an attempt to throw the dogs off the scent. In a red herring fallacy,
someone uses claims and arguments that have nothing to do with the issue
at hand in order to get someone to draw a conclusion that they believe to be
true. So, the claims and arguments are the “red herrings” they use to throw
you off the “trail” of reasoning that would lead to another, probably more
appropriate, conclusion altogether.
A common example of a red herring happens almost daily in our more intimate relationships: one person (A) points out a problem or issue with another person (B), and B responds by pointing out some problem or issue that A has. Consider this exchange:
Mary: Bob, you’re drinking too much and I think you should stop because it’s affecting
our relationship.
Bob: Oh yeah. Let’s talk about some of your problems. You spend too much money!
Notice that Bob has used the “Let’s talk about you” move as a red herring to
avoid Mary’s concerns, and her argument altogether.
Here are more examples of the red herring fallacy:
• “I know that it’s been years since our employees got a raise, but we
work really hard to provide the best service to our customers.”
• Mario: That nuclear plant they’re putting in may cause cancer rates to
rise in our area.
Slippery Slope
The slippery slope fallacy occurs when one inappropriately concludes that
some further chain of events, ideas, or beliefs will follow from some initial
event, idea, or belief and, thus, we should reject the initial event, idea, or
belief. It is as if one were standing on a slippery slope: once the first step is taken, there is no way to avoid sliding all the way down. Consider this line of reasoning
which we often hear:
If we allow a show like Nudity and Drugs to continue on the air, then
it will corrupt my kid, then it will corrupt your kid, then it will
corrupt all of our kids, then shows like this one will crop up all over
the TV, then more and more kids will be corrupted, then all of TV
will be corrupted, then the corrupt TV producers will corrupt other
areas of our life, etc., etc., etc. So, we must take Nudity and Drugs off
the air; otherwise, it will lead to all of these other corruptions!
We can see the slippery slope. It simply does not follow that one corrupt TV show will corrupt all these other areas of our lives. Yet suddenly, we’re at the bottom of a bad slope! What happened?
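Schematically, slippery slope reasoning runs something like this (a generic sketch of the pattern, not a quotation from any particular arguer):

1. If we accept A, then B will follow.
2. If B, then C; … and if Y, then Z.
3. Z is clearly unacceptable.
4. Therefore, we must reject A.

The reasoning is fallacious when the chain of conditionals in premises 1 and 2 is asserted without support. If each link in the chain were independently well supported, concluding that we should reject A would not be a fallacy at all.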
In a famous article from 1992 in The Hastings Center Report titled
“When Self-Determination Runs Amok,” bioethicist Daniel Callahan argues
against euthanasia by making this claim: “There is, in short, no reasonable
or logical stopping point once the turn has been made down the road to
euthanasia, which could soon turn into a convenient and commodious
expressway.” Here, in the absence of empirical evidence causally connecting euthanasia with these consequences, Callahan has committed the slippery slope fallacy—it seems that there could be many reasonable and
logical stopping points on the road to euthanasia.
Here are three additional examples:
• “Rated M for Mature video games should be banned. They lead one
to have violent thoughts, which lead to violent intentions, which lead
to violent actions.”
• “Look, if we pass laws against possessing certain weapons, then it
won’t be long before we pass laws on all weapons, and then other
rights will be restricted, and pretty soon we’re living in a Police
State! Let us have our weapons!”
• “I would just stay away from sugar altogether. Once you start, you
can’t stop … and pretty soon you’re giving yourself insulin shots.”
False Dilemma
A false dilemma (also called the either/or fallacy) is the fallacy of concluding something based upon premises that include only two options when, in fact, there are three or more options. We are often inclined to take an all-or-nothing, black-or-white approach to certain questions, and this usually reflects a false dilemma in our thinking. In many cities and counties in the United States, for example, officials will altogether ban Christmas trees, Nativity scenes (showing the baby Jesus in the manger), or the Ten Commandments in front of public buildings like courthouses because people reason:

Either we display the symbols of every religious tradition, or, to be fair, we display none of them. We cannot possibly display the symbols of every religious tradition. So, we must display none of them.
But what about a third option? Could the public building include a few
religious traditions, instead of excluding all of them? Maybe checking to
see which religions in the city have the most adherents and putting up
physical symbols to represent only those religions? Or perhaps, the city
need not do anything at all; it could simply let adherents of religious
traditions set up a small token to their tradition themselves. Also, why think
people have to be “fair” all the time? What if someone were offended? And
why think exclusion is always unfair?
Here’s another common example. If you’re not an auto mechanic, you
may go out to your car one day, try to start it, realize it won’t start, and
think: “Ok, either it’s the starter that’s the problem or it’s the battery that’s
the problem.” You check your lights and your radio—they both work, and
so you immediately infer: “Well, it’s not the battery that’s the problem. So,
it must be the starter.” In argument form, it looks like this:
1. Either it’s the starter that’s the problem or it’s the battery that’s the
problem.
2. It’s not the battery that’s the problem.
3. Therefore, it’s the starter that’s the problem.
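Using the propositional symbols that appear later in this book—letting S stand for “it’s the starter that’s the problem” and B for “it’s the battery that’s the problem” (our own labels, chosen for illustration)—the argument has the valid form known as disjunctive syllogism:

1. (S v B)
2. ~B
3. Therefore, S

The form is perfectly valid; the trouble is premise 1. It offers only two options when there are several others—a dead alternator, a blown fuse, a faulty ignition switch, or an empty fuel tank, for example—so the argument rests on a false dilemma.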
Anaxagoras and the Peripatetics held that the parts of a body are in
every respect similar to the whole; that flesh is formed of minute
fleshes, blood of minute drops of blood, earth of minute earths, gold,
water, of minute particles of gold and water. This doctrine (with other
similar ones) was called in later times homoiomereia, that is, the
“likeness of parts (to the whole).”
Exercises
Real-Life Examples
*****
“Wheat People vs. Rice People: Why Are Some Cultures More
Individualistic Than Others?”
T. M. Luhrmann
New York Times, December 3, 2014,
http://www.nytimes.com/2014/12/04/opinion/why-are-some-cultures-more-
individualistic-than-others.html?smid=fb-share&_r=1
…
In May, the journal Science published a study, led by a young University of
Virginia psychologist, Thomas Talhelm, that ascribed these different
orientations to the social worlds created by wheat farming and rice farming.
Rice is a finicky crop. Because rice paddies need standing water, they
require complex irrigation systems that have to be built and drained each
year. One farmer’s water use affects his neighbor’s yield. A community of
rice farmers needs to work together in tightly integrated ways.
Not wheat farmers. Wheat needs only rainfall, not irrigation. To plant and
harvest it takes half as much work as rice does, and substantially less
coordination and cooperation. And historically, Europeans have been wheat
farmers and Asians have grown rice.
The authors of the study in Science argue that over thousands of years, rice-
and wheat-growing societies developed distinctive cultures: “You do not
need to farm rice yourself to inherit rice culture.”
…
I write this from Silicon Valley, where there is little rice. The local wisdom
is that all you need is a garage, a good idea and energy, and you can found a
company that will change the world. The bold visions presented by
entrepreneurs are breathtaking in their optimism, but they hold little space
for elders, for long-standing institutions, and for the deep roots of
community and interconnection.
…
Wheat doesn’t grow everywhere. Start-ups won’t solve all our problems. A
lone cowboy isn’t much good in the aftermath of a Hurricane Katrina. As
we enter a season in which the values of do-it-yourself individualism are
likely to dominate our Congress, it is worth remembering that this way of
thinking might just be the product of the way our forefathers grew their
food and not a fundamental truth about the way that all humans flourish.
1 Quoted in Douglas Walton, Ad Hominem Arguments (Tuscaloosa: University of Alabama Press,
1998), p. xi.
part four
Application
In
Chapters 11 and
12, you will have additional opportunities to apply the critical thinking
skills you’ve learned in this book to real arguments. In
Chapter 11, we introduce the problem of fake news, misinformation,
and disinformation, and we discuss some closely related concepts,
like satire and propaganda. In
Chapter 12, we discuss the complicated problem of conspiracy
theories. Bad conspiracy theories tend to fall apart, and so there’s little need to worry about them. Good conspiracy theories are “self-insulating”—structured to deflect rational evaluation—and thus tend to repel traditional
strategies for evaluating them. We will explore some of the best
available strategies for recognizing and avoiding conspiracy theories.
11
Thinking critically about fake news
Here, we discuss a relatively new problem in the history of critical thinking: fake news.
We explain what fake news is, identifying different types and distinguishing them from
real news. We offer some tips and strategies for identifying and avoiding fake news,
and then compare fake news to some older phenomena, such as parody and
propaganda.
Currency
The more recent the information, the more reliable it often is. Ask yourself:
When was the information published? Has the information been revised or
updated?
Scientific conclusions change quickly sometimes, especially in fields like
medicine and epidemiology. So, to be sure you have the best information,
you want to go to the most recent sources. Exceptions may include topics in
history or classics. If you’re researching “Boyle’s Law” in chemistry or “the
Battle of Chickamauga,” older sources may be better than contemporary
sources. Because of this, “timeliness” might be a better way to think of
“currency.” But even in history and classics, more recent interpretations of
concepts, claims, or events may be updated in relevant ways. For example,
historical claims about “oriental” people (instead of Asian) or
“Mohammedans” (instead of Muslims) reflect cultural assumptions about
those people that are now largely believed to be false and that should not be
perpetuated today.
Relevance
The information should bear directly on your topic. Ask yourself: Does this
information relate to my topic or answer my question? How important is it
to my point?
The idea here is to avoid fallacies like ad populum, red herring, and ad
hominem (see
Chapter 10 for more on these fallacies). If your research question is whether
“school vouchers” are effective, an article on how rarely school districts
offer vouchers for parents to send children to the school of their choice
would not be relevant to answering the question. Perhaps school districts do
not make such decisions based on effectiveness, or perhaps the decision is
out of their hands. Whether school districts offer vouchers is a red herring
for the question of whether voucher systems are effective.
Authority
Accuracy
Accurate information is, obviously, more reliable than information that is
inaccurate. A better way of putting the point is: Information supported by
evidence is more reliable than information that is not. Ask yourself: Where
does this information come from? Is the information supported by
evidence?
Sometimes the information you’re reading makes claims you can check
for yourself alongside those you can’t. If the information is not reliable on
those issues you can check for yourself, you shouldn’t trust it on those
issues you can’t. Further, reputable information will usually include the
sources of its claims, whether they came from empirical research, whether
they were made by experts in the field, or whether the person writing it is
the relevant expert. If an article does not do this, and especially if there is
no author or sources listed, the information is less likely to be reliable. Even
sources of information that are known for reliability—such as the Centers
for Disease Control and public health agencies—will sometimes distribute
information that doesn’t include an author or the material’s sources. This is
bad practice, and it makes it hard for non-experts to trust them.
Purpose
Move 1: Check for Previous Work
The key to this advice is: Use fact-checking websites. The easiest way to
get started evaluating any piece of information is to see what other people
say about it. You don’t have to believe the results, but you will very likely
at least get a perspective on why the bit of information you’re evaluating is
likely true or false. Instead of having to do all the legwork yourself, you’re
acknowledging that other people have probably done most of the work for
you, and all you have to do is sift through it for the most plausible analysis.
Later in this chapter, we provide a list of “Websites That Can Help,” and
many of these are fact-checking sites.
Move 2: Go Upstream
If fact-checking leaves you unsatisfied, look for any sources cited in the
piece of information itself. If there’s nothing mentioned, that’s a red flag!
Most national news is covered by a handful of highly seasoned reporters or
scientists whose work has been either syndicated or summarized by other
sources. There should be a link to the original information that you can go
read yourself, and then evaluate that version of whatever claims are being
made. If there is no original, or at least corroborating, story, it’s good advice
to suspend judgment until more evidence comes in. Caulfield’s book (don’t
forget, it’s free) gives excellent tips on how to go upstream.
Move 3: Read Laterally
This move begins from the assumption that you probably won’t be able to
tell whether a source is reliable just by evaluating the source. Mis- or
disinformation that’s done well will be indistinguishable from authentic
information sites. Computer and design technology puts the ability to make
professional-looking web content straight into the hands of deceivers.
Therefore, in order to evaluate whether a source of information is reliable,
you will need to see what other sources of information say about that
source. If multiple sources say the source you’re looking at is reliable, that’s
a mark in its favor.
Move 4: Circle Back
In some cases, all this lateral checking and double-checking leads you in a
circle. One website confirms a story by citing another story that cites the
original story. What do you do then? Start over. Change your search terms.
Pay attention to when the story supposedly happened and look at the date of
publication. Remember, too, that some of the doctors who are part of the
Front Line COVID-19 Critical Care Alliance are real doctors at reputable
places, even though the website is a political tool that promotes bad science
(this is a fun one to investigate on your own). So, you may need
to do some extra creative searches to understand whether some websites are
reliable.
This is the quickest way to find the most common perspectives on an issue.
Make a list. Then start asking questions.
You will never get the whole story from one source of information—not
even Wikipedia. Look at sources from different political perspectives, from
religious and non-religious perspectives. Again, the key is to get a sense of
the different angles that are possible so you can then go check them out.
Know your sources, too. International news outlets are (typically) going to
have more factual reporting and fewer editorial/opinion pieces. Blog posts are almost
always strongly slanted to one position or another.
Compare the sources against one another and see what they agree on. The
truth is sometimes caught in the middle.
Ask some folks who work in the area you’re asking about, or in areas
closely related. Get a sense of how the “insiders” think and talk about the
issues. Maybe the public conversation has just blown a really rather mundane issue up into a firestorm.
Most of the issues you’re researching are ones you care about, and once
your emotions are invested, you’re in dangerous territory. If the news makes
you angry, afraid, or anxious, take some time away from it before diving
into research. Your emotions have primed you to find one perspective on
the issue more correct than any other. And someone may have designed a
story to provoke just that response in you, so be wary.
You’re getting good at this logic stuff, but that doesn’t mean you should
stop double-checking yourself. While they (whoever they are) might be
wrong, it’s also possible that you are wrong. Keep digging until you have
the evidence you need for a strong conclusion.
There is no reason you have to come down on one side or another on any
issue. If you don’t have enough evidence, or if the evidence you have
doesn’t point clearly one way or the other, suspend judgment. Hold off.
Wait until the rest of the evidence is in. You may have to make some
decisions about what to do in the meantime (as when pandemics spread
quickly, and the experts are still trying to figure out all the details), but you
can still hold off on what you believe. Giving yourself the freedom to
suspend your belief also saves you a lot of stress.
Websites That Can Help
FactCheck: https://www.factcheck.org/
Fact Check Review @ RealClearPolitics:
https://www.realclearpolitics.com/fact_check_review/
Lead Stories: https://leadstories.com/
Media Bias Chart: adfontesmedia.com
Media Bias / Fact Check: https://mediabiasfactcheck.com/
Open Secrets: https://www.opensecrets.org/
PolitiFact: https://www.politifact.com/
Poynter Institute: Fact Checker News: https://www.poynter.org/media-
news/fact-checking/
Poynter Institute IFCN: International Fact Checking Network:
https://ifcncodeofprinciples.poynter.org/signatories (a list of fact-check
agencies that have publicly committed to ethical fact-checking in
journalism)
RationalWiki: https://rationalwiki.org/wiki/Main_Page
Snopes: https://www.snopes.com/
Truth or Fiction?: https://www.truthorfiction.com/
Washington Post Fact Checker:
https://www.washingtonpost.com/news/fact-checker/
Parody
Parody is very similar to satire (which we explained earlier in this chapter).
Like satire, most effective parody pieces often mimic the look and feel of
whatever media they are parodying (e.g., if a newspaper article is being
parodied, the finished piece will look almost exactly like a ‘real’ newspaper
article). Like satire, parody might also incorporate the use of various
statistics and quotations from external sources in order to establish a
baseline between the author and the audience. Where the two differ,
however, is that parody relies extensively on non-factual information. In
fact, parody pieces walk a fine line between plausibility and absurdity. The true difficulty for parody authors is making their absurdities plausible, thus enabling both the author and the audience to share a chuckle.
That’s the rub, as Hamlet might say. That is where the entertainment and
fun factor of parody lies. If the author is too subtle in their parody, then the
audience misses out on the shared gag and, thus, the comedy is ruined.
Thus, parody can be considered “fake news” in two ways. First, like satire,
it mimics real news broadcasts but is, in fact, not a news broadcast; second,
it can mislead the audience into believing outlandish claims if not presented
properly.
Alternative Facts
In January of 2017, the White House Press Secretary, Sean Spicer, said that
the audience for the latest Presidential inauguration was “the largest
audience to ever witness an inauguration, period, both in person and around
the globe.” This was, by all estimates and to the naked eye, not a fact. It
was, in fact, false. However, the president’s counselor Kellyanne Conway
explained that this statement was not inconsistent with facts, it was simply
an “alternative fact”: “You’re saying it’s a falsehood and Sean Spicer, our
press secretary, gave alternative facts to that” (Jaffe 2017). What could she
have possibly meant?
In all likelihood, Conway made up the term on the spot in an unfortunate
attempt to defend Spicer. But afterward, the term was adopted into the
social conversation. The most likely interpretation is that reality is whatever
you want it to be, or perhaps more accurately, whatever people in power say
it is. When journalist and unofficial White House spokesperson Scottie Nell
Hughes was asked whether the US President was free to make up lies about
who won the popular vote, she said, “there’s no such thing anymore
unfortunately as facts” (Fallows 2016). In other words, the president can
make up whatever he wants, and no one can call it “wrong” or “false.” The
whole situation was reminiscent of George Orwell’s 1984, when the
government changed its position on who it was at war with. While they had
been saying they were at war with Eurasia, they were actually at war with Eastasia, and all media had to be changed to reflect that.
Exercises
Example:
3. “Why Yale Law School Left the U.S. News and World
Report Rankings,” Adam Harris
(https://www.theatlantic.com/ideas/archive/2022/12/us-
news-world-report-college-rankings-yale-law/672533/).
4. “Why I Joined, Then Left, the Forward Party,” Joseph
Swartz
(https://www.theatlantic.com/ideas/archive/2022/12/forward
-third-party-andrew-yang/672585/).
(https://www.theatlantic.com/ideas/archive/2022/12/new-
haven-connecticut-gun-violence/672504/).
https://hbr.org/2022/10/the-curse-of-the-strong-u-s-
economy
https://www.washingtonpost.com/world/2022/12/31/ukr
aine-putin-zelensky-missiles/
b. “Russia Fires 20 Cruise Missiles at Ukraine on New
Year’s Eve, At Least 1 Dead, Dozens Injured,” Caitlin
McFall
https://www.foxnews.com/world/russia-fires-20-cruise-
missiles-ukraine-new-years-eve-least-1-dead-dozens-
injured
https://www.latimes.com/entertainment-
arts/tv/story/2022-03-02/ukraine-russia-war-racism-
media-middle-east
https://www.foxnews.com/media/nyt-journalist-media-
russia-ukraine-racial-biases
https://www.theatlantic.com/ideas/archive/2022/12/elon
-musk-twitter-finances-debt-tesla-stock/672555/
https://news.columbia.edu/news/what-critical-race-
theory-and-why-everyone-talking-about-it-0
https://www.foxnews.com/us/what-is-critical-race-
theory
Answer:
1. Combination Covid and flu tests prove that they are the
same virus.
2. Masking is ineffective against the spread of Covid-19 or
any of its variants.
3. Covid-19 vaccines are gene therapy.
4. The EU is imposing a “personal carbon credit” system.
5. Hooters is shutting down and rebranding due to changes in
Millennial tastes.
6. The moon landing was staged.
7. Covid numbers were inflated due to medical professionals
counting flu cases among Covid cases.
8. Benedict XVI was forced to retire.
9. Romanian authorities were only able to arrest Andrew Tate
due to the Twitter feud he had with Greta Thunberg.
10. The 2020 election was rigged, and votes were counted
illegally.
References
Caulfield, Mike (2017). Web Literacy for Student Fact-Checkers.
https://open.umn.edu/opentextbooks/textbooks/454.
Fallows, James (2016). “‘There’s No Such Thing Anymore, Unfortunately, as Facts’.” The Atlantic, November 30.
https://www.theatlantic.com/notes/2016/11/theres-no-such-thing-any-
more-unfortunately-as-facts/509276/.
Gelfert, Axel (2018). “ ‘Fake News’: A Definition,” Informal Logic (38)1:
84–117.
Jaffe, Alexandra (2017). “Kellyanne Conway: WH Spokesman Gave
‘Alternative Facts’ on Inauguration Crowd.” NBC News, January 22.
https://www.nbcnews.com/meet-the-press/wh-spokesman-gave-
alternative-facts-inauguration-crowd-n710466.
LaFraniere, Sharon (2021). “Pfizer Says Its Booster Offers Strong Protection against
Omicron,” New York Times, December 08.
https://www.nytimes.com/2021/12/08/health/pfizer-booster-
omicron.html.
Manjoo, Farhad (2008). True Enough: Learning to Live in a Post-Fact
Society (Malden, MA: Wiley).
McBrayer, Justin (2021). Beyond Fake News: Finding Truth in a World of
Misinformation (London: Routledge).
Nichols, T. (2017). The Death of Expertise: The Campaign Against
Established Knowledge and Why it Matters (Oxford: Oxford University
Press).
Orwell, George (1949). 1984 (London: Secker and Warburg).
This chapter has benefitted greatly from Justin McBrayer’s book, Beyond Fake News: Finding
the Truth in a World of Misinformation (London: Routledge, 2021).
1 It should be noted before we dive too deeply into this topic, however, that some photo
“manipulation” is allowable even by the standards of professional journalism. Journalists are
typically only allowed to make “presentational changes” to an image (e.g., adjusting the tone and/or contrast of an image to make it easier to see); they are typically forbidden from making any additions to the image.
12
Thinking critically about conspiracy
theories
In this final chapter, we apply many of the concepts and discussion from the rest of the
book to conspiracy theories. We explain what a conspiracy theory is, offer some tools
for identifying them, and then discuss ways of responding to conspiracy theories, both
as individuals and as a society.
Please remember that Anthony Fauci and Francis Collins are not just
two scientists among hundreds of thousands. As the NIH [National
Institutes of Health] site says, it “invests about $41.7 billion annually
in medical research for the American people.” With that kind of
spending power, you can wield a great deal of influence, even to the
point of crushing dissent, however rooted in serious science the target
might be. It might be enough power and influence to achieve the
seemingly impossible, such as conducting a despotic experiment
without precedent, under the cover of virus control, in overturning
law, tradition, rights, and liberties hard won from hundreds of years
of human experience.
(https://brownstone.org/articles/faucis-war-on-science-the-smoking-gun/)
Note that there’s no argument here. There is only speculation that, because the government funds most scientific research, it could control the results of that research. And the unstated implication is that it does control that research. Note also the size of this conspiracy. It implicates not just those in
the highest positions at the National Institutes of Health (NIH), but every
working scientist on grants funded by the NIH. We will say more about how
the size of a conspiracy affects its plausibility later in this chapter.
Less outlandish versions of the conspiracy theory claim that Covid-19 is
a ploy to make millions of dollars from vaccine sales—for the
pharmaceutical companies that make them and for the politicians who have
substantial investments in pharmaceutical companies. Some even claim that
the virus was created in a lab and released into society precisely for this
reason. In these versions of the conspiracy, mainstream experts are simply
pawns in the bigger plot. Non-mainstream experts (such as the Front Line
Covid-19 Critical Care Alliance, FLCCC) who try to reveal the “truth”
about the conspiracy or promote alternative, non-vaccine-related treatments
are ridiculed by the mainstream and prevented by social media platforms
from having a voice. This further strengthens the Covid-19 conspiracy
theory that the government is trying to cover up their plan.
We might say, then, that the problem is not that conspiracy theorists don’t
trust experts—they often point to people who would pass as experts, that is,
people with credentials and experiences related to the debate. Instead, we
could say that conspiracy theorists use experts poorly. For example,
members of the FLCCC are real doctors who have certifications from real
medical societies, who practice in real medical institutions. They certainly
seem relevant. But the FLCCC’s main goal is to promote the drug
ivermectin as an alternative to vaccines to reduce the symptoms of Covid-
19. Yet, a majority of mainstream experts claim that the evidential support
the FLCCC offers for ivermectin is paltry and insubstantial.
But wait. This raises an important question: Which group of experts
should we non-experts believe, those from the FLCCC or those from
mainstream health sciences? This disagreement between the FLCCC
scientists and mainstream scientists doesn’t seem like a disagreement
related to conspiracy theories. It seems like a typical scientific dispute, such
as the debate over evolution or the question of whether drinking wine
lowers your risk of heart disease. In all these cases, there are just “experts
on both sides.” But there is a difference.
Whereas typical debates over scientific claims involve competing experts
pointing to different pieces of evidence and methodology, conspiracy
theorists like Tucker and the FLCCC claim not just that they are right on the
basis of the evidence, but that a powerful group is conspiring against them.
The president of the FLCCC, Pierre Kory, writes that the government is
suppressing research on generic drugs in order to give preference to
vaccines:
(https://thefederalist.com/2021/12/16/studies-proving-generic-drugs-can-
fight-covid-are-being-suppressed/ )
Here we see clearly that the conspiracy theory appeals to values other than
truth (risk to public safety vs. the desire for profits) and the ad hominem
fallacy (the fact that companies like Monsanto have a profit motive settles
the issue of whether GMOs are safe).
The same kind of simple reasoning is involved in all kinds of conspiracy
theories, from the moon landing to aliens at Roswell to the alleged governmental plot to destroy the Twin Towers. The idea that the government was more
concerned about public opinion than confident that it could put people on
the moon was something the public at large could understand. The
harrowing scientific uncertainty and incomprehensible mathematical
complexity that it took to make the actual event happen was not.
Cascade logic is another key feature of conspiracy theories. The process starts with a suspicion that sounds like common sense. If that suspicion seems to hold, then another closely related suspicion is raised on top of it. The accumulating suspicions, reinforced by multiple voices, make a very implausible claim seem likely to be true.
Cascade logic often occurs when we have very little information on a
topic but are pressured to make a decision in very little time about what to
believe based only on a conspiracy theorist’s reasoning. The pressure to
make a decision makes the jump from suspicion to suspicion easier to
accept. The event is framed for us non-experts by the conspiracy theorist.
Let’s take the SARS-CoV-2 virus as an example. Conspiracy theories are
usually at least possibly true, and so we start from that reasonable
assumption (“Think about how many politicians have stock in
pharmaceutical companies, and how much money pharmaceutical
companies could make off of a worldwide vaccine mandate”). We are then
given a simple, “common-sense” explanation in terms of a conspiracy (“Dr.
Fauci was working in Wuhan, China. Don’t you find that a little suspicious
that the virus originated there and that Fauci is the one advising the
government?”). If that explanation makes sense to us (or if we are driven
primarily by psychological needs other than truth—including the
conspiracy theorist’s reputation), we may concede (even if only weakly) the
conspiracy theorist’s point. We might even construct our own, weaker
version of it because, of course, we are critical thinkers, too (“My friend
wore a mask and got Covid anyway. I bet masks are just a test. If the
government can get us to trust them about masks, they can enact all kinds of
social control measures. If I refuse to wear a mask, I’m showing how
shrewd a thinker I am and standing with free people against the
government”). All this is bad enough, but here’s where it gets complicated.
First, there isn’t just one conspiracy theorist saying this; there are dozens.
And some of them are people we trust. So, without any additional
information, without getting any more evidence, the weak evidence for the
conspiracy suddenly seems much stronger. Weak evidence bolstered by
reputation cascades on top of (is added to) previous weak evidence, and
suddenly the conspiracy theory doesn’t just look possible; it starts to look
plausible.
Second, even if we decide to dig in and look for conflicting evidence, any
new information we look for or come across must now overcome our initial
suspicion. To ease our concerns, we would have to find reasons for thinking
that our plausible-sounding theory doesn’t fit with the evidence. But, as you
might imagine, the information put out by official, mainstream experts—the
Centers for Disease Control, the World Health Organization, independent
doctors, Dr. Fauci, and so on—is not designed to address these concerns.
That information is focused on issues the mainstream experts think the
public needs to know about. In the case of Covid-19, it was the nature of the
virus, how dangerous it is, possible preventive measures, and the race for a
vaccine. Since the mainstream experts don’t address our suspicions, we
become even more suspicious because maybe they don’t want to
accidentally admit they’re lying. The seeming evidence against the
mainstream experts mounts: “All these experts act like nothing is
suspicious, and they’re treating us as if we can’t think for ourselves!” By
the time that disgruntled doctors join the conspiracy theorists (see heroic
lone wolf below), the cascade effect moves the needle on the conspiracy
theory from merely plausible to slightly more plausible.
Conspiracy theorists then assume that the conspirators have an
immense amount of power to keep their plot secret. Both conspiracy
theorists and those who oppose them agree that the only way large-scale
conspiracies can be executed is with large-scale power. But, in order to pull
off some of what’s claimed, the power required is truly not credible.
Consider how many people would have to buy in or be bought off to
conceal a conspiracy as large as the United States destroying the Twin
Towers or the Covid-19 pandemic. It’s not just one or two key people, but
hundreds of thousands, in some cases—millions in others. Recall Jeffrey
Tucker’s claim about the NIH earlier. That’s grandiose enough—the whole
US scientific community is under the sway of the NIH. What he didn’t
mention was that the conspiracy he’s concerned about would require
collusion among the governments and health organizations of every
industrialized country in the world, from England to Japan, Argentina to
Australia. Such a conspiracy, if it were real, would be so large and
monumental that there’s nothing any of us could do about it anyway. The
more plausible story is that the world is complicated, and whatever fraud,
deception, or error is taking place during the Covid-19 pandemic (for there
surely is some), it’s taking place on a much smaller scale.
So far, we have seen that conspiracy theorists identify a psychological
need; give us a simple, common-sense explanation; put us in a defensive
position that is not addressed by mainstream experts; and then exaggerate
the power of the players. A final key element is the emergence of a
minority of heroic nay-saying experts, lone wolves who stand on the side
of the conspiracy theorists. For any debate, as you might guess, there are
experts on both sides. And in the case of conspiracy theories, the number of
people on their side is often small. But it is precisely their small number
that is taken to be evidence of a conspiracy. Only a few would have the
courage to speak out against such powerful enemies. And when these lone
wolf experts join the conspiracy theorists, their small numbers make their
testimony seem stronger than it is. If experts are on board, maybe there’s
something to the conspiracy theory after all.
The problem is that the experts who support conspiracy theories rarely
represent the current state of evidence on the issue. In the case of the
FLCCC, their evidence that ivermectin works against Covid-19 is scant
and, according to other experts, poorly conducted. As of this writing, two
of their primary research articles aiming to demonstrate the effectiveness of
ivermectin and other controversial treatments have either been retracted or
required to submit corrections indicating a lack of clinical efficacy (“Review of the Emerging Evidence Demonstrating the Efficacy of Ivermectin in the Prophylaxis and Treatment of Covid-19,” 2021, correction issued with disclaimer; “Clinical and Scientific Rationale for the ‘MATH+’ Hospital Treatment Protocol for Covid-19,” 2021, retracted).
Some might argue that this is further evidence of the conspiracy. The
problem for novices is that the experts in the field—namely, those who have
raised concerns about the accuracy of the research—are the only people in a
position to know whether the paper was retracted for legitimate scientific
reasons. In most cases, the question, “Which set of experts should we
believe?” is easy to answer. When a majority of experts in a domain agree
on something, that reflects the current state of understanding in the field.
The minority experts are usually tinkering, pressing, challenging the status
quo. They’re often wrong. But sometimes they’re right. And only time
(after the majority of experts have reviewed their work) can tell whether
they were right. In those cases, novices can do no better than accept the
state of the field.
In a few cases, the question is slightly different. For example, when an
issue is time-sensitive, like the Covid-19 pandemic, the problems and
questions are so new that one of two things can happen. (1) There is no
consensus because there is no agreed-upon interpretation of the events. This
was true early in the Covid-19 pandemic when the virus was spreading
quickly but experts knew very little about it and how best to avoid it. Time-
sensitive cases are playgrounds for conspiracy theorists because they have
an opportunity to plant the seeds of their non-truth-based, relatively simple
explanation. By the time the experts have agreed on the basic facts,
conspiracy theories are already entrenched in some people’s beliefs.
(2) The consensus is based on political expediency rather than
independent review. Experts are supposed to speak on topics based on their
understanding of the field and on their own research. This is why the more
experts who agree on something, the more likely that thing is to be true. But
experts are human. So, for any topic where the stakes are high, some
experts will agree with other experts simply in order to display a “unified
front” to the public. For example, while we now have overwhelming
evidence that climate change is happening and is accelerated by humans, it
wasn’t always the case. However, the stakes were always super high. If
greenhouse gas emissions were causing global warming, then it was
imperative to lower them as quickly as possible, even before all the
evidence was in. Further, lowering greenhouse gas emissions wouldn’t be
bad for the environment, so the only expense would fall to companies who
produce lots of emissions. This possibility was fertile ground for conspiracy
theorists. Many were concerned that scientists were simply supporting the
mainstream view of global warming for the sake of inciting political action
or securing positions of authority in universities rather than because of the
best science. Whether this was true or not is unclear. Nevertheless, such a decision would be made for a reason other than truth, and it would involve a kind of conspiracy (scientists secretly agreeing among themselves to say the earth is warming); if it were true, conspiracy theorists would be right to point out a problem for expert consensus.
This story hits almost all the key elements of a conspiracy theory: the
psychological need of safety, simplicity, exaggerated power, a heroic lone
wolf. If we didn’t know it was true, it might sound like fantastical political
fiction. But if real conspiracies have many of the marks of conspiracy
theories, what are we outsiders to think? Is there any way to avoid the traps
that conspiracy theorists set?
The first step in responding to the problem of conspiracy theories is to
give ourselves a little charity. Some conspiracies are real, and some
conspiracy theories are especially compelling—especially if you live in a
community where one or more is common among people you trust. Plus,
since we know we are susceptible to fallacies, we shouldn’t be surprised
when we find ourselves believing things for bad reasons.
Once we’ve admitted this, we can (a) approach the question more
humbly, and (b) prepare ourselves to do the hard work of gathering and
weighing evidence. Here are five strategies to help you determine whether
an explanation is genuine or a conspiracy theory. These strategies aren’t
foolproof. Every conspiracy theory is a little different, and in some cases,
evidence is difficult to find or understand (such as the evidence surrounding
climate change). Nevertheless, the more careful you are, the more thorough
you are, and the more even-handed with the evidence you are, the better
your chances of avoiding believing something for bad reasons.
Use the Same Standards for Your Beliefs That You Apply to
Others’
When we believe for bad reasons, we often have different standards for
those beliefs than we do for others who hold contrary beliefs. We point out
evidence that confirms our beliefs but look for evidence that disconfirms
others’ (confirmation bias). We consider the possibility that we are right to
be more significant than the possibility that we are wrong (anchoring). We
attribute others’ beliefs to bad motives or conflicts of interest rather than
bad evidence (ad hominem). We suggest that the fact that there’s little
evidence that we’re wrong counts as evidence that we’re right (argument
from ignorance).
Believing responsibly, on the other hand, requires that we apply the same
standards of good evidence to all beliefs, no matter who holds them. If a
respected epidemiologist says that Covid-19 is life-threatening, we cannot
simply reply that she is only saying that because she is a member of the
medical establishment and then point to a different epidemiologist as
evidence to the contrary. Use the same standard for your own beliefs that
you hold others to.
Exercises
References
Gorman, Jack, and Sarah Gorman (2017). Denying to the Grave: Why We
Ignore the Science That Will Save Us (New York: Oxford University
Press).
Sunstein, Cass R., and Adrian Vermeule (2008). Conspiracy Theories
(January 15). Harvard Public Law Working Paper No. 08–03, University
of Chicago, Public Law Working Paper No. 199, U of Chicago Law &
Economics, Olin Working Paper No. 387.
http://dx.doi.org/10.2139/ssrn.1084585.
Answers to select “Getting familiar
with…” exercises
1. Question
3. Command
5. Emotive Iteration
7. Emotive Iteration
9. Question (This one depends on context. Literally, it is a question. But,
depending on who is saying it—and the tone with which it is said—it
could be a command.)
11. Prescriptive Claim
13. Descriptive Claim
15. Question (If asked rhetorically, in an exasperated tone, this is an
emotive iteration.)
17. Prescriptive Claim
19. Question
1. Conditional
3. Conjunction
5. Simple
7. Bi-conditional
9. Disjunction
11. Conditional
13. Disjunction
15. Negation
b.
1. Conditional
3. Bi-conditional
5. Conjunction
1. Some
3. None
5. Some
7. All
9. Some
11. All
13. None
15. All
17. Some
19. All
1. Given the earth’s speed and the time it takes to complete one revolution around the sun, then, assuming the sun is at the center of the orbit, we can calculate the distance between the earth and the sun (Indirect). Established scientists like Neil deGrasse Tyson say that the Sun is about 92,960,000 miles from Earth (Indirect).
3. The people who program calculators believe that 2 + 2 = 4 (Indirect).
I understand the relationship between 2 and 4 such that it is clear to
me that 2 + 2 = 4 (Direct).
5. The only photo from the moon has shadows in two different
directions, which wouldn’t happen if the sun were the light source
illuminating the picture; it must have been taken in a studio
(Indirect). Scientists working for NASA have testified that all moon
landing attempts were failures (Indirect).
7. I can tell the difference between different colors, and these words are
black (Direct). Books are usually printed with black letters on white
pages (Indirect).
9. I remember from Chemistry class that carbon atoms are larger than
hydrogen atoms (Indirect). A chemist friend of mine says that
hydrogen-1 atoms have a smaller atomic mass than carbon-12 atoms
(Indirect).
11. I have suffered quite a bit (Direct). I have heard about the suffering of
many people all over the world (Indirect).
13. Textual scholars reject that method as unreliable (Indirect). There are
clear examples where that calculation leads to the wrong conclusion
about the author of a text (Direct—counterexamples demonstrate
directly that the method is unreliable).
15. I look older than my siblings, and people who look older usually are
older (Indirect). My and my siblings’ birth certificates tell me that I
am older than they (Indirect).
17. Simpson had more motive and opportunity than anyone (Indirect).
The murder weapon, style of killing, and time of death indicate that
Simpson is likely the killer (Indirect).
19. Humans and chimpanzees are similar in genetic make-up (Indirect).
Humans have physical features similar to chimpanzees (called
homologies) (Indirect).
b.
1. Argument. Premises: The rug is stained. The rug was not stained last
night. The dog is the only thing that has been in the room since last
night. Conclusion: The dog stained the rug.
3. List
5. Narrative that includes information. We learn not only the order of
events but also details about the events.
7. Argument. Premises: She is 21 years old (legal driving age). She has
no history of accident. She does not drink, which means she is not at
risk for accidents due to alcohol. Conclusion: She is trustworthy to
drive your vehicle.
9. Narrative
11. Narrative
13. Informational statement that contains a narrative. (The key is that the
narrative doesn’t tell us any details of the story; we don’t know any
details of the childhood home, the family lineage, etc. This suggests that it is just meant to inform us about what sort of story he told.)
15. Narrative that contains an argument. Premises: [The victim was shot
—enthymematic premise.] The bellhop had the gun that killed the
victim. The bellhop is the only person with motive. There was no
other evidence about who killed the victim. Conclusion: The bellhop
killed the victim.
17. List that informs (This list is given in response to a question. The list
serves as an answer to the question, so it is an informational
statement.)
19. Argument. Premises: Edwin Hubble discovered evidence of a Big
Bang event. Arno Penzias and Robert Wilson discovered more evidence of a
Big Bang event. Conclusion: You should believe the Big Bang theory.
Getting familiar with… identifying arguments.
1. The wind blew against the sea wall for a long time.
3. To succeed at any job well, you must be able to work with others who
act differently than you.
5. Circumstances are unfortunate. We must all do things we do not like
to keep things from getting worse.
7. Evaluate yourself or others in a way that helps you or them see their
performance accurately.
9. I will not run for office if I don’t believe I see what needs to be done
to make people better off than they are and if I don’t have the ability
to do those things.
(It is difficult to make this much more precise because it isn’t clear
what sorts of things could be done, what it would mean for someone
to be “better off,” or what it would mean to have the ability to do
those things. Politicians are slippery.)
11. Please work hard and do your job well.
13. We need to design and create products that no one else has thought
of.
15. We’re trying out some new policies this year that we hope you’ll like.
17. Don’t be afraid to be creative. Experiment with new ways of doing
things.
19. Don’t waste time.
1. Syntactic ambiguity.
a. The lab mice assaulted him.
b. He was assaulted next to the lab mice.
3. Lexical ambiguity: prescription
a. He gave her a medical prescription for a medication that
alleviates pain.
b. He gave her advice about how to cause pain.
5. Syntactic ambiguity.
a. They were discussing cutting down the tree that is growing in
her house.
b. While they were in her house, they were discussing cutting
down the tree.
7. Vague. It is unclear what counts as “unfair” in this context. Some options:
a. The test was more difficult than the student expected.
b. The test did not give all students an equal chance at success.
c. The test did not give students a reasonable chance at success.
9. Vague. It is unclear what counts as “nice” in this context. Some
options:
a. She is friendly.
b. She is morally good.
c. She is charitable.
d. She is pleasant.
11. Syntactic ambiguity (but the syntactic ambiguity trades on a lexical
ambiguity with the word “duck”).
a. He saw the duck that belongs to her.
b. He saw her lower her head quickly.
13. Vague. It is unclear what counts as “terrible” in this context. Some
options:
a. Unsuccessful at his job.
b. Supporting policies that harm citizens.
c. Supporting policies with which the author disagrees.
d. Not communicating well or much with citizens.
15. Vague. It is unclear what counts as “great” in this context. Some
options:
a. Locke was an intriguing writer.
b. Locke defended many claims that turned out to be true.
c. Locke argued for his views very ably.
d. Locke was a well-known philosopher.
e. Locke was a well-respected philosopher.
17. Lexical ambiguity. “Match.”
a. He could not find where the tennis match was being held.
b. He could not find the complement to his sock.
c. He could not find the wooden matchstick.
19. Syntactic ambiguity.
a. My uncle, the priest, got married to my father.
b. My uncle, the priest, performed the marriage ceremony for my
father.
1. c: somewhat unlikely. Given that you have only one past experience
with the dog, and that the dog has a track record of being much
calmer, you are unlikely to be bitten again. This conclusion would be
highly unlikely if we had the premise that dogs that are castrated are
much less likely to be aggressive.
3. b: somewhat likely. There is very little to connect the universe and
watches, but they do share at least one trait (complexity),
which makes it slightly more likely that they also share
other traits, such as "having a maker."
5. d: highly unlikely. Given only these premises, we have no idea
whether Rajesh loves Neha. "Loves" is not a transitive relation. The
evidence does not support the conclusion to any degree.
7. d: highly unlikely. The evidence suggests that it is highly likely that
the next bean will be red. Therefore, it is unlikely that the conclusion
is true.
9. b: somewhat likely. Every sporting match is different, so even if these
things increase the likelihood that the Tigers will win, it is not clear
by how much (as many people who bet on such things find out the hard
way).
1.
A: Cats
B: Mammals
3.
A: Voters in the United States
B: People under eighteen years old
5.
A: Mormons
B: People that are rational
7.
A: People
B: People who like it hot
9.
A: Our students
B: People who are rational
11.
A: Shelly and Zoe
B: People who are Hindu
13.
A: Men
B: People who like Lady Gaga
15.
A: Men
B: People who are brown-haired
17.
A: Items in this bin
B: Items that are on sale
19.
A: Dinosaurs
B: Animals that are extinct
1. True
3. False
5. True
7. False
9. False (unless you invoke the existential assumption; then it is true)
11. False
13. True
15. True
17. False
19. False
Getting familiar with… testing categorical arguments with Venn
diagrams.
1. All frigs are pracks. All pracks are dredas. So, all frigs are dredas.
3. A few rock stars are really nice people. Alice Cooper is a rock
star. Hence, Alice Cooper is a really nice person.
5. All CFCs (chlorofluorocarbons) deplete ozone molecules. CFCs
are things that are produced by humans. Therefore, some things
produced by humans deplete ozone molecules.
7. No drugs that can be used as medical treatment should be
outlawed. Marijuana can be used as a medical treatment. Thus,
marijuana should not be outlawed.
A.
1. (C v S)
complex; disjunction
3. F
simple
5.
complex; disjunction
7. ~P
complex; negation
9. H
simple
11. G
simple
13. (F ⊃ (P v S))
complex; conditional
15. ((T v M) v Y)
complex; disjunction
17.
complex; conjunction
19.
complex; conditional
B.
1. (T & D)
3. (P ≡ G)
7. (~A ⊃ ~R)
If you interpret R as, “There is a real reason to care about morality,” then
translate it as ~R, as we have done: (~A ⊃ ~R). If you interpret R as “There
is no real reason to care about morality,” then translate it as R. We prefer
the former as it allows for a more sophisticated analysis of an argument in
which this claim may play a role.
9. (N & ~S)
This one is tricky because of the word “possible.” There is another type of
deductive logic that deals with “possibility” and “necessity,” called modal
logic. We will not cover modal logic in this book, so we just include
“possibility” as part of the propositional claim.
1. (P v Q)
(P v Q)
T T
T F
F T
F F
3. ((A v B) & A)
((A v B) & A)
T T T
T F T
F T F
F F F
5. ((A ⊃ B) v C)
((A ⊃ B) v C)
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
7. (Q ⊃ ((R v P) & S))
(Q ⊃ ((R v P) & S))
T T T T
T T T F
T T F T
T T F F
T F T T
T F T F
T F F T
T F F F
F T T T
F T T F
F T F T
F T F F
F F T T
F F T F
F F F T
F F F F
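These sixteen rows are simply every possible combination of truth values for the four letters, with T listed before F and the leftmost letter changing slowest. If you want a mechanical cross-check of your row listings, here is a minimal Python sketch (our own illustration, not from the text):

from itertools import product

# Print one row per combination of truth values for four letters,
# in the same order as the tables above (T before F, leftmost slowest).
for combo in product("TF", repeat=4):
    print(" ".join(combo))

The first line printed is T T T T and the last is F F F F, matching the table.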
1. (P v Q)
(P v Q)
T T T
T T F
F T T
F F F
3. ((A v B) & A)
((A v B) & A)
T T T T T
T T F T T
F T T F F
F F F F F
5. ((A ⊃ B) v C)
((A ⊃ B) v C)
T T T T T
T T T T F
T F F T T
T F F F F
F T T T T
F T T T F
F T F T T
F T F T F
1. (A & ~ B)
(A & ~ B)
T F F T
T T T F
F F F T ←
F F T F
3. ~ (A ⊃ ~ B)
~ (A ⊃ ~ B)
T T F F T
F T T T F
F F T F T ←
F F T T F
5. ~ (~ W & ~ P)
~ (~ W & ~ P)
T F T F F T
T F T F T F
T T F F F T
F T F T T F
7. ~ (P & Q)
~ (P & Q)
F T T T
T T F F
T F F T
T F F F
9. (A v (B ⊃ C))
(A v (B ⊃ C))
T T T T T
T T T F F
T T F T T
T T F T F
F T T T T
F F T F F
F T F T T
F T F T F
Valid. There is no row where the premise is true and the conclusion is
false.
3. (M ≡ ~ N) ; ~ (N & ~ M) /.: (M ⊃ N)
(M ≡ ~ N) ; ~ (N & ~ M) /.: (M ⊃ N)
T F F T T T F F T T T T
T T T F T F F F T T F F ←
F T F T F T T T F F T T
F F T F T F F T F F T F
Invalid. In row 2, both premises are true, and the conclusion is false.
5. (A ⊃ A) /.: A
(A ⊃ A) /.: A
T T T T
F T F F ←
Invalid. In row 2, the premise is true, but the conclusion is false. How
could this be? It is obviously true that if A is true, then A is true. But is
A really true? That’s what the conclusion says. And that doesn’t follow
from the conditional.
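Verdicts like this one are easy to confirm mechanically: enumerate every row and report any row on which all the premises are true and the conclusion is false. Here is a minimal Python sketch of that check (our own illustration; the helper names implies and is_valid are not from the text):

from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def is_valid(letters, premises, conclusion):
    """Search every row; return (False, row) at the first counterexample."""
    for combo in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, combo))
        if all(pr(row) for pr in premises) and not conclusion(row):
            return False, row
    return True, None

# (A ⊃ A) /.: A
print(is_valid(["A"],
               [lambda r: implies(r["A"], r["A"])],
               lambda r: r["A"]))
# Prints (False, {'A': False}): the second row of the table above.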
7. ~ R ; (S ⊃ R) /.: ~ S
~ R ; (S ⊃ R) /.: ~ S
F T T T T F T
F T F T T T F
T F T F F F T
T F F T F T F
Valid. There is no row where the premises are true and the conclusion is
false.
11. (P ⊃ Q) /.: R
(P ⊃ Q) /.: R
T T T T
T T T F ←
T F F T
T F F F
F T T T
F T T F ←
F T F T
F T F F ←
Invalid. On at least one row, the premise is true and the conclusion is
false.
• Note! There is an error in the text. Numbers 13 and 14 have no
conclusion and, therefore, are not arguments. Thus, they cannot be
tested for validity. Here, we have supplied conclusions to both, and we
have provided the answer to number 13.
Invalid. Let A be “false” and B be “true,” and you will see that it is
possible to construct a true premise and a false conclusion.
(P v ~ Q) ; (R ⊃ ~ Q) /.: (~ P ⊃ R)
T T F T T F F T F T T T
T T F T F T F T F T T F
T T T F T T T F F T T T
T T T F F T T F F T T F
F F F T T F F T T F T T
F F F T F T F T T F F F
F T T F T T T F T F T T
F T T F F T T F T F F F ←
Invalid.
(~ (Y & O) v W) /.: (Y ⊃ W)
F T T T T T T T T
F T T T F F T F F
T T F F T T T T T
T T F F T F T F F ←
T F F T T T F T T
T F F T T F F T F
T F F F T T F T T
T F F F T F F T F
Invalid.
(E v F) ; (E ⊃ F) ; (C & D) /.: (F ⊃ ~ C)
T T T T T T T T T T F F T ←
T T T T T T T F F T F F T
T T T T T T F F T T T T F
T T T T T T F F F T T T F
T T F T F F T T T F T F T
T T F T F F T F F F T F T
T T F T F F F F T F T T F
T T F T F F F F F F T T F
F T T F T T T T T T F F T ←
F T T F T T T F F T F F T
F T T F T T F F T T T T F
F T T F T T F F F T T T F
F F F F T F T T T F T F T
F F F F T F T F F F T F T
F F F F T F F F T F T T F
F F F F T F F F F F T T F
Invalid.
Getting familiar with … using the short truth table method to test for
validity
Test each of the following arguments for validity using the short truth
table method.
FTTF FTTF TF F F
Invalid. In order to make the conclusion false, both P and R must be
false. But we are free to set Q’s value. If we set Q’s value to F, all the
premises are true and the conclusion is false.
TTFFTF TFF
(E v F) ; (E ⊃ F) ; (C & D) /.: (F ⊃ ~ C)
T T T T T T T T T T F F T
F T T F T T T T T T F F T
Invalid.
(~ (A & B) v C) /.: (A ≡ C)
T T F F T F T F F
Invalid.
Valid.
((Q v R) ⊃ S) /.: (Q ⊃ S)
T T T/F F F T F F
Valid.
Invalid.
Valid.
(A ⊃ C) ; (B ⊃ D) ; (S v ~ D) ; S /.: ~ A
T T T T/F T T/F T T T/F T/F T F T
Invalid.
1.
1. A
2. (A ⊃ B) /.: B
3. B 1, 2 modus ponens
3.
1. (P & Q)
2. (R & S) /.: P
3. P 1 simplification
5.
1. ((R v S) & Q)
2. (~Q v S)
3. T /.: (Q & T)
4. Q 1 simplification
5. (Q & T) 3, 4 conjunction
7.
1. ((A ⊃ B) ⊃ (C v D))
2. ~ (C ⊃ D) /.: ~(A ⊃ B)
9.
3. (P ⊃ Q) 1 simplification
4. ~ Q 2 simplification
5. ~P 3, 4 modus tollens
6. (S ⊃ R) 1 simplification
7. ~R 2 simplification
11.
2. R /.: W
3. W 1 simplification
13.
1. A
2. (B v C)
5. D 3, 4 modus ponens
15.
2. (~Q & W)
3. (X ⊃ Y)
6. ~Y 5 simplification
7. ~X 3, 6 modus tollens
8. ~Q 2 simplification
17.
1. ~P
2. (S ⊃ R)
3. (R ⊃ Q)
4. (Q ⊃ P) /.: ~S
5. ~Q 1, 4 modus tollens
6. ~R 3, 5 modus tollens
7. ~S 2, 6 modus tollens
19.
1. ~(B v D)
2. (A ⊃ (B v D))
5. ~A 1, 2 modus tollens
6. ((E & F) & G) 3, 4 modus ponens
7. (E & F) 6 simplification
8. E 7 simplification
a.
1.
1. ((A v B) ⊃ C)
2. (F & D)
3.
1. (~P v (D v Z))
2. (~(D v Z) v B)
3. ~ B /.: ~P
4. ~(D v Z) 2, 3 disjunctive syllogism
5. ~P 1, 4 disjunctive syllogism
5.
2. ~(P v Q)
3. ~~ S /.: ~R
5. ~R 3, 4 hypothetical syllogism
7.
2. (A & B) /.: (B v P)
3. B 2 simplification
4. (B v P) 3 addition
9.
1. A
2. ((A v B) ⊃ ~C)
3. (~ C ⊃ F) /.: ((A v B) ⊃ F)
4. ((A v B) ⊃ F) 2, 3 hypothetical syllogism
b.
11.
1. (~S ⊃ Q)
2. (R ⊃ ~T)
13.
1. ((H ⊃ B) & (O ⊃ C))
2. (Q ⊃ (H v O))
3. Q /.: (B v C)
4. (H v O) 2, 3 modus ponens
5. (H ⊃ B) 1 simplification
6. (O ⊃ C) 1 simplification
7. (B v C) 4-6 constructive dilemma
15.
1. (B ⊃ (A v C))
2. (B & ~A) /.: C
3. B 2 simplification
4. ~A 2 simplification
5. (A v C) 1, 3 modus ponens
6. C 4, 5 disjunctive syllogism
17.
1. ((A & B) ⊃ ~C)
2. (C v ~D)
3. (A ⊃ B)
4. (E & A) /.: ~D
5. A 4 simplification
6. B 3, 5 modus ponens
7. (A & B) 5, 6 conjunction
8. ~C 1, 7 modus ponens
9. ~D 2, 8 disjunctive syllogism
19.
1. (F ⊃ (G ⊃ ~H))
3. (F & ~T)
4. (W ⊃ T) /.: ~H
5. F 3 simplification
6. ~T 3 simplification
7. ~W 4, 6 modus tollens
9. (G v T) 2, 8 modus ponens
1.
1. ( ~ (P ≡ R) v ~ (Q & S))
2. ~((P ≡ R) & (Q & S))
DeMorgan’s Law
3.
1. ~~ (A & B) v (Q ⊃ R)
2. (A & B) v (Q ⊃ R)
Double Negation
5.
1. ((P & Q) ⊃ R)
2. ~R ⊃ ~(P & Q)
Transposition
7.
1. (R & Z)
2. (R & Z) v (R & Z)
Tautology
9.
1. (Q v (R & S))
2. ((Q v R) & (Q v S))
Distribution
3.
2. It is certainly raining.
5.
1. If it drops below 0° C, either the roads will become icy or the water line will freeze.
2. It will drop below 0° C.
3. So, either the roads will become icy or the water line will freeze.
b.
If you’re worried that this inference is a good one because only one team can win, remember
that if only one team could win, then the teams are playing each other and the “or” is
exclusive. But we cannot assume the “or” is exclusive unless it is explicitly stated. In this
case, the two teams could be playing different opponents or in different tournaments.
Why is this fallacious? Because the road’s being wet is one reason you won’t have good
traction, but it is certainly not the only reason. There may be ice on the road, or your tires may
be bald.
This is fallacious because, even though the bar’s proximity is one reason it will be safe to
drive home, there may be others. For instance, the driver is not drunk, the roads are not
slippery, it is not late at night, etc.
Strengthening the defense may be a necessary condition for winning, but it is certainly not
sufficient. You still have to play the game, your team members have to be in good shape, you
have to outplay the other team, etc.
9. Affirming the disjunct
2. She is honest.
Notice how the inclusive “or” helps us here. They could break up for a number of reasons. If
she is not honest, they will break up. That would be the valid conclusion of a disjunctive
syllogism. But if she is honest, who knows what will happen? They may break up anyway,
perhaps she was too honest; perhaps what she was honest about is a reason to break up.
Maybe being honest increases the chances they will stay together, but it doesn’t guarantee it.
5. You have to decide between looking for work after college or taking a
year after college to backpack the Appalachian Trail (which takes
between 4 and 7 months).
“Gap” years are usually the years between high school and
college when you’re not working. But some college
students take a year or so after college to do something
momentous: backpack across Europe, work for the Peace
Corps, or hike the Appalachian Trail. If you take a year
off, you will lose whatever money and experience you
might have obtained working, and on the trail you will
also risk some bodily injury. On the other hand, hiking the
Appalachian Trail is a monumental achievement, and one
you are unlikely to have the opportunity to attain (it takes
about 6 months). Plus, there are no guarantees you will get
a job right after college. Some people are out of work for 6
months to a year.
In order to assign values, let’s assume you really want to
hike the Trail and you’re not terribly concerned about the
money you will miss.
Having meaningful experiences: V(+50)
Being financially stable: V(+25)
Hiking and Meaningful Experiences: P(.7)
Hiking and Financially Stable: P(.3)
Looking for a Job and Meaningful Experiences: P(.4)
Looking for a Job and Financially Stable: P(.6)
Taking a year off: ((P(.7) x V(+50)) + (P(.3) x V(+25))) =
+42.5
Not taking a year off: ((P(.4) x V(+50)) + (P(.6) x V(+25))) =
+35
Notice there aren’t any negatives in this calculation. We’ve
figured those into the values up front. Meaningful
experiences come with some risk (risk of injury, risk of
financial hardship). And being financially stable usually
comes with a lot of drudgery. Both numbers would be
higher if not for these negatives. But here we are assuming
the positives outweigh the negatives in both cases. From
this calculation, taking the year off is the better option for
you.
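If you would like to check the arithmetic, here is a minimal Python sketch of the calculation above (the values and probabilities are just the numbers we stipulated for this scenario):

# Stipulated values from the example above.
V_MEANINGFUL = 50  # having meaningful experiences
V_STABLE = 25      # being financially stable

def expected_value(p_meaningful, p_stable):
    # Weight each outcome's value by its probability and sum.
    return p_meaningful * V_MEANINGFUL + p_stable * V_STABLE

print(expected_value(0.7, 0.3))  # taking a year off: 42.5
print(expected_value(0.4, 0.6))  # looking for a job: 35.0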
Chapter 8 – Generalization, Analogy, and Causation
1.
2. Blake is a politician.
3.
2. Therefore, this time you probably won’t like anything you’ve tried.
5.
3. Therefore, probably the next bit of sugar we heat will turn black.
7.
1. Every time you have pulled the lever on the slot machine, you’ve lost.
2. So, the next time you pull the lever you are likely to lose.
9.
2. Every girl I met at the bar so far tonight has snubbed me.
5. To find out how well Georgians think the Georgia governor is doing
in office, researchers polled 90% of the population of the largely
Republican Rabun County.
This sample is biased. However many people researchers polled,
they all come from a single, largely Republican county, which
is not representative of Georgians as a whole.
7. Tameeka says, “I’ve known two people who have visited Europe, and
they both say that Europeans hate Americans. It seems that people
outside the U.S. do not like us.”
This sample is not proportionate. Two people’s experiences
with Europeans are not enough to draw a conclusion about
what all Europeans think of Americans.
Also, the survey is invalid. We don’t know what it was about
those encounters that led the two travelers to think Europeans hate
Americans. Were they treated poorly? Did those
Europeans say they do not like Americans or American
tourists or American politics or American education, etc.?
Is it possible that those two people were acting in an
obnoxious way, which made the Europeans they
encountered more hostile than they might have been?
1. “Every time I take this new VitaMix cold medicine, my cold goes
away. Therefore, I know it works.”
5. After watching Kareem win three tennis sets in a row, the recruiter is
sure that Kareem will be a good fit for his all-star team.
7. After acing her first exam, Simone was shocked to discover that she
only got an 85 on the second. She concludes that she must be getting
dumber.
Regression fallacy
1. Bear paw prints have five toe marks, five claw marks, and an oblong-shaped
pad mark.
2. This paw print has five toe marks, five claw marks, and an oblong-shaped
pad mark.
3. Therefore, this paw print was probably made by a bear.
3.
1. The jeans I’m wearing are Gap brand, they are “classic fit,” they were made in Indonesia,
and they fit great.
2. This pair of jeans is Gap brand, “classic fit,” and was made in Indonesia.
3. Therefore, this pair of jeans will probably fit great.
5.
1. At the first crime scene, the door was kicked in and a playing card was left on the victim.
2. At this crime scene, the door was kicked in and there is a playing card
on the victim.
3. Therefore, the person who committed this crime is likely the same person who
committed the last crime.
7. Everything I have read about this pickup truck tells me it is reliable and comfortable. There
is a red one at the dealer. I want it because it will likely be reliable and comfortable.
9. This plant looks just like the edible plant in this guidebook. Therefore, this plant is
probably the same type of plant that is mentioned in the guidebook.
3. The sun rises every morning because the Earth rotates on its axis
once every twenty-four hours. As it turns toward the sun, we
experience what we colloquially call “the sun’s rising.”
5. He ran off the road because the cold medicine made him drowsy.
Running off the road caused him to hit the lamppost. The lamppost
caused the head trauma. Thus, driving while on cold medicine can
lead to serious injuries.
5. “I see the same woman in the elevator every morning. And we take
the same bus into town. Fate must be trying to put us together.”
Mistaking coincidence for causation. Even though “every
morning” is mentioned, this is not a mistake in temporal
order. The events occur at the same time. And given that
our arguer seems astute enough not to think that arriving
at the station causes the woman to show up, it is not likely
to be mistaking correlation for causation; he or she is
thinking in terms of a common cause.
But the common cause cited is “fate” (a remnant of the
ancient Greek belief in the Fates (Moirai), which directed
the destinies of both gods and humans). Is this the most
plausible common cause? The mistake is likely a simple
misassignment of the cause (see also false cause fallacy,
Chapter 10). The coincidence does not allow us to draw a
strong inference about the common cause. It could be that
both people live in the same building and have jobs during
normal business hours. These are coincidences. It could be
that she is stalking our arguer (which is a bit creepy),
which is not mere coincidence, but neither is it a happy
conclusion. The author seems arbitrarily to choose fate as
the cause, but this is unjustified by the evidence.
Chapter 9 – Scientific experiments and inference to the
best explanation
3. “Many people have allergic reactions. I bet they are all caused by
eating shellfish.”
O: Many people have allergic reactions.
H: Shellfish causes allergic reactions.
I: Most people who eat shellfish will have allergic reactions,
while most people who don’t eat shellfish won’t.
If shellfish causes allergic reactions, then most people who
eat shellfish will have allergic reactions, while most people
who don’t eat shellfish won’t.
C. Short answer.
5. Based on your understanding of this chapter and the last, why are
experiments so important for causal arguments?
1. You get sick after eating lobster for the first time and conclude that it
probably was the lobster.
3. Susan has to weigh her cat at the vet, but the cat won’t sit still on the
scale by herself. So, the nurse records Susan’s weight first, which is
120 pounds. Then she has Susan and her cat step on the scale, notes
that the scale now reads 130 pounds, and records the cat’s weight as
ten pounds. Which of Mill’s methods did the nurse utilize?
Method of Residues. The cat’s weight is the ten pounds left
over after subtracting Susan’s weight from the combined reading
(130 − 120 = 10). It is the residue left after the experiment.
5. Zoe sneezed every time she went into the basement. Her parents tried
to figure out what was causing it by vacuuming, dusting, and
scrubbing the floors, in various combinations, and having her go in
the basement afterward. Zoe still sneezed, no matter if the basement
was: vacuumed, but not dusted or scrubbed; dusted, but not
vacuumed or scrubbed; scrubbed but not vacuumed or dusted;
vacuumed and dusted, but not scrubbed; vacuumed and scrubbed, but
not dusted; dusted and scrubbed, but not vacuumed; vacuumed,
dusted, and scrubbed. One thing that stayed the same throughout the
vacuuming, dusting, and scrubbing events, however, was that the
fabric softener sheets (which gave off a strong lilac smell) were
present every time Zoe went into the basement. Zoe’s parents then
removed the fabric softener sheets and sent Zoe into the basement.
Finally, she stopped sneezing! They put the fabric softener sheets
back, and guess what happened? She sneezed again. They have since
stopped using the fabric softener sheets and Zoe no longer sneezes
when she goes into the basement. So, from this whole ordeal, Zoe and
her parents reasoned that the fabric softener sheets were what caused
the sneezing.
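Joint Method of Agreement and Difference. The factor common to every sneezing episode (agreement) is also the one whose removal stopped the sneezing (difference). The bookkeeping behind this method can be sketched in a few lines of Python (our own illustration; the trial data is a simplified stand-in for the story above):

# Each trial pairs the set of factors present with whether Zoe sneezed.
trials = [
    ({"vacuumed", "sheets"}, True),
    ({"dusted", "sheets"}, True),
    ({"scrubbed", "sheets"}, True),
    ({"vacuumed", "dusted", "scrubbed", "sheets"}, True),
    ({"vacuumed", "dusted", "scrubbed"}, False),  # sheets removed
    ({"sheets"}, True),                           # sheets returned
]

# Agreement: factors present in every case where the effect occurred.
always_present = set.intersection(*(f for f, sneezed in trials if sneezed))
# Difference: rule out factors also present when the effect was absent.
candidates = always_present - set.union(*(f for f, sneezed in trials if not sneezed))
print(candidates)  # {'sheets'}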
1. “I suddenly feel sick after eating at that restaurant. How could I tell if
it was something I ate?”
5. “When I visit some people, I get really hungry, when I visit others I
don’t. What might cause that?”
5. My boyfriend just freaked out when I asked him about his sister.
Plausible: He has a bad relationship with his sister.
This is plausible because bad sibling relationships are
common (conservative); it is independently testable and
fecund; and it has some explanatory scope because it
might explain many of his reactions related to his sister.
Implausible: He has a sexual relationship with his sister.
This is implausible because incestuous relationships are rare
(not conservative); it is not simple (there would have to be
a complex set of social factors for this to happen); and it is
unlikely to be independently testable if he or his sister is
unwilling to talk about it.
3. Observation: “Hey, these two pieces of steel get warm when you
rub them together quickly. Why is that?”
Explanation A: There is a liquid-like substance called “caloric”
that is warm. When an object has more caloric it is warmer
than when it has less. Caloric flows from warmer objects to
cooler just as smoke dissipates into a room. When you rub
two pieces of steel together, the caloric from your body flows
into the steel.
Explanation B: Objects are made of molecules. Heat is a
function of the speed at which molecules in an object are
moving. If the molecules move faster, the object becomes
warmer; if the molecules slow down, the object becomes
cooler. Rubbing two metal pieces together quickly speeds up
the molecules in the metal, thereby making it warmer.
Both explanations are consistent with our experience of heat,
and both explain heat widely and in depth. Explanation B
is simpler in that it relies only on the physical elements we
already believe make up objects and does not require the
extra substance “caloric.” Independent testability and
fecundity will be the deciding factors once we construct
some experiments.
A. Short answer
modus tollens
1. “What an idiot! Mark never reads and he plays video games all day
long. He is not qualified to advise the state’s finance committee.”
ad hominem, abusive
slippery slope
false cause
9. “Every swan I’ve ever seen has been white—in books, on TV, in the
movies, on the Internet—so all swans are white.”
hasty generalization
13. “You’ll support what we’re saying here, right? You wouldn’t want
your windows broken, would you?”
Addition: a valid rule of inference that states that, for any claim you know
(or assume) to be true, you may legitimately disjoin any other claim to it
without changing the original claim’s truth value; the result is a true
disjunctive claim (not to be confused with conjunction).
Ambiguity: the property of a claim that allows it to express more than one
clear meaning.
Lexical Ambiguity: a claim in which one or more words in the claim has
multiple clear meanings. For example, the word bank in the claim, “I
went to the bank.”
Bi-conditional: a claim using the “if and only if” (or ≡) operator in
propositional logic. For example: “Knives are placed in this drawer if
and only if they are sharpened.”
Big Four Graduate School Entrance Exams: Graduate Record
Examinations (GRE), Law School Admission Test (LSAT), Graduate
Management Admission Test (GMAT), and Medical College Admission
Test (MCAT).
Cascade Logic: a key feature of conspiracy theories that starts with raising
suspicion based on common sense. If one piece of common sense holds,
then another closely related suspicion is raised. The accumulated
suspicions, reinforced by multiple voices, suddenly make a very
implausible claim seem likely to be true.
Conjunction: a claim whose truth value is determined by the “and” (or &)
operator.
Currency: refers to how recent a study is, where timelier (i.e., more
recent) findings are valued as more likely to be accurate or reliable.
There are exceptions to this heuristic, such as with historical findings.
Purpose: refers to the reason why the information was either published
or presented in the way that it was.
Descriptive Claim: a claim that expresses the way the world is or was or
could be or could have been, as in a report or explanation.
Destructive Dilemma: a valid rule of inference that states that, if you know
two conditionals are true, and you know that at least one of the
consequents of those conditionals is false (you know its negation), then
you know at least one of the antecedents of those conditionals is false
(you know its negation); example: If you know “If A then B” and “If P
then Q,” and you know “Either not-B or not-Q,” then you can derive,
“Either not-A or not-P.”
Direct Evidence: evidence that a claim is true or false that you as a critical
thinker have immediate access to, for instance, seeing that something is
red or round or recognizing that a particular claim follows from a
mathematical theorem. (See also Evidence and Indirect Evidence)
Distributed Term: in categorical logic, the term that refers to all members
of a class.
Double negation: a valid rule of replacement that states that any claim, P, is
truth-functionally equivalent to the complex claim ~~P (not-not P).
Hypothetical Syllogism: a valid rule of inference that states that, for any
two conditionals, if the antecedent of one is the consequent of the other,
then you can derive a conditional, where the antecedent of the first
conditional entails the consequent of the second conditional; If you
know “If A then B,” and if you know “If B then C,” then you can
derive, “If A then C.”
Simplicity: an explanatory virtue that says that, for any two hypotheses that
purport to explain the same observation, the explanation that invokes
the fewest laws, entities, or explanatory principles is more
likely to be true.
Mental Attitude: a feeling about the world that helps to inform our
reaction to it.
Middle Term: the term that is the subject or predicate in both of the
premises of the syllogism, but not in the conclusion. For example, B is
the middle term in the following syllogism: All A are B. All B are C.
Therefore, All A are C.
Mill’s Methods: informal experiments devised by philosopher John Stuart
Mill (1806–1873), namely, Method of Agreement, Method of
Difference, Joint Method of Agreement and Difference, Method of
Residues, and Method of Concomitant Variation.
Old School Strategy: the classic strategy for determining which of several
competing claims are justified that involves various search strategies,
identifying reputable sources, and comparing and contrasting reasons
for accepting a belief; often useful when trying to distinguish real news
from fake or misleading news.
Outlier: a data point that does not conform to the statistical pattern
(correlation line) in a correlation.
Principle of Charity: a principle which states that one should always begin
an argument by giving a person the benefit of the doubt.
Reiteration: A valid rule of replacement that states that, from any claim, P,
we can derive (or conclude) P at any point in an argument without loss
of truth value.
State of Affairs: the way the world is or was or could be or could have
been or is not.
Truth Table: a tool for expressing all the possible truth values of a claim,
whether simple or complex.
Vagueness: the quality of a word whereby it has a clear meaning, but not
precisely defined truth conditions.
enthymeme
evidence
see also direct evidence and indirect evidence
direct
emotions as
indirect
exclusive or
existential assumption
expected value
experiment
see also study for different types of experiments
evidence and
formal
informal
philosophical (thought)
scientific
explanation
conservatism and
explanatory depth and
explanatory scope and
fecundity and
independent testability and
inference to the best
simplicity and
underdetermination and
virtues of
explanatory depth
explanatory scope
extraneous material
language
artificial
formal
informal
natural
lexical definition
obversion
old school strategy
controversial issues
emotions
goal of
insiders
national news sites
overlap
simple search
variety of sources
writing/talking
operator (logical)
major
types of
outlier
parody
see also fake news
photo and video manipulation
post-truth (or post-fact)
practical reason
predicate
premise
circularity and
indicating words and
prescriptive (normative) claim
principle of charity
probability
conditional
cost/benefit analysis and
dependent
epistemic
objective
problem of induction and
subjective (credence)
problem of induction
proof
conditional
indirect
truth-table method of validity and
propaganda
proposition
see also claim
propositional logic
prudential argument
underdetermination
undetermined
Copyright © Jamie Carlin Watson, Robert Arp and Skyler King, 2024
Jamie Carlin Watson, Robert Arp, and Skyler King have asserted their right under the
Copyright, Designs and Patents Act, 1988, to be identified as Authors of this work.
For legal purposes the Acknowledgments constitute an extension of this copyright page.
All rights reserved. No part of this publication may be reproduced or transmitted in any form
or by any means, electronic or mechanical, including photocopying, recording, or any
information storage or retrieval system, without prior permission in writing from the
publishers.
Bloomsbury Publishing Plc does not have any control over, or responsibility for, any third-
party websites referred to or in this book. All internet addresses given in this book were
correct at the time of going to press. The author and publisher regret any inconvenience
caused if addresses have changed or sites have ceased to exist, but can accept no
responsibility for any such changes.
A catalogue record for this book is available from the British Library.
To find out more about our authors and books visit www.bloomsbury.com and sign up for
our newsletters.