More Phil 210 Notes

The document discusses deductive arguments and logical reasoning. It defines key terms like statements, premises, conclusions, validity, soundness, and different types of arguments. It also covers logical connectives like "and", "or", conditionals, and fallacies. Different valid and invalid argument forms involving conditionals, disjunctions, and conjunctions are presented. Necessary and sufficient conditions are also explained. The overall document provides an introduction to deductive logic and reasoning.

Uploaded by

Kost4060

Lesson 1: Deductive Argument

Statement, Assertion, Proposition


A statement (assertion, proposition, claim) is anything that can either be true or false.

For example:

“Dave is tall.”

“Dave should stay in school.”

A sentence like “Dave, pass the salt.” is not an assertion. It makes no sense to say this is true (even
if he does what is asked).

Two Definitions of an Argument


The book gives two definitions of an argument. Neither of these definitions identifies
arguments with heated disputes between two people. This sense of argument is not what
we are interested in for this course.

Definition 1: An argument is something given by a particular speaker, in a given context, in order to convince an audience of a certain point.

Definition 2: This definition is more idealized, but often very helpful for understanding arguments. An argument is a series of statements (premises) that are intended to lend support to a conclusion.

Validity
An argument is valid if it is not possible for the premises to all be true and the conclusion
false.

If Stephen Harper is a fish, then he spends his life under water.


He is a fish.
So he spends his life under water.

In the example above, the premises imply the conclusion (it is valid). But there is an
obvious problem with the argument. Not all of the premises are true (Stephen Harper is
not a fish). Often, we are interested in more than validity. We are interested in soundness.

Soundness
An argument is sound if it meets two conditions:
1) It is valid.
2) All of its premises are true.
 
Note on Terminology
▪ Validity and soundness apply to arguments (not to assertions).
▪ Truth and falsehood apply to assertions (not to arguments).
▪ Premises imply a conclusion.
People infer a statement.

Types of Arguments

Linked: The premises interrelate in order to form a single case for the conclusion.

Sequential: The argument contains one or more sub-conclusions that in turn function as premises for the overall conclusion.

Convergent: The premises provide multiple distinct lines of support for the conclusion.

Recognizing Validity
The following passage has many nonsense terms, but it still has an identifiable argument
structure.

What type of argument is it?

Either snarfs do not binfundle, or they podinkle.
If snarfs binfundle, then they rangulate.
Snarfs do not podinkle.
Therefore,
Snarfs do not binfundle.
Therefore,
Snarfs do not rangulate.

This has the form:
▪ Either not-p or q.
▪ If p, then r.
▪ Not-q.
▪ Therefore, not-p.
▪ Therefore, not-r.

Is it a valid argument form? Demonstrate the argument’s invalidity using the method of counterexample: construct a parallel argument in which each of ‘snarfs’, ‘binfundle’, ‘podinkle’ and ‘rangulate’ is replaced by an English term, with the result that each of the premises (including the intermediate conclusion) is true, but the final conclusion is false.

snarfs = foxes; binfundle = lay eggs; podinkle = have scales; rangulate = reproduce

1. Either foxes do not lay eggs, or they have scales. (True)
2. If foxes lay eggs, then foxes reproduce. (True)
3. Foxes do not have scales. (True)
Therefore,
4. Foxes do not lay eggs. (True)
Therefore,
5. Foxes do not reproduce. (False)

Where does the argument go wrong? It goes wrong at the last stage. The first inference is fine; it is disjunctive syllogism using premises 1 and 3. The second (invalid) inference moves from “If p then q” and “not-p” to “not-q”. In fact, this is the logical fallacy known as denying the antecedent. (If it is raining, there are clouds; it is not raining; therefore, there are no clouds. Yuck.)
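The hunt for a countermodel can also be mechanized. The following sketch (plain illustrative Python, not from the notes) enumerates every truth-value assignment to p and r and looks for one that makes the premises of the final inference true while its conclusion is false:

```python
from itertools import product

def check_final_inference():
    """Search for a countermodel to the final step: from 'if p then r'
    and 'not-p', infer 'not-r'. A countermodel is an assignment that
    makes both premises true and the conclusion false."""
    for p, r in product([True, False], repeat=2):
        premises_true = ((not p) or r) and (not p)  # 'if p then r' and 'not-p'
        conclusion_true = not r
        if premises_true and not conclusion_true:
            return (p, r)   # countermodel found: the inference is invalid
    return None             # no countermodel: the inference is valid

countermodel = check_final_inference()
print(countermodel)  # (False, True)
```

The countermodel it finds, p false and r true, is exactly the fox case: foxes do not lay eggs, yet they do reproduce.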

Being Logical
▪ Does not mean being sensible.
▪ Logic in general is the study of methods of right reason.
▪ Logic in particular is a set of inference rules.
 There is more than one set, although most share some common elements.
▪ The “Laws of Thought” are not, necessarily, laws of how we actually think:

Law of Identity: p if and only if p.

Law of Non-Contradiction: Not both p and not-p.

Law of Excluded Middle: Either p or not-p.

Excluded Middle amounts to Double Negation Elimination: not-not-p = p.

Double Negation Elimination and Excluded Middle


For example:

▪ “Is it moral to nap on a Saturday afternoon?”



▪ “It’s not immoral.”

▪ “Oh, stop mincing words!”


▪ “Is that shirt definitely green?”

▪ “Well, I wouldn’t say it’s not green.”

▪ “Haven’t you heard of the Laws of Thought?”

(Cases of vagueness are often thought to count against the law of excluded middle.)

Kinds of Compound Statements

Conjunctive statement, or conjunction: A compound statement containing two sub-statements (called conjuncts), joined with the word ‘and’, or near-synonyms like ‘as well as’. A conjunction is true if and only if all of its conjuncts are true.

Disjunctive statement, or disjunction: A compound statement containing two sub-statements (called disjuncts), joined with the word ‘or’, or near-equivalents like ‘alternatively’. A disjunction is true if and only if at least one of its disjuncts is true.
▪ Disjunction (‘or’) can be understood inclusively or exclusively.
▪ Inclusive ‘or’: at least one of the listed disjuncts is true. (Hence the inclusive
disjunction is also true if both its disjuncts are true.)
▪ Exclusive ‘or’: one and only one of the disjuncts is true.
For most purposes, it is best to treat ‘or’ inclusively, as far as the meaning of the word
itself, and to regard exclusiveness as arising from implicature.
Some Valid Disjunctive Argument Forms

Disjunctive syllogism:

1. P or Q
2. Not Q
Therefore,
3. P

Constructive dilemma:

1.  P or Q
2.  If P then R
3.  If Q then S
Therefore,
4.  R or S
The basic difference between disjunctive and conjunctive statements is that it is easier for a disjunctive statement to be true: a disjunctive statement is true provided at least one of its disjuncts is true, while a conjunctive statement is true only if all of its conjuncts are true.

Conditional Statements
i) Basic conditional:

If P then Q.
P is the antecedent; Q is the consequent.

 
The conditional is false when P is true but Q is false, and is true in all other cases.
 
ii) Subjunctive conditionals:

If P were to be true
Then Q would be true

Often treated similarly to basic conditionals, but there are some differences. The following sort of inference fails subjunctively:

If P then Q
If Q then R
Therefore, if P then R
Some Valid Conditional Argument Forms

Modus ponens:

If P then Q
P
Therefore, Q

Modus tollens:

If P then Q
Not Q
Therefore, not P

Invalid Conditional Forms


These both get their names from what the second premise does.

Denying the antecedent:

If P then Q
Not P
Therefore, not Q

Affirming the consequent:

If P then Q
Q
Therefore, P
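Because the validity of these two-variable forms depends only on truth values, each form can be checked by brute force over the four possible assignments. The sketch below (illustrative Python, not part of the notes) confirms that modus ponens and modus tollens are valid while the two fallacious forms are not:

```python
from itertools import product

def is_valid(argument):
    """An argument form is valid iff no assignment of truth values to P
    and Q makes every premise true and the conclusion false."""
    premises, conclusion = argument
    return not any(
        all(prem(p, q) for prem in premises) and not conclusion(p, q)
        for p, q in product([True, False], repeat=2)
    )

cond = lambda p, q: (not p) or q  # material conditional 'if P then Q'

forms = {
    "modus ponens":             ([cond, lambda p, q: p],     lambda p, q: q),
    "modus tollens":            ([cond, lambda p, q: not q], lambda p, q: not p),
    "denying the antecedent":   ([cond, lambda p, q: not p], lambda p, q: not q),
    "affirming the consequent": ([cond, lambda p, q: q],     lambda p, q: p),
}

for name, form in forms.items():
    print(f"{name}: {'valid' if is_valid(form) else 'invalid'}")
```

Running it prints the first two forms as valid and the last two as invalid, matching the classification above.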

Necessary and Sufficient Conditions


The concepts of necessary and sufficient conditions are very important.
 
To say A is necessary for B is to say you cannot have B without A.

For example:

- Being over 5 feet tall is necessary for being over 6 feet tall.

To say A is sufficient for B is to say that if you have A, then you also must have B.

For example:

- Being over 6 feet tall is sufficient for being over 5 feet tall.

(You may notice, as demonstrated in the examples, that if A is necessary for B, then B is
sufficient for A.)
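The height example can be checked mechanically. In this sketch (illustrative Python; the centimetre thresholds are rough conversions of 5 and 6 feet chosen only for the example), conditions are modelled as predicates over a toy domain of heights:

```python
# Toy domain of heights in centimetres; 152 cm and 183 cm are rough
# conversions of 5 ft and 6 ft, used purely for illustration.
heights = range(100, 220)

def over_5ft(h): return h > 152
def over_6ft(h): return h > 183

def necessary_for(a, b, domain):
    """A is necessary for B: you cannot have B without A."""
    return all(a(x) for x in domain if b(x))

def sufficient_for(a, b, domain):
    """A is sufficient for B: whenever you have A, you also have B."""
    return all(b(x) for x in domain if a(x))

print(necessary_for(over_5ft, over_6ft, heights))   # True
print(sufficient_for(over_6ft, over_5ft, heights))  # True
```

Notice that the two calls test the very same condition (every over-6-feet height is also over 5 feet), which is why “A is necessary for B” and “B is sufficient for A” always stand or fall together.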

Arguments and Explanations


Many instances of communication are explicitly or implicitly arguments, even though
they may not have clearly designated premises and conclusions. But not everything is an
argument. Some utterances are merely assertions; others may resemble arguments, but
are better understood as explanations.
 
An argument gives someone reasons why they ought to believe a claim.
For example:

If I say “The car rolled down the hill because it was not parked properly,” I am explaining why the car rolled away. I am not giving you an argument to convince you that the car rolled away.
 
 
 Lesson 2: Evidence Adds Up

Deductive Reasoning
Check out the learning game Jane and reflect on this:
▪ Is this argument sound? Assuming the premises are true, this amounts to asking if the
argument is valid.
▪ Is it valid? Is it possible for the premises to all be true and the conclusion false?
▪ Of course it is possible (so the argument is not deductively valid), but it is not an example of poor reasoning.
Often we have to go beyond what is deductively implied by our premises.
 
Deductive Reasoning (Cont'd)
In deductive reasoning, the conclusion is contained in the premises.
In a deductively valid argument, the truth of the premises is sufficient for the truth of the
conclusion.
In a deductively valid argument, all of the information stated in the conclusion is already
implicit in the premises. So, in a sense, a deductive argument cannot really tell us
anything new.
Gottlob Frege (the inventor of modern logic) noticed that it is not always immediately obvious what follows deductively from a set of statements.

He expressed this containment quite poetically by saying that premises contain their conclusions “like a plant in a seed, not like a beam in a house.”
 
Nonetheless, we often need to go beyond what is strictly implied by what we already
know.
Ampliative arguments: Arguments that go beyond what is deductively implied by the premises are called ampliative arguments.

Cogency
Some invalid arguments are just really bad arguments (involving perhaps a logical
fallacy).
Some invalid arguments, however, give you some good (although not conclusive) reasons
for believing a claim. These are called cogent arguments.
Whereas validity is an absolute notion, cogency is a matter of degree.
 
Inductive Reasoning
Inductive reasoning is extremely common both in science and everyday life. It is a type
of ampliative reasoning of the form:
All cases of type A so far have had feature B,
so a new case of type A will also have feature B.

For example:

All humans up to now have been mortal.
Queen Elizabeth II is human.
So she is mortal.
Notice that the premises do not guarantee the conclusion (perhaps she is the first
immortal human), but they certainly give us good reason to believe the conclusion.

Inductive Reasoning (Cont'd)


Remember, we saw that in deductive arguments the conclusion is already, in a sense,
contained in the premises. So, adding more premises will never make a valid argument
invalid.

If conclusion P is contained in A and B (if A and B are true, P must be too), then P is contained in A, B and C – no matter what C is: if A, B and C are true, then A and B are true, so P is too.

This is not the case for ampliative arguments. We might have a good cogent argument for
the claim Q, but upon finding out more information, it might be most reasonable to
abandon the belief in Q.
 
Inductive Reasoning (Cont'd)
Consider the argument at the beginning of this section (involving Jane and her yoga mat).
It would certainly be very reasonable in this situation to believe that Jane went to a yoga class.

However, suppose we find a note in Jane’s house that says: “I went out to return the yoga mat I bought. I don’t really want to take yoga after all.”

Here we have new information that severely undermines what were quite good reasons
for thinking she was at a yoga class.

State of Information
Since new information might undermine good reasons we previously had for a claim,
whether it is reasonable to believe something depends on our total state of information.
A belief is credible if your total state of information counts as reason to believe it.
If the evidence points to something’s being true and you choose not to believe it, or if it
points to it being false and you choose to believe it anyway, then you are being
unreasonable.
 
Defeasibility
Almost all of what we believe is defeasible.
That is, for almost anything we believe it is possible that new evidence would make it
unreasonable to continue to believe it.
For example:

To take an extreme example, I believe that there are no talking dogs. In fact, if I saw what looked like a talking dog, I would think it was some kind of a trick.

But that is not to say that no amount of evidence would cause me to revise my belief. It would obviously take an enormous amount of evidence to cause me to revise it.

If I saw them every day, had long conversations with them at times, and I seemed otherwise sane, it might be reasonable to give up my belief.
To take a slightly less extreme example, I believe my mother has never robbed a bank.
But I could imagine experiences that would cause me to revise this belief.
A mark of being reasonable is being ready to change one’s mind in light of new evidence.
Abduction
We have looked at deduction and induction. Another form of reasoning is called
abduction.
The name abduction was proposed by Charles Sanders Peirce. Abduction is reasoning to the best explanation: if some claim would, if true, explain a lot of what we already know, then that is good reason for accepting the new claim.
For example:

Newton’s theory of gravitation explained falling bodies on earth, the motion of the planets (and moons) and even the tides.

That it explained such diverse phenomena was good evidence for its truth.

Of course, there are also more everyday uses of abduction:


▪ Dave wakes up and expects Jill to be home, but she is not in the house.
▪ She does not usually leave for another 45 minutes.
▪ The bag that Jill usually takes to work is still in the house.
▪ There is no coffee left, and Jill really needs her coffee in the morning.
▪ Dave may reason to the best explanation here and conclude that Jill ran out to buy
coffee at the corner store.

Context of Discovery and Context of Justification


It is important to separate the question of where the idea for a claim came from and what
the evidence for it is.
For example:

If a scientist had the original idea for a theory after taking drugs and being told the outlines of the theory by a hallucination of a floating dolphin, this affects only the context of discovery.

It does not affect the justification for the theory.

What evidence there is for or against an idea is independent of the origin of the idea.

Arguments from Analogy


Arguments from analogy are very common.
When evaluating an argument from analogy, the important question to ask is whether
there is an important disanalogy between the two cases.
That is, is there a relevant difference between the two cases that blocks the intended
conclusion from following?

Mill’s Methods

Method of agreement: If there is only one factor F in common between two situations in which effect E is observed, then it is reasonable to believe that F causes E.

Method of difference: If E is observed in situation S1, but not in S2, and the only relevant difference between them is that S1 has factor F and S2 does not, then it is reasonable to believe that F causes E.

Joint method of agreement and disagreement: If, in a range of situations, E is observed when and only when F is present, then it is reasonable to believe that F causes E.

Method of co-variation: If the degree to which E is observed is proportional to the amount of F present, then it is reasonable to conclude that F is causally related to E. (We cannot be sure whether F causes E, E causes F, or there is a common cause for both of them.)

Method of residues (this applies to cases where we cannot isolate F all on its own): If we know that G causes D (but not E), and in all cases where we see G and F we see both E and D, then we can conclude that F likely causes E.

Proving a Negative
It is often said that you cannot prove a negative. There is really no good reason for this.

First, let’s examine the notion of ‘proof’. In different areas, we have different standards of proof. In mathematics, there are quite exact standards for what counts as proof. And you can prove negatives! There is no second even prime. Nothing outside mathematics can be proved mathematically. Almost all claims about the empirical world are defeasible, but we still talk of proof here. If we say there is proof that someone lied under oath, we are using a perfectly reasonable notion of proof (even if it does not amount to mathematical certainty).

Second, let’s say I want to prove that there are no talking donkeys. What makes this hard to prove is not that it is a negative, but its general character. Saying there are no talking donkeys amounts to claiming that everything in the universe is not a talking donkey. No matter how many things I examine, one is free to question whether one of the vastly many things in the universe that I have not yet inspected is a talking donkey. If we limit the generality (but keep the negative character), it becomes easy to prove. If you claim I cannot prove that there is no talking donkey in my office right now, then you are being unreasonable.

Notice that if I make a universal claim that involves no negation, it can be just as hard to prove. If I say “all adult donkeys are larger than my thumb”, this is just as hard to prove as the claim that there are no talking donkeys.

Lesson 3: Language, Non-Language and Argument


Doing Things with Words
In Chapter 1 of your textbook, we looked at identifying the truth conditions of sentences.
It is important to be able to recognize the literal content of a sentence, but often the point
of an utterance is something other than communicating the literal content.
For example:

I might say: “I swear to tell the truth, the whole truth and nothing but the truth” (in the right context).

By saying the words, I am doing something. I am not trying to tell the audience anything. I am taking on a commitment.
Talking is not just about communicating the literal content of our sentences. There are, of
course, many purposes for language. In fact, there are clearly many situations in which
the purpose of making an assertion is to suggest the exact opposite of its literal content.

Speech Acts
Among the many things we can do with language are commanding, questioning and
asserting. Each of these has a grammatical mood associated with it.

Imperative mood: Go to the party.

Interrogative mood: Are you going to the party?

Indicative mood: You went to the party.


Speech Acts and Arguments
When we present a nicely reconstructed argument, all of the premises are explicitly stated
in the indicative mood.
Real arguments often are not like this.

Rhetorical Questions
Sometimes arguments contain a premise that is in the form of a question.
For example:

Do you care about your child’s health?
If you cared about your child’s health, you would not let them eat at McDonald’s.
So don’t bring your children to McDonald’s.

The person putting forward this argument is not wondering whether parents care about
the health of their children. Here the rhetorical question is just a stylistic variant of the
assertion “You care about the health of your children”.
Rhetorical Questions and the Burden of Proof



If someone tells you that you should buy a Mazda, you would expect them to be able to
justify their claim.
If someone says: “Why not buy a Mazda?”, they are suggesting that you should buy one.
But now the speaker is not committed to justifying the claim that you should buy one.
The speaker has placed the burden of proof on you to disprove the claim that you should
buy a Mazda.
This is not a case of a simple stylistic choice. The use of a rhetorical question here is a
questionable rhetorical move.
 
Presuppositions
In many arguments, much is not actually stated, but is presupposed.
When we say some things, we can often presuppose many others.
For example:

To take a classic example, what is presupposed by the following question:


▪ Have you stopped beating your wife?

Rhetoric
Rhetoric, for our purposes, consists of those aspects of a speaker’s language that are meant to persuade, but have no bearing on the strength of the argument.
For example:

The boxer in blue is far stronger, but the one in red is sly.

The boxer in red is sly, but the boxer in blue is far stronger.
These two sentences report the same facts. ‘But’ works like ‘and’, except that it places
emphasis on what comes after.
So they both literally say that one boxer is sly and the other is far stronger.
However, the rhetorical effect is clear in that they suggest very different things.

Word Choice
For example:

Dave has to miss bowling tonight because he is going to dinner at his mother’s house.

Dave can’t make bowling tonight because he is eating dinner with his mommy.
Both of these sentences have the same literal content (the same truth conditions). Of
course, what they suggest is very different.
 
Quantifiers, Qualifiers and Weasel Words
For example:

Dave is a good driver.

Dave is a fairly good driver.


The qualifier “fairly” here is a weasel word.
The first sentence is clearly false if Dave has one accident every year (where he is at least
partially at fault).
It is unclear what the truth conditions of the second sentence are. One might think it is
true even in the case just described.

Quantifiers
Quantifiers like “many”, “lots” and “some” can likewise make the truth conditions
unclear.
For example:

Some of the hundred new law students are women.

By our ordinary standards, this is true if at least two or three are, and not all of them are. But things are not so clear. It would be at the very least misleading to say this if 98 of the new law students were women.
 
Do lots of children like beets?
That depends on what counts as lots.

Version of the Sorites Paradox



Some people are clearly short.
If someone is short, then someone just 1/10th of a millimeter taller is also short.
 
Now imagine a long line of people starting with someone who is clearly short and ending
with someone who is clearly not short, but each person in the line is just 1/10th of a
millimeter taller than the last.
 
But these two principles imply that everyone in the line is short.
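The reasoning can be traced step by step in code. This sketch (illustrative Python, not from the notes) starts from a clearly short height and applies the tolerance premise at every 0.1 mm step; because the premise never supplies a point at which to stop saying “short”, it labels even a two-metre person short:

```python
def sorites_verdict(start_mm, end_mm, step_mm=0.1):
    """Apply the two sorites premises mechanically and return the
    verdict they force for the person at height end_mm."""
    def tolerance(short_at_n):
        # Premise 2: if person n is short, the person 0.1 mm taller is too.
        return short_at_n

    short = True      # Premise 1: the first person is clearly short.
    h = start_mm
    while h < end_mm:
        h = round(h + step_mm, 1)
        short = tolerance(short)
    return short

# Walking from 1.2 m to 2.0 m in 0.1 mm steps:
print(sorites_verdict(1200.0, 2000.0))  # True -- the paradoxical conclusion
```

The update rule is literally the identity function, which is the whole problem: the tolerance premise gives no step at which “short” could ever flip to “not short”.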

Version of the Sorites Paradox (Cont'd)


Despite the problem illustrated by the sorites paradox, vague predicates are ubiquitous.
Bertrand Russell famously said: “Everything is vague to a degree you do not realize till
you have tried to make it precise.” Just because something is vague does not mean it has
no clear cases.
For example:

If something is somewhere between red and orange, it may be a borderline case of something red.
If a predicate is vague, like ‘red’, then it has borderline cases and clear cases. But there are also cases where it is unclear whether something is a clear case or a borderline case (there can be borderline cases of borderline cases!). The line between the clear cases and the borderline cases is itself vague. Philosophers call this higher-order vagueness.
In the moral domain, things are famously vague, but again there are clear cases when
something is unjust (for instance).
Ambiguity
While vagueness involves the problem of drawing sharp boundaries for a concept,
ambiguity arises when a written or spoken sentence can be given two (or possibly more)
distinct interpretations.
For example:

The boy was standing in front of the statue of George Washington with his sister.


This is an example of syntactic ambiguity (and bad writing).
Is it a statue of George Washington and his sister, or is it the boy’s sister?
The ambiguity arises due to the (poor) construction of the sentence. Lexical ambiguity is when a string of spoken sounds or written letters has more than one possible meaning.

Dave took his pale green coat because it was lighter.
Is Dave’s choice here based on colour or on thickness (weight)?

Homonymy vs. Polysemy

If lexical ambiguity involves two meanings that are not closely related, it is called homonymy. When the two meanings are closely related, it is known as polysemy. Polysemous uses can often set up equivocations.

An equivocation is a fallacy which plays on an ambiguity.

Enthymemes
An argument that has certain implicit premises is called an enthymeme. Almost all
arguments we actually come across fall into this category.
Consider:

Jane must be sick, since she is not at school


and it is not like her to miss school for no
good reason.

Source: Microsoft Clip Art Gallery


The conclusion “Jane is sick” does follow from the premises. It is implicitly assumed that
other good reasons for Jane to be absent (such as a death in the family, etc.) do not
obtain.

Recognizing Arguments
When trying to recognize an actual argument in practice, it helps to be able to identify
premises and conclusions. These are not meant to be exhaustive lists.

Premise indicators: For, since, because

Conclusion indicators: Therefore, so, thus, hence, clearly, it follows that

Moral Arguments

Descriptive claim: A claim about how things are in the world. Example: Dave went to college.

Normative claim: A claim about how things ought to be. Example: Dave should stop smoking pot all the time.

The Naturalistic Fallacy


David Hume famously said that you cannot get an ought from an is.
That is to say, you cannot derive a normative claim from purely descriptive premises.
An argument of the form “That is how things are, so this is how things should be” is said to commit the naturalistic fallacy.
For example:

The cheetah, the fastest land animal, can only attain speeds of 120 km/h, so humans should not drive more than 120 km/h.

As it stands, this argument clearly commits the naturalistic fallacy.

Lesson 4: Fallacies

Fallacies: Familiar Patterns of Unreliable Reasoning


Logical and quasi-logical fallacies: diagnosed in terms of argument structure.
Evidential fallacies: failure to make the conclusion reasonable even in inductive or heuristic terms.
Procedural or pragmatic fallacies: a matter of how rational exchange is conducted, if it is to be reliable, fertile, etc.

Logical Fallacies: Invalid Conditional Forms


Affirming the consequent:
1. if P then Q.
2. Q.
Therefore, P.

Only if the product is faulty is the company liable for damages. The product is faulty,
though. So the company is liable for damages.
▪ Affirming the consequent.
▪ The first premise is equivalent to “If the company is liable for damages, then the
product is faulty.” So the second premise affirms the consequent.

Denying the antecedent:

1. If P then Q.
2. Not P.
Therefore, not Q.

If love hurts, then it’s not worth falling in love. Yet, all things considered, love doesn’t hurt. Thus, it is indeed worth falling in love.
▪ Denying the antecedent.

Scope Fallacy
Failing to Distinguish between, for example:
▪ Everybody likes somebody.
▪ There is somebody whom everybody likes.

For example:

“A woman gives birth in Canada every


three minutes. She must be found and
stopped!”

 Scope fallacy (which typically includes syntactic ambiguity).


 “Every three minutes in Canada, some woman or other gives birth” versus “There
exists some particular woman in Canada who gives birth every three minutes.”
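The two readings differ only in quantifier scope, and the difference can be made vivid with nested any/all over a toy relation. In this sketch (the names and the ‘likes’ relation are invented purely for illustration), everybody likes somebody, yet there is nobody whom everybody likes:

```python
# Toy domain: each person likes exactly one other person, in a cycle.
people = ["Ann", "Bob", "Cam"]
likes = {("Ann", "Bob"), ("Bob", "Cam"), ("Cam", "Ann")}

# Reading 1: for every person x there is some person y that x likes.
everybody_likes_somebody = all(
    any((x, y) in likes for y in people) for x in people
)

# Reading 2: there is one fixed person y whom every person x likes.
somebody_everybody_likes = any(
    all((x, y) in likes for x in people) for y in people
)

print(everybody_likes_somebody)  # True
print(somebody_everybody_likes)  # False
```

Swapping the order of all and any is exactly the scope shift the fallacy trades on: the same sentence parts, two very different claims.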

Equivocation
For example:
In times of war, civil rights must sometimes be
curtailed. In the Second World War, for example,
military police and the RCMP spied on many
Canadian citizens without first getting warrants.
Well, now we are locked in a war on drugs, battling
the dealers and manufacturers who would turn our
children into addicts. If that means cutting a few
corners on civil rights, well, that is a price we judged
to be worth paying in earlier conflicts.

The Second World War was an actual war. The so-called “war on drugs” is a metaphor
for attempts to reduce or eliminate the trade in illegal drugs. There is no reason to think
that any particular feature of an actual war should also be a feature of a metaphorical war.
The argument equivocates on the word “war”.
 
Evidential Fallacies
▪ Typically, evidential fallacies are deductively invalid, but are only interesting as
fallacies because they are also inductively unreliable.
▪ Some arguments, though strictly logically invalid, are legitimately viewed as at least
raising the probability of their conclusions. But even by this weaker standard,
some kinds of arguments are fallacious.
▪ Argument from Ignorance (Argument from Lack of Evidence):
 1. There is a lack of evidence that P.
Therefore, not P.
Fallacy?
 Argument from ignorance is always a logical fallacy, but that is not what makes it interesting.

If the A.I. were a fallacy merely because it is logically invalid, then it would be fallacious in the same way as the following argument:

1. There is a lot of evidence that P.
Therefore, P.

 This argument too is invalid. The premise can be true while the conclusion is
false.
 But the A.I. has a different problem.

▪ The latter argument is evidentially reasonable. Is the argument from ignorance?


▪ Only sometimes an evidential fallacy. The quality of an argument from lack of
evidence depends on how informed we are – how hard we have looked for
evidence.
▪ “Absence of evidence is not evidence of absence.” Is this generally true?
▪ True or false: There exist backwards-flying hippogriffs who solve calculus puzzles
while delivering pizza to the president of Bolivia.
 
 An argument from lack of evidence is reasonable when it can correctly be framed in the form of a modus tollens argument:

1. If P were true, then we should expect to find evidence that P by investigative means M.
2. Using investigative means M, we have been unable to find evidence that P.
Therefore,
3. There are good grounds to regard P as untrue.

The truth of (1) is crucial – requiring us to have reason to regard M as an appropriate


means of revealing whether P is true.

More Evidential Fallacies


Fallacy of appeal to vicarious authority:

Professor X said that P.


Therefore, P.

What are the standards for genuine expertise?

For example:
Jonathan Wells, somewhat famous as one of only a few relevantly credentialed PhDs who
rejects evolutionary theory in favor of theistic creationism:
▪ “Father encouraged us to set our sights high and accomplish great things. He also
spoke out against the evils in the world; among them, he frequently criticized
Darwin's theory that living things originated without God's purposeful, creative
activity…Father's words, my studies, and my prayers convinced me that I should
devote my life to destroying Darwinism…When Father chose me (along with
about a dozen other seminary graduates) to enter a PhD program in 1978, I
welcomed the opportunity to prepare myself for battle.”

Standards for evaluating expert opinion:


▪ Relevant expertise
▪ Recent expertise
▪ Reason to believe that the opinion flows from the expert knowledge rather than from
other commitments or motives (compare: Jonathan Wells example)
▪ Degree of consistency with broader expert opinion

Notice that knowing enough to evaluate expert opinion by these standards requires you to
learn something about the field – that is, independently of believing the specific opinion
in question.
 
 Fallacy of Appeal to Popular Opinion
1. Everybody believes that P.
Therefore, P.

▪ Everybody might be wrong. (It would not be the first time.)


▪ Notice the interesting case of argument from majority opinion among experts:
▪ Here too the inference from “Most relevantly defined experts say that P” to “It is true
that P” is logically invalid.
▪ But as an evidential argument, this one is much stronger than the case of a single
authority, very much stronger than the case of an irrelevant authority, and vastly
stronger than the case of mere popular opinion.
▪ In general, it is prima facie (that is, at first glance) rational to believe what the majority
of experts in a field assert.
▪ This is, of course, always defeasible.
▪ Post hoc ergo propter hoc: After this; therefore because of this.
 1. I walked under the ladder and then my nose bled.
 Therefore, walking under the ladder caused my nose to bleed.

Procedural or Dialectical Fallacies: Fallacies Related to the Practice of Arguing


Begging the Question (Circular Reasoning):

1. P
2. Q
3. R
Conclusion: Q (or P, or R) – the conclusion appears among the premises.

Usually the circularity is implicit.


 
What makes question-begging unique among fallacies?
 Simplest case: P, therefore P.
 Valid…and for any true P, also sound.
 Nature of the fallacy diagnosed in terms of argumentation as a practice.

Procedural or Dialectical Fallacies (Cont'd)


▪ Question-begging via slanting language: describing a situation in terms that already
entail or suggest the conclusion for which one is arguing.
▪ Some bleeding hearts worry that it is immoral in wartime to leave loose ammunition
and explosives in plain sight, then shoot anyone who picks them up. But believe
me, such terrorists would shoot our soldiers if they had the chance. For anyone
with common sense it is obvious that you kill the terrorists before they kill you.
▪  Persuasive definition/slanting language, and a non sequitur.
- At issue is whether someone who just picks up ammunition should be
considered a "terrorist".
- Moreover, the appeal to "common sense" is a red flag; it simply does
not follow that one should kill even a known enemy at every opportunity.
 
“Capital punishment is wrong. The fact that a court orders a murder doesn’t make it
okay.”

The term ‘murder’ just means wrongful killing. No supporter of capital punishment ever
argued that a court’s ordering a murder makes it okay; they argue that a court’s ordering
a killing, under the appropriate circumstances, does not count as murder.

By labeling capital punishment ‘murder’ rather than arguing for that label independently,
one largely assumes the truth of the conclusion in this example (that capital punishment is
wrong).
 
▪ Similarly: ‘pro-life’ versus ‘pro-choice’
▪ ‘Anti-choice’ versus ‘anti-life’
▪ The Taliban were freedom fighters when attacking Soviet forces; when attacking
American forces, they are terrorists.

▪ Straw man fallacy: Attacking an argument or view that one’s opponent does not
actually advocate.
▪ Often the result of ignoring the principle of charity.
▪ Deliberate or not, it is tempting to interpret one’s opponent as having a position easier
to refute than the actual position.
 
Metaphysical materialists believe that all that exists is material; there are no immaterial
souls or spirits. But what about the human mind? If materialists are right, human beings
are just a bunch of organic chemicals stuck together, a collection of physical particles.
But how could a pile of molecules think, or feel? The grass clippings I rake from my yard
are a pile of molecules; should I believe that a pile of grass clippings feels hope, or thinks
about its future? Materialism asks us to believe that we are just a collection of physical
parts, and that is simply not plausible.
 
 Straw Man
▪ Presumably materialists hold that all objects are materially constituted, and that some
of these material bodies have minds. There is no reason to ascribe the view that
all material bodies have minds, which is what the arguer does in the passage. So,
ridiculing this idea does not really engage materialism.
 
Ad hominem fallacy: Appealing to some trait of the arguer (usually a negative trait, real
or perceived) as grounds to reject their argument.
▪ Counts as a fallacy when the alleged trait is strictly irrelevant to the argument’s
cogency.
▪ If the arguer is offering one or more premises from personal authority, for example, it
is not a fallacious ad hominem to point out relevant facts about the arguer: e.g. a
known tendency to lie, or demonstrated failures of authority in the relevant
domain.
 The credibility of the speaker can be relevant to claims the speaker makes, but not
to the validity of the argument the speaker gives.

▪ Ad hominem is often mistaken for mere insult.


▪ In fact, the fallacy is committed when any mention is made of the arguer, including
ostensibly positive characteristics, but only when such mention is given instead of
argument.
▪ Ad hominem is just one species of genetic fallacy: the fallacy of focusing on the
origins or source of an argument or thing rather than the properties of the
argument or thing itself.
 Al Gore talks about global warming, but he lives in a big house that uses lots of
electricity. Therefore, global warming is a fib.
 Saying “bless you” after someone sneezes originated from the belief that an evil
spirit could enter you after you sneeze. So, when you say that, you are being
superstitious.
- Ad hominem is often a species of argument by appeal to emotion: inferring an
unwarranted conclusion under the cover of premises that elicit strong emotions (e.g. fear,
anger, patriotism, pride, etc.).

Partly Logical, Partly Procedural


▪ Fallacies of the complex question: Asking questions in a way that presupposes or
predetermines certain answers. Parallels to false dichotomy.
▪ Loaded question: “Yes or no: have you renounced your criminal past?”
▪ A simple answer of either yes or no seems to concede that the respondent has a
criminal past.
▪ Other fallacies of complex questions relate to behavior of disjunctions in evidential or
decision contexts.
 
Outliers: Fallacies That Do Not Fit Well into This Schema
▪ False dichotomy (or false dilemma or bifurcation): Assumption that there are only two
relevant possibilities, when in fact there may be more.
▪ Such an argument contains a false disjunctive premise.
▪ Actually a valid argument form: disjunctive syllogism.

1. A or B.
2. Not A.
Therefore, B.

 But it is important that P1 be true.


▪ “There are some problems with the germ theory of disease. Therefore, it is most
reasonable to believe that disease is caused by impure thoughts.”
▪ Implicit false dichotomy: Either disease is caused by germs, or disease is caused by
impure thoughts.
Fallacies of Composition and Division:
▪ Both fallacies are a matter of the relation between a whole and its parts.
▪ The fallacy of composition occurs when we reason: The parts each (or mostly) have
property X; therefore, the whole has property X.
▪ The fallacy of division runs in the other direction: The whole has property X;
therefore, its parts have property X.

LESSON 5: CRITICAL THINKING ABOUT NUMBERS

Reasoning with Numbers


▪ Much, perhaps most, public reasoning and persuasion using numbers
uses them in a highly representative way: some complex state of
affairs is boiled down to some number.
▪ Is that a big, small, worrisome, reassuring, surprising or intelligible
number? It depends on how well we understand the state of affairs it
represents, and on how accurate it is.
 
 Numeracy
(via Joel Best, Damned Lies and Statistics)
▪ CDF Yearbook: “The number of American children killed each year by guns has doubled
since 1950.”
▪ Claim as rewritten in a journal: “Every year since 1950, the number of American children
gunned down has doubled.”
▪ CDF claim: n deaths in 1950; therefore 2n deaths in 1994.
▪ Journal claim: n deaths in 1950; therefore n x 2^45 deaths in 1995.
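A quick calculation (a Python sketch, not part of the original notes) shows how different the two claims are: doubling once over 45 years versus doubling every year for 45 years.

```python
# If the figure doubles once between 1950 and 1995 (the CDF claim),
# the later figure is just 2x the 1950 figure.
cdf_factor = 2

# If instead it doubles EVERY year for 45 years (the journal's rewording),
# the growth factor is 2**45.
journal_factor = 2 ** 45

print(cdf_factor)      # 2
print(journal_factor)  # 35184372088832 -- tens of trillions
```

Any 1950 figure multiplied by 2^45 would vastly exceed the human population, which is why the rewording is absurd on its face.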

Interpreting Representative Numbers


▪ Percentages
▪ Percentiles
▪ Ordinal numbers
▪ Averages
In all cases, the crucial questions involve:
 Lost information.
 Misleading suggestion.
 Whether the metric, or underlying measurement, is intelligibly
mathematized.

Percentages
▪ Not (normally) an absolute number.
▪ Meaningfulness depends in part on the size of the absolute values
involved.
▪ Cannot be straightforwardly combined with other percentages, without
knowing and controlling for differences in absolute values. See the
following example:
For example:

 40% of Class 1 got an A grade and 60% of Class 2 got an A grade.


 We cannot average these and conclude that 50% of both classes combined
got an A grade.
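A sketch of why this fails, using hypothetical class sizes (the sizes are assumptions, not given in the example): the combined rate is a size-weighted average, not a simple mean of the two percentages.

```python
# Hypothetical enrolments -- assumed for illustration only.
size1, size2 = 10, 90   # Class 1 and Class 2
rate1, rate2 = 40, 60   # percent of each class that got an A

naive_average = (rate1 + rate2) / 2
true_combined = (rate1 * size1 + rate2 * size2) / (size1 + size2)

print(naive_average)   # 50.0
print(true_combined)   # 58.0 -- the class sizes matter
```

Only when the two classes are exactly the same size do the naive and weighted figures coincide.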

The tax relief is for everyone who pays income taxes – and it will help our
economy immediately: 92 million Americans will keep, this year, an average of
almost $1,000 more of their own money.
- George W. Bush, State of the Union Address, 2003

 Averages can be misleading!

▪ There are about 150 million workers in the U.S. So if 92 million workers
got to keep about $1,000 extra, that would be a huge tax break for
the majority of workers.
▪ However, the word “average” changes everything!
▪ In fact, the vast majority of people got far less than $1,000.
▪ If I give one person in a group of ten $700, then the average person in
the group gets $70. But saying that the average person gets $70
completely hides how the money is distributed.
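The arithmetic can be checked with a short sketch using Python's statistics module:

```python
from statistics import mean, median

# One person gets $700; the other nine get nothing (the example above).
payouts = [700] + [0] * 9

print(mean(payouts))    # 70
print(median(payouts))  # 0 -- the 'average person' actually got nothing
```

The mean is technically correct, yet the median shows that most people received nothing at all.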

Percentages Greater than 100%


If a camp had one hundred campers last year, what do the following claims
mean?
▪ The number of campers this year is 123% of what the number was last
year.
▪ The number of campers has increased by 123%.
▪ 123% of the campers who were there last year came back.

Percentage vs. Percentile


▪ Percentages are not raw scores (unless the data happens to be out of
100), but they are at least representations of them: 70% represents
a raw score of, say, 21/30 on a quiz.
▪ Percentile, by contrast, is a term often used to quantify values by how
they compare to other values. To score in the 90th percentile on a
test, for example, is to have a raw score better than 90% of the
class. This might involve getting either more or less than 90% on the
exam, though.
▪ Again, the open question is always: what information is hidden by a
percentile representative number? What were the absolute values?
 Percentage Changes to Household Income by Decile, 1990-2000

Highest decile, absolute terms
 1990: $161,000
 2000: $185,000

Lowest decile, absolute terms
 1990: $10,200
 2000: $10,300

Percentage change
▪ Highest: +15%; Lowest: +1%

Absolute change
▪ Highest: $24,000; Lowest: $100

So in absolute terms, the highest-decile Canadian household income increased 240 times as
much as that of the lowest decile – which is far less obvious if we just talk about the
percentage changes.
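The decile figures above can be checked with a short sketch:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return 100 * (new - old) / old

# Decile figures from the notes.
high_1990, high_2000 = 161_000, 185_000
low_1990, low_2000 = 10_200, 10_300

print(round(pct_change(high_1990, high_2000)))  # 15
print(round(pct_change(low_1990, low_2000)))    # 1

# The absolute changes tell a very different story.
print((high_2000 - high_1990) // (low_2000 - low_1990))  # 240
```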

Ordinal Rankings
Often we use ordinal numbers (1st, 2nd, 3rd and so on) to rank various things
so as to make comparison easy. It is important to know what these
rankings do and do not tell us.

Other Numerical Issues


 
Meaningless quantitative comparison:

For example:
Which is greater: the mass of the sun or the
distance between the Earth and Neptune?

Pseudo-precision:

For example:
We have overseen the creation of 87,422 jobs this month.
Q: You saw the accident, how fast would you say the car was traveling? A: About
67.873 km/h

Graphical fallacies:
Misrepresentation of quantities/rates by misleading graphs or charts.
▪ Spurious correlation
▪ Unclarity
▪ Poor or incoherent choices of units/metric
▪ The chart (from CBC.ca) shows the TSX composite index, which did not
change much on this day.
▪ At its maximum it was 11718, and at the low point it was 11623. That is
only about a 0.8% change.
▪ The chart is not meant to be misleading, but without paying careful
attention to the numbers on the left, one might think the day was
something of a rollercoaster ride.
Linear Projections
▪ A professor is teaching an advanced course and 15 students show up to
the first class. At the second class, one week later, there are 20
students.
▪ The professor, assuming 5 new students show up each week, reasons
that by the thirteenth week there will be 75 students in the course.
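The professor's (fallacious) projection, sketched in Python:

```python
# Naive linear projection: assume the week-over-week increase continues.
first_week, second_week = 15, 20
weekly_increase = second_week - first_week  # 5 new students per week

def projected_enrolment(week):
    # week 1 -> 15 students, week 2 -> 20, and so on
    return first_week + weekly_increase * (week - 1)

print(projected_enrolment(13))  # 75 -- almost certainly unrealistic
```

The projection is arithmetically valid but evidentially worthless: nothing supports the assumption that one week's increase will repeat indefinitely.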
 

The mean:
▪ An arithmetically calculated average: the sum of the values of a sample divided by
the number of elements in the sample.
▪ Usually ‘average’ means the arithmetical mean.

The median:
▪ The element in the set having the following property: half of the elements have a
greater value and half have a lesser value.
▪ When there is an even number of data points (hence no single central value), the
median is usually taken to be the mean of the two central ones.

The mode:
▪ The most frequently occurring value.
▪ 4, 4, 7, 7, 7, 23 (mode = 7)
▪ Note: There is not always only one mode.
▪ The data set 5, 5, 5, 6, 7, 8, 8, 8, 9 is bimodal (there are two modes: 5 and 8).

▪ The following pairs of data sets have the same mean:


▪ {0, 25, 75, 100}, {50, 50, 50, 50}
▪ If these were the grades in a seminar over two years, important
differences between the two classes would be lost in simply citing
the fact that the average was constant from year to year.
▪ They have the same median too.
▪ The existence of a mode in the second example, but not the first, would
at least indicate that something is different about the two cases.
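A sketch of these comparisons using Python's statistics module (multimode reports every most-frequent value):

```python
from statistics import mean, median, multimode

a = [0, 25, 75, 100]
b = [50, 50, 50, 50]

print(mean(a), mean(b))      # 50 50 -- identical means
print(median(a), median(b))  # 50.0 50.0 -- identical medians
print(multimode(a))          # [0, 25, 75, 100] -- no value repeats
print(multimode(b))          # [50] -- a genuine mode
```

Only the mode distinguishes the two data sets here; the mean and median hide the difference entirely.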

Salary Structure at the Great Western Spatula Corporation

CEO: $200,000

Executive manager: $80,000

2 regional managers: $55,000

Marketing manager: $40,000

3 marketing assistants: $31,000

4 administrative assistants: $29,500

Factory supervisor: $29,000

12 spatula makers: $21,000

Mean salary: $922,000 ÷ 25 = $36,880


▪ The CEO’s salary is an outlier, dragging the mean upward: another
general worry with mean averages.
▪ “The class did fine. The average was around 70%”
 {68, 67, 74, 67, 72} median = 68
 {58, 59, 58, 69, 100} median = 59
▪ In the case of the salaries and student grades, we have all the data at
our disposal. Still, there were ways in which one or another kind of
average could fail to be representative.
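As a check on the salary figures above, a short sketch comparing mean and median:

```python
from statistics import mean, median

# Great Western Spatula Corporation salaries, from the notes.
salaries = ([200_000] + [80_000] + [55_000] * 2 + [40_000]
            + [31_000] * 3 + [29_500] * 4 + [29_000] + [21_000] * 12)

assert len(salaries) == 25
print(mean(salaries))    # 36880 -- dragged upward by the CEO outlier
print(median(salaries))  # 29000 -- a better picture of a typical salary
```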
▪ These issues are compounded when we are only taking a sample from
some larger set of data and using conclusions about the sample to
apply to the whole.
 
LESSON 6: PROBABILITY & STATISTICS
Representative Sampling
There is an average height of Canadians, but determining that height
involves taking a (relatively small) sample of Canadians and determining
their average height.
▪ How do we get a representative sample?
▪ Alternatively, why should we wonder whether someone else’s claims
about an average are based on a representative sample?

Two broad ways of getting an unrepresentative sample: having a biased


selection technique and getting unlucky.
▪ Biased sampling does not entail deliberate bias: a biased technique is any means of
gathering data that tends toward an unrepresentative sample (relative to the
property being measured).
For example:

 Using a university’s alumni donations address list for a survey on past student
satisfaction.
 An e-mail survey measuring people’s level of comfort with technology.
 A Sunday morning phone survey about church-going.
 Solicitations for voluntary responses in general.

Even without a biased sampling technique, we might just get unlucky.


Surveying height, we might happen to pick a set of people who are all taller
than average or shorter than average.
▪ How do we rule out being unlucky in this way?
 By taking the largest sample we can afford to take.
 By qualifying our confidence in our conclusions according to
the likelihood of getting unlucky with a sample of the size we
chose.

Confidence and Margins of Error


▪ When we draw (non-deductive) inferences from some set of data, we
can only ever be confident in the conclusion to a degree.
▪ Significance is a measure of the confidence we are entitled to have in
our probabilistic conclusion. It is, however, also a function of how
precise a conclusion we are trying to draw.
▪ Confidence is cheap. We can always be 100% confident that the
probability of some outcome is somewhere between 0 and 1
inclusive - at the price of imprecision.
▪ The more precise we want our conclusion to be, the more data we need
in order to have high confidence in it.
▪ So when we are told the result of some sample, we need to know both
the margin of error – that is, how precise the conclusion is – and the
degree of significance.
▪ This is why poll reports have, for example, “a 3% margin of error 19
times out of 20”. Roughly, this means that if we conducted the very
same poll repeatedly, we would have .95 (19/20) probability of
getting a result within 3% (on either side) of the reported value.
▪ We could, if we wished, convert our .95 confidence into .99 confidence,
but nothing is free; we would either have to increase the margin of
error or go out and get much more data in order to do so.
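A rough simulation of what the “19 times out of 20” claim means, assuming a true support level of 50% and a sample size of about 1,067 (roughly the n that yields a +/-3% margin at 95% confidence – both figures are assumptions for illustration):

```python
import random

random.seed(0)

true_support = 0.50  # assumed true level of support
n = 1_067            # sample size giving roughly a +/-3% margin at 95% confidence
trials = 2_000       # number of repeated polls to simulate

within = 0
for _ in range(trials):
    hits = sum(random.random() < true_support for _ in range(n))
    estimate = hits / n
    if abs(estimate - true_support) <= 0.03:
        within += 1

print(within / trials)  # typically close to 0.95
```

About 19 in 20 of the simulated polls land within 3 points of the true value, matching the standard reporting convention.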
▪  So what does it mean if a poll reports a 3% difference in the popularity
of two political candidates when it has a +/-3% margin of error at
95% confidence?
 The difference is at the boundary of the margin of error.
 This does not mean that the difference is nothing.
 It does mean that we cannot be 95% confident in the difference.
▪ In short, a set of data typically permits you to be confident, to a degree,
in some statistical conclusion that is precise, to a degree.
▪ Understanding a statistical claim requires knowing both degrees. Using
fixed standards of significance is the most common way of
simplifying the interpretation of a statistical claim.
▪ Another kind of representative number: standard deviation.
▪ Roughly: the average difference between the data points and the mean.
▪ This reveals information about the distribution of the data points.
▪ Two distributions can be normal without being identical; a flatter curve
has a larger standard deviation, while a taller curve has a smaller
standard deviation.
 
 

▪ There are two broad kinds of mistake we can make in reasoning from a
confidence level: Type I errors (false positives) and Type II errors
(false negatives).
▪ In a Type I error we have a random result that looks like a significant
result.
▪ In a Type II error we have a significant result that does not get
recognized as significant (or, more strongly, is categorized as
random).
Errors in judging whether a correlation or condition exists, relative to what is
independently true (or what further investigation would reveal):

▪ Judge that the condition does not hold, and it does not hold: CORRECT
▪ Judge that the condition does not hold, but it does hold: TYPE II ERROR
▪ Judge that the condition does hold, but it does not hold: TYPE I ERROR
▪ Judge that the condition does hold, and it does hold: CORRECT
 
 In general, we can only reduce the chances of one sort of error by (1)
improving our data or (2) increasing the odds of the other sort of error.
For example:

 Ruling out legitimate voters versus allowing illegitimate voters.


 Minimizing false accusations versus increasing unreported crime.

 Reducing unnecessary treatments versus reducing undiagnosed illness.

Probability, Risk and Intuition


▪ The goal of probability theory is to know how confident we can
reasonably be about the truth of some proposition, given an
incomplete state of information.
▪ Virtually all of us are, by nature, really bad at this.
▪ The problem is not, in general, that we are bad at arithmetic. The
problem is that we are not naturally good at recognizing how various
bits of information are relevant to the truth of a proposition.
 
Monty Hall Problem
Step 1:
There are three doors, A, B & C.
Behind two of the doors are goats; behind one is a new car.
The car is placed at random behind one of the doors (all doors are equally
likely to contain the car).
You choose door A.
Step 2:
Monty says he will open one of the other two doors and reveal a goat.
He does so by opening door C.
Step 3:
Now you are given the choice of picking again.
You can keep A, or switch to B.
What would be rational for you to do?
Consider the reasoning:
 Now it is down to two doors
 Behind one is a goat and behind the other is a car
 The odds are 50-50
 I do not gain anything by switching my picks; the odds are the same
no matter what
 So I might as well stick with door A.
What would you do?
Step 4:
Many people find this reasoning absolutely clear and compelling.
But it is not.
 What are the odds that you got the car with your first pick? (before
Monty opened any doors, that is)? -> 1/3
 So what are the odds that you missed the car with your first pick? ->
2/3
Step 5:
You will lose the car by switching doors if and only if you got the car with
your first pick, the odds of which were 1/3.
But only 2 things can happen: Either you get the car or you do not get the
car.
So if you lose the car 1 time in 3 by switching, that means you win the car
2 times in 3 by switching.
So you double your odds by switching your pick.
With door A your odds of winning are 1/3; with door B they are 2/3.
Conclusion:
The math is not complicated: 1 - 1/3 = 2/3
What is tricky is being sensitive to the subtle ways that our state of
information can be changed.
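A simulation makes the 1/3 versus 2/3 split vivid (a sketch; it relies on the fact that switching wins exactly when the first pick missed the car, since Monty always reveals the other goat):

```python
import random

random.seed(1)

def play(switch, trials=10_000):
    """Simulate the Monty Hall game; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)  # car placed at random behind door 0, 1 or 2
        pick = 0                   # you always choose door A (door 0)
        # Monty opens a goat door you did not pick, so switching wins
        # exactly when your first pick missed the car.
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

print(play(switch=False))  # about 1/3
print(play(switch=True))   # about 2/3
```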

Basics of Probability
▪ Probabilities are quantified on a scale from 0 to 1.
▪ A necessary event has a probability of 1; an impossible event has a
probability of 0.
▪ Events that might or might not occur have some probability in between.
The chance of a randomly flipped fair coin coming up tails is .5, for
example.
▪ The probability of an event e can be written as ‘P(e)’.
▪ We will use "¬e" to mean ‘not-e’; that is, the event does not occur.
 
Two Basic Laws of Probability

▪ (1) 0 ≤ P(e) ≤ 1

The probability of any event has a value somewhere from 0 to 1, inclusive.

▪ (2) Where S is the set of all possible outcomes, P(S) = 1.

▪ Think of this as telling us that, necessarily, something or other happens.


Alternatively, it says that there are no outcomes outside S.
▪ If S is not well-defined, then any probabilistic calculations you might
perform using S are suspect and perhaps meaningless.
▪ Rule (2) makes it possible to perform very useful reasoning based on
what will not occur.
That is:

P(e) = 1 – P(¬e)

The probability that e occurs is 1 minus the probability that it does not
occur.
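The complement rule is handy for “at least one” questions. For example (not from the notes): the probability of rolling at least one six in four throws of a fair die.

```python
# P(at least one six in four rolls) = 1 - P(no six in any of the four rolls)
p_no_six_each_roll = 5 / 6
p_no_six_in_four = p_no_six_each_roll ** 4
p_at_least_one = 1 - p_no_six_in_four

print(round(p_at_least_one, 3))  # 0.518
```

Counting the "at least one" outcomes directly would be tedious; reasoning via what does not occur is much easier.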

For most applications, the probability of an event is given by:

P(e) = (number of relevant outcomes) ÷ (total number of possible outcomes)


▪ (It is enough to note that infinite domains need, and get, different
treatment.)
 
For example:

 On a single throw of a fair six-sided die, what is the probability of rolling a 3?

(Number of outcomes that count as being a 3) ÷ (Number of possible outcomes)
= 1/6 ≈ .167

 On a single throw of a fair six-sided die, what is the probability of rolling an even number?

(Outcomes that count as being an even number) ÷ (Number of possible outcomes)
= 3/6 = .5

Complex Events (Considering More than One Event at a Time)


▪ For disjunctive events (at least one of the events occurring) we use ∪ to mean, roughly, ‘or’.
▪ For conjoint events (all the specified events occurring) we use ∩ to mean, roughly, ‘and’.
P(A∪B) = P(A) + P(B) – P(A∩B)

The probability that either A or B occurs is the probability that A occurs plus the probability
that B occurs, minus the probability that both A and B occur.

▪ Think of the simpler case in which A and B are mutually exclusive. That
is, they cannot both occur. Then P(A∩B), the probability that they
occur together, is 0. So the last part of the equation can be dropped
for this special case. We end up with:

P(A∪B) = P(A) + P(B)

 The outcome (A∪B) occurs just in case either one of A or B occurs.


So P(A∪B) is just the probability of A plus the probability of B.
 Adding the probabilities is not only correct, but can be made
intuitive. Which is likelier: that A occurs, or that any one of A, B or C
occurs?

▪ In the more complicated case where A and B might occur together, we


need the whole formula P(A∪B) = P(A) + P(B) – P(A∩B).
▪ All that the last term means is that we should not count outcomes twice.
If A and B are not mutually exclusive, then some A-outcomes are
also B-outcomes. Starting with P(A), if we simply add P(B) we are
counting some A-outcomes a second time, namely those that are
also B-outcomes.
▪ So we subtract those overlapping cases, P(A∩B), to avoid this.
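A small sketch of inclusion-exclusion on a single die roll (the particular events here are illustrative choices, not from the notes):

```python
from fractions import Fraction

# One roll of a fair die. A = 'even number', B = 'greater than 3'.
outcomes = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def p(event):
    return Fraction(len(event), len(outcomes))

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
p_union = p(A) + p(B) - p(A & B)

print(p_union)               # 2/3
print(p(A | B) == p_union)   # True -- counting the union directly agrees
```

The overlap {4, 6} would be counted twice without the subtraction, giving the wrong answer 1.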
 

More complex cases follow the same pattern:

P(A∪B∪C) = P(A) + P(B) + P(C) – P(A∩B) – P(B∩C) – P(A∩C) + P(A∩B∩C)

For conjoint events, P(A∩B) is the probability that both events occur. How it is
calculated depends on whether the events are independent.

There are two broad kinds of cases:
1. Independent A and B: whether A occurs is not affected by whether B occurs.

P(A∩B) = P(A) x P(B)

2. Dependent A and B: the probability that A occurs is affected by B’s occurring.

P(A∩B) = P(A|B) x P(B)

‘P(A|B)’ is a conditional probability: the probability of A given B.

Plausibly, whether Venus is aligned with Neptune is independent of whether Ted


eventually suffers from lung cancer.
▪ So, P(A∩B) = P(A) x P(B): we just multiply the independent probabilities of these two
events.

By contrast, suppose we want to know the probability of the scenario in which Ted
smokes cigarettes and Ted eventually suffers from lung cancer.
 Probability that Ted suffers from lung cancer ≈ .0007
 Probability that Ted smokes ≈ .22
 If we treated these as independent events, we would just multiply the
probabilities:
 P(L∩S) = P(L) x P(S) = .0007 x .22 = .00015…or about 15 in 100,000.
 But this overlooks something important: the probabilities of having lung cancer
and of being a smoker are dependent upon each other. If one smokes, one is
much more likely to get lung cancer; and if one gets lung cancer, one is much
more likely to have smoked.
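A sketch of the two calculations. The conditional probability P(L|S) used below is an assumed figure for illustration only; the notes do not supply it.

```python
p_lung = 0.0007   # P(L): approximate lung cancer rate, from the notes
p_smoke = 0.22    # P(S): approximate smoking rate, from the notes

# Treating the events (wrongly) as independent:
p_joint_independent = p_lung * p_smoke

# A hypothetical conditional probability P(L|S) -- an assumed figure,
# not from the notes -- to illustrate the dependent calculation:
p_lung_given_smoke = 0.002
p_joint_dependent = p_lung_given_smoke * p_smoke

print(round(p_joint_independent, 6))  # 0.000154 -- about 15 in 100,000
print(round(p_joint_dependent, 5))    # 0.00044 -- roughly three times larger
```

The point is structural: once the events are dependent, multiplying the unconditional probabilities understates (or overstates) the joint probability.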

▪  The basic idea goes back to the truth-conditions of conditional


statements.
For example:

 Suppose we want to know whether S is both a fox and a mammal.

▪ Does it make a difference to know that if S is a fox then S is a mammal?

▪ This is similar to the probabilistic case where [if P then it is more


likely/less likely that Q].
▪ This is relevant to determining whether both P and Q.

▪ So we would need to find out just how much the probability of having
lung cancer increases if one smokes, or vice versa, in order to
answer the question.

▪ Notice that the dependence relation is not simply cause-and-effect.


Smoking is every bit as statistically dependent on lung cancer as the
other way around! Dependence and conditional probability are a
matter of related probabilities, not necessarily whether one factor
causes another (though of course that is one way for the
probabilities to be related).

Conditional Probability
The chances that an event will occur given that another event occurs.

P(B|A) = P(A∩B) ÷ P(A)


P(A|B) = P(A∩B) ÷ P(B)

▪ Hence the likelihood of conjoint dependent events involves conditional


probability.
 
Dependent conjoint probability:

P(A∩B) = P(A|B) x P(B)
P(A∩B) = P(B|A) x P(A)

 We multiply the probability of A by the probability of B given A (or the
probability of B by the probability of A given B; it comes out the same).
 The likelier it is that B occurs if A occurs, the closer P(A∩B) is to just
being P(A).
 The likelier it is that B does not occur if A occurs, the closer P(A∩B)
is to zero.
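A worked example of the dependent conjoint formula (a standard illustration, not from the notes): drawing two aces in a row from a 52-card deck without replacement.

```python
from fractions import Fraction

# Draw two cards without replacement from a standard 52-card deck.
p_first_ace = Fraction(4, 52)               # P(A): 4 aces among 52 cards
p_second_ace_given_first = Fraction(3, 51)  # P(B|A): one ace and one card gone

# Dependent conjoint probability: P(A and B) = P(B|A) x P(A)
p_both_aces = p_second_ace_given_first * p_first_ace

print(p_both_aces)  # 1/221
```

Multiplying the unconditional probabilities (4/52 x 4/52) would ignore the fact that the first draw changes the deck.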

▪ Conditional probabilities were already factored into the lung cancer case, since
the smoking rates and lung cancer rates for adult Canadian males were
used.
▪ So those (approximate) numbers were really the probabilities of having
lung cancer or of smoking, given that Ted is an adult Canadian
male.
▪ One of the most important and common applications of probability is to
the phenomenon of risk. How should we understand claims about
riskiness?
▪ Conditional probabilities in action.
 
“Two hundred and seventy-seven U.S. soldiers have now died in Iraq,
which means that, statistically speaking, U.S. soldiers have less of a
chance of dying from all causes in Iraq than citizens have of being
murdered in California…which is roughly the same geographical size. The
most recent statistics indicate California has more than 2,300 homicides
each year, which means about 6.6 murders each day. Meanwhile, U.S.
troops have been in Iraq for 160 days, which means they are incurring
about 1.7 deaths, including illness and accidents, each day.”

▪ There are roughly 40,000,000 Americans in California.


▪ There were roughly 150,000 Americans in Iraq at that time.
▪ .00575% of Californians are murdered each year.
▪ .42% annual death rate for Americans in Iraq.
▪ In other words, the odds of an American soldier dying in Iraq were
roughly 70 times as great as the odds of a Californian being
murdered at that time.
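The arithmetic behind these figures, sketched in Python:

```python
# Figures from the notes (approximate).
ca_population = 40_000_000
ca_homicides_per_year = 2_300

iraq_troops = 150_000
iraq_deaths = 277
days_elapsed = 160

ca_rate = ca_homicides_per_year / ca_population                # annual rate
iraq_rate = (iraq_deaths * 365 / days_elapsed) / iraq_troops   # annualized rate

print(round(100 * ca_rate, 5))     # 0.00575 (percent per year)
print(round(100 * iraq_rate, 2))   # 0.42 (percent per year)
print(round(iraq_rate / ca_rate))  # 73 -- 'roughly 70 times' the risk
```

Comparing raw death counts while ignoring the vastly different population sizes is what makes the original quoted comparison misleading.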

Hume: “Admittedly it was a crude comparison. But it was illustrative of


something.”
