L13 Bayesian Methods

Bayesian Learning

• Bayes Theorem
• MAP, ML hypotheses
• MAP learners
• Minimum description length principle
• Bayes optimal classifier
• Naïve Bayes learner
• Bayesian belief networks

CS 8751 ML & KDD Bayesian Methods 1


Two Roles for Bayesian Methods
Provide practical learning algorithms:
• Naïve Bayes learning
• Bayesian belief network learning
• Combine prior knowledge (prior probabilities)
with observed data
• Requires prior probabilities
Provide useful conceptual framework:
• Provides “gold standard” for evaluating other
learning algorithms
• Additional insight into Occam’s razor
CS 8751 ML & KDD Bayesian Methods 2
Bayes Theorem
$$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$$
• P(h) = prior probability of hypothesis h
• P(D) = prior probability of training data D
• P(h|D) = probability of h given D
• P(D|h) = probability of D given h

CS 8751 ML & KDD Bayesian Methods 3


Choosing Hypotheses
$$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$$

Generally we want the most probable hypothesis given the
training data.
Maximum a posteriori hypothesis h_MAP:

$$h_{MAP} = \arg\max_{h \in H} P(h \mid D)
         = \arg\max_{h \in H} \frac{P(D \mid h)\,P(h)}{P(D)}
         = \arg\max_{h \in H} P(D \mid h)\,P(h)$$

If we assume $P(h_i) = P(h_j)$ for all i, j, then we can further simplify and
choose the maximum likelihood (ML) hypothesis:

$$h_{ML} = \arg\max_{h_i \in H} P(D \mid h_i)$$

CS 8751 ML & KDD Bayesian Methods 4


Bayes Theorem
Does patient have cancer or not?
A patient takes a lab test and the result comes back positive.
The test returns a correct positive result in only 98% of the
cases in which the disease is actually present, and a correct
negative result in only 97% of the cases in which the
disease is not present. Furthermore, 0.8% of the entire
population have this cancer.
P(cancer) = 0.008        P(¬cancer) = 0.992
P(+ | cancer) = 0.98     P(- | cancer) = 0.02
P(+ | ¬cancer) = 0.03    P(- | ¬cancer) = 0.97

P(cancer | +) = ?
P(¬cancer | +) = ?
CS 8751 ML & KDD Bayesian Methods 5
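A worked evaluation of the two posteriors (not filled in on the original slide), applying Bayes theorem to the numbers above:

$$P(\text{cancer} \mid +) = \frac{P(+ \mid \text{cancer})\,P(\text{cancer})}{P(+)}
  = \frac{0.98 \times 0.008}{0.98 \times 0.008 + 0.03 \times 0.992}
  = \frac{0.0078}{0.0376} \approx 0.21$$

$$P(\neg\text{cancer} \mid +) = \frac{0.03 \times 0.992}{0.0376} \approx 0.79$$

So even after a positive test result, ¬cancer remains the more probable (MAP) hypothesis.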
Some Formulas for Probabilities
• Product rule: probability P(A ∧ B) of a conjunction
  of two events A and B:
  P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
• Sum rule: probability of a disjunction of two events
  A and B:
  P(A ∨ B) = P(A) + P(B) - P(A ∧ B)
• Theorem of total probability: if events A_1, …, A_n
  are mutually exclusive with $\sum_{i=1}^{n} P(A_i) = 1$, then
  $$P(B) = \sum_{i=1}^{n} P(B \mid A_i)\,P(A_i)$$

CS 8751 ML & KDD Bayesian Methods 6


Brute Force MAP Hypothesis Learner
1. For each hypothesis h in H, calculate the posterior
probability
$$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$$

2. Output the hypothesis h_MAP with the highest
posterior probability
$$h_{MAP} = \arg\max_{h \in H} P(h \mid D)$$

CS 8751 ML & KDD Bayesian Methods 7
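A minimal Python sketch of the brute-force MAP learner above, assuming a finite hypothesis space H and caller-supplied prior(h) and likelihood(D, h) functions (these names are illustrative, not from the slides):

```python
def brute_force_map(H, prior, likelihood, D):
    """Return (h_MAP, posteriors) for a finite hypothesis space H.

    prior(h) plays the role of P(h) and likelihood(D, h) of P(D|h);
    both are assumed to be supplied by the caller.
    """
    # Step 1: unnormalized posteriors P(D|h) * P(h) for every h in H
    scores = {h: likelihood(D, h) * prior(h) for h in H}
    p_D = sum(scores.values())                  # P(D) by total probability
    posteriors = {h: s / p_D for h, s in scores.items()}
    # Step 2: output the hypothesis with the highest posterior probability
    h_map = max(posteriors, key=posteriors.get)
    return h_map, posteriors
```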


Relation to Concept Learning
Consider our usual concept learning task
• instance space X, hypothesis space H, training
examples D
• consider the FindS learning algorithm (outputs the
  most specific hypothesis from the version space
  VS_{H,D})

What would Bayes rule produce as the MAP hypothesis?
Does FindS output a MAP hypothesis?
CS 8751 ML & KDD Bayesian Methods 8
Relation to Concept Learning
Assume fixed set of instances (x1,…,xm)
Assume D is the set of classifications
D = (c(x1),…,c(xm))
Choose P(D|h):
• P(D|h) = 1 if h consistent with D
• P(D|h) = 0 otherwise
Choose P(h) to be uniform distribution
• P(h) = 1/|H| for all h in H
Then

$$P(h \mid D) = \begin{cases} \dfrac{1}{|VS_{H,D}|} & \text{if } h \text{ is consistent with } D \\ 0 & \text{otherwise} \end{cases}$$
CS 8751 ML & KDD Bayesian Methods 9
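A short derivation of the result above (not spelled out on the slide), combining the choices of P(D|h) and P(h) with Bayes theorem:

$$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}
  = \frac{1 \cdot \tfrac{1}{|H|}}{\sum_{h' \in VS_{H,D}} 1 \cdot \tfrac{1}{|H|}}
  = \frac{1/|H|}{|VS_{H,D}|/|H|}
  = \frac{1}{|VS_{H,D}|} \quad \text{for } h \text{ consistent with } D,$$

and $P(h \mid D) = 0$ otherwise, since then $P(D \mid h) = 0$.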
Learning a Real Valued Function
[Figure: training examples in the (x, y) plane with the target function f, the maximum likelihood hypothesis h_ML, and the noise e on one example.]
Consider any real-valued target function f
Training examples (x_i, d_i), where d_i is a noisy training value
• d_i = f(x_i) + e_i
• e_i is a random variable (noise) drawn independently for each
  x_i according to some Gaussian distribution with mean = 0
Then the maximum likelihood hypothesis h_ML is the one that
minimizes the sum of squared errors:

$$h_{ML} = \arg\min_{h \in H} \sum_{i=1}^{m} \left(d_i - h(x_i)\right)^2$$
CS 8751 ML & KDD Bayesian Methods 10
Learning a Real Valued Function
hML arg max p ( D | h)
hH
m
arg max  p (d i | h)
hH
i 1

arg max 
m
1
e
  
1 di  h ( xi )
2 σ
2

hH
2πσ 2
i 1

Maximize natural log of this instead ...


2
1 1  d  h( xi ) 
hML arg max ln   i 
hH
2πσ 2 2 σ 
2
1  d i  h( xi ) 
arg max   
hH 2 σ 
arg max  d i  h( xi ) 
2
hH

arg mind i  h( xi ) 
2
hH
CS 8751 ML & KDD Bayesian Methods 11
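A small numerical illustration (an assumed example, not from the slides): under zero-mean Gaussian noise, a brute-force search over a grid of candidate linear hypotheses picks the same hypothesis whether we maximize the Gaussian log-likelihood or minimize the sum of squared errors:

```python
import math
import random

random.seed(0)
f = lambda x: 2.0 * x + 1.0                        # true target function (assumed)
xs = [i / 10 for i in range(20)]
ds = [f(x) + random.gauss(0.0, 0.5) for x in xs]   # noisy training values d_i

# candidate hypotheses h(x) = w1*x + w0 on a coarse grid
H = [(w1 / 10, w0 / 10) for w1 in range(0, 41) for w0 in range(0, 21)]

def sse(h):
    w1, w0 = h
    return sum((d - (w1 * x + w0)) ** 2 for x, d in zip(xs, ds))

def log_likelihood(h, sigma=0.5):
    w1, w0 = h
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - 0.5 * ((d - (w1 * x + w0)) / sigma) ** 2
               for x, d in zip(xs, ds))

h_ml  = max(H, key=log_likelihood)   # maximum likelihood hypothesis
h_sse = min(H, key=sse)              # minimum sum-of-squared-errors hypothesis
assert h_ml == h_sse                 # the two criteria select the same h
print(h_ml)
```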
Minimum Description Length Principle
Occam’s razor: prefer the shortest hypothesis
MDL: prefer the hypothesis h that minimizes
hMDL arg min LC1 (h)  LC 2 ( D | h)
hH
where LC(x) is the description length of x under
encoding C
Example:
• H = decision trees, D = training data labels
• LC1(h) is # bits to describe tree h
• LC2(D|h) is #bits to describe D given h
– Note LC2 (D|h) = 0 if examples classified perfectly by h.
Need only describe exceptions
• Hence hMDL trades off tree size for training errors
CS 8751 ML & KDD Bayesian Methods 12
Minimum Description Length Principle
hMAP arg max P ( D | h) P (h)
hH

arg max log 2 P ( D | h)  log 2 P (h)


hH

arg min  log 2 P ( D | h)  log 2 P (h) (1)


hH
Interesting fact from information theory:
The optimal (shortest expected length) code for an
event with probability p uses $-\log_2 p$ bits.
So interpret (1):
• $-\log_2 P(h)$ is the length of h under the optimal code
• $-\log_2 P(D \mid h)$ is the length of D given h under the optimal code
⇒ prefer the hypothesis that minimizes
  length(h) + length(misclassifications)
CS 8751 ML & KDD Bayesian Methods 13
Bayes Optimal Classifier
Bayes optimal classification
$$\arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j \mid h_i)\,P(h_i \mid D)$$

Example:
P(h1|D) = 0.4,  P(-|h1) = 0,  P(+|h1) = 1
P(h2|D) = 0.3,  P(-|h2) = 1,  P(+|h2) = 0
P(h3|D) = 0.3,  P(-|h3) = 1,  P(+|h3) = 0
therefore
$$\sum_{h_i \in H} P(+ \mid h_i)\,P(h_i \mid D) = 0.4$$
$$\sum_{h_i \in H} P(- \mid h_i)\,P(h_i \mid D) = 0.6$$
and
$$\arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j \mid h_i)\,P(h_i \mid D) = \,-$$
CS 8751 ML & KDD Bayesian Methods 14
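A minimal sketch of the Bayes optimal classification rule, written out for the three-hypothesis example on the slide:

```python
# Posteriors P(h_i | D) and per-hypothesis predictions P(v | h_i)
# copied from the example above.
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
p_v_given_h = {
    "h1": {"+": 1.0, "-": 0.0},
    "h2": {"+": 0.0, "-": 1.0},
    "h3": {"+": 0.0, "-": 1.0},
}

def bayes_optimal(values=("+", "-")):
    # Weight each hypothesis's vote by its posterior, then take the argmax.
    score = {v: sum(p_v_given_h[h][v] * posterior[h] for h in posterior)
             for v in values}
    return max(score, key=score.get), score

print(bayes_optimal())   # -> ('-', {'+': 0.4, '-': 0.6})
```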
Gibbs Classifier
Bayes optimal classifier provides best result, but can be
expensive if many hypotheses.
Gibbs algorithm:
1. Choose one hypothesis at random, according to P(h|D)
2. Use this to classify new instance
Surprising fact: assume target concepts are drawn at random
from H according to priors on H. Then:
E[error_Gibbs] ≤ 2 E[error_BayesOptimal]
Suppose correct, uniform prior distribution over H, then
• Pick any hypothesis from VS, with uniform probability
• Its expected error no worse than twice Bayes optimal

CS 8751 ML & KDD Bayesian Methods 15


Naïve Bayes Classifier
Along with decision trees, neural networks, and nearest
neighbor, one of the most practical learning
methods.
When to use
• Moderate or large training set available
• Attributes that describe instances are conditionally
independent given classification
Successful applications:
• Diagnosis
• Classifying text documents
CS 8751 ML & KDD Bayesian Methods 16
Naïve Bayes Classifier
Assume target function f: X → V, where each instance
x is described by attributes (a_1, a_2, …, a_n).
Most probable value of f(x) is:

$$v_{MAP} = \arg\max_{v_j \in V} P(v_j \mid a_1, a_2, \ldots, a_n)
         = \arg\max_{v_j \in V} \frac{P(a_1, a_2, \ldots, a_n \mid v_j)\,P(v_j)}{P(a_1, a_2, \ldots, a_n)}
         = \arg\max_{v_j \in V} P(a_1, a_2, \ldots, a_n \mid v_j)\,P(v_j)$$

Naïve Bayes assumption:
$$P(a_1, a_2, \ldots, a_n \mid v_j) = \prod_i P(a_i \mid v_j)$$
which gives the Naïve Bayes classifier:
$$v_{NB} = \arg\max_{v_j \in V} P(v_j) \prod_i P(a_i \mid v_j)$$
CS 8751 ML & KDD Bayesian Methods 17
Naïve Bayes Algorithm
Naive_Bayes_Learn(examples)
  For each target value v_j
    $\hat{P}(v_j)$ ← estimate $P(v_j)$
    For each attribute value a_i of each attribute a
      $\hat{P}(a_i \mid v_j)$ ← estimate $P(a_i \mid v_j)$

Classify_New_Instance(x)
  $$v_{NB} = \arg\max_{v_j \in V} \hat{P}(v_j) \prod_{a_i \in x} \hat{P}(a_i \mid v_j)$$

CS 8751 ML & KDD Bayesian Methods 18
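A minimal sketch of Naive_Bayes_Learn / Classify_New_Instance above, using raw frequency counts as the probability estimates; the data representation (attribute dictionaries paired with target values) is an assumption, not something the slides specify:

```python
from collections import Counter, defaultdict

def naive_bayes_learn(examples):
    """examples: list of (attrs_dict, target_value) pairs."""
    class_counts = Counter(v for _, v in examples)
    attr_counts = defaultdict(Counter)        # attr_counts[v][(attr, value)]
    for attrs, v in examples:
        for a, ai in attrs.items():
            attr_counts[v][(a, ai)] += 1
    n = len(examples)
    prior = {v: c / n for v, c in class_counts.items()}            # P-hat(v_j)
    cond = {v: {k: c / class_counts[v] for k, c in attr_counts[v].items()}
            for v in class_counts}                                  # P-hat(a_i | v_j)
    return prior, cond

def classify_new_instance(x, prior, cond):
    """x: attrs_dict.  Returns v_NB = argmax_v P(v) * prod_i P(a_i | v)."""
    def score(v):
        p = prior[v]
        for a, ai in x.items():
            p *= cond[v].get((a, ai), 0.0)    # unseen attribute value -> 0
        return p
    return max(prior, key=score)
```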


Naïve Bayes Example
Consider CoolCar again and new instance
(Color=Blue,Type=SUV,Doors=2,Tires=WhiteW)
Want to compute
v NB arg max P (v j ) P (ai | v j )
v j V
i

P(+)·P(Blue|+)·P(SUV|+)·P(2|+)·P(WhiteW|+) =
  5/14 · 1/5 · 2/5 · 4/5 · 3/5 ≈ 0.0137
P(-)·P(Blue|-)·P(SUV|-)·P(2|-)·P(WhiteW|-) =
  9/14 · 3/9 · 4/9 · 3/9 · 3/9 ≈ 0.0106
Since 0.0137 > 0.0106, the Naïve Bayes classification is v_NB = +.

CS 8751 ML & KDD Bayesian Methods 19


Naïve Bayes Subtleties
1. Conditional independence assumption is often
violated
$$P(a_1, a_2, \ldots, a_n \mid v_j) = \prod_i P(a_i \mid v_j)$$
• … but it works surprisingly well anyway. Note
  that you do not need the estimated posteriors to be
  correct; need only that
$$\arg\max_{v_j \in V} \hat{P}(v_j) \prod_i \hat{P}(a_i \mid v_j) = \arg\max_{v_j \in V} P(v_j)\,P(a_1, \ldots, a_n \mid v_j)$$
• see Domingos & Pazzani (1996) for analysis
• Naïve Bayes posteriors often unrealistically close
to 1 or 0
CS 8751 ML & KDD Bayesian Methods 20
Naïve Bayes Subtleties
2. What if none of the training instances with target
value v_j have attribute value a_i? Then
$$\hat{P}(a_i \mid v_j) = 0, \text{ and hence } \hat{P}(v_j) \prod_i \hat{P}(a_i \mid v_j) = 0$$
Typical solution is a Bayesian estimate for $\hat{P}(a_i \mid v_j)$:
$$\hat{P}(a_i \mid v_j) = \frac{n_c + m\,p}{n + m}$$
• n is the number of training examples for which v = v_j
• n_c is the number of examples for which v = v_j and a = a_i
• p is a prior estimate for $P(a_i \mid v_j)$
• m is the weight given to the prior (i.e., number of "virtual"
  examples)
CS 8751 ML & KDD Bayesian Methods 21
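A minimal sketch of the m-estimate above; the example numbers are illustrative only:

```python
def m_estimate(n_c, n, p, m):
    """Bayesian (m-)estimate of P(a_i | v_j): (n_c + m*p) / (n + m)."""
    return (n_c + m * p) / (n + m)

# e.g. no observed examples with this attribute value (n_c = 0), 10 training
# examples with v = v_j, uniform prior p = 1/3 over 3 attribute values,
# and m = 3 "virtual" examples:
print(m_estimate(n_c=0, n=10, p=1/3, m=3))   # 0.0769..., instead of 0
```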
Bayesian Belief Networks
Interesting because
• Naïve Bayes assumption of conditional
independence is too restrictive
• But it is intractable without some such
assumptions…
• Bayesian belief networks describe conditional
independence among subsets of variables
• allows combining prior knowledge about
  (in)dependence among variables with observed
  training data
• (also called Bayes Nets)
CS 8751 ML & KDD Bayesian Methods 22
Conditional Independence
Definition: X is conditionally independent of Y given
Z if the probability distribution governing X is
independent of the value of Y given the value of Z;
that is, if
$$(\forall x_i, y_j, z_k)\;\; P(X = x_i \mid Y = y_j, Z = z_k) = P(X = x_i \mid Z = z_k)$$
more compactly we write
P(X|Y,Z) = P(X|Z)
Example: Thunder is conditionally independent of
Rain given Lightning
P(Thunder|Rain,Lightning)=P(Thunder|Lightning)
Naïve Bayes uses conditional ind. to justify
P(X,Y|Z)=P(X|Y,Z)P(Y|Z)
=P(X|Z)P(Y|Z)
CS 8751 ML & KDD Bayesian Methods 23
Bayesian Belief Network
[Figure: directed acyclic graph over Storm, BusTourGroup, Lightning, Campfire, Thunder, and ForestFire; Storm and BusTourGroup are the immediate predecessors of Campfire.]

Conditional probability table for Campfire:

        S,B    S,¬B   ¬S,B   ¬S,¬B
  C     0.4    0.1    0.8    0.2
  ¬C    0.6    0.9    0.2    0.8

Network represents a set of conditional independence assumptions


• Each node is asserted to be conditionally independent of its
nondescendants, given its immediate predecessors
• Directed acyclic graph
CS 8751 ML & KDD Bayesian Methods 24
Bayesian Belief Network
• Represents joint probability distribution over all
variables
• e.g., P(Storm,BusTourGroup,…,ForestFire)
• in general,
$$P(y_1, \ldots, y_n) = \prod_{i=1}^{n} P(y_i \mid Parents(Y_i))$$

where Parents(Y_i) denotes the immediate
predecessors of Y_i in the graph
• so, joint distribution is fully defined by graph, plus
the P(yi|Parents(Yi))
CS 8751 ML & KDD Bayesian Methods 25
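A minimal sketch of evaluating the factored joint distribution above. The Campfire table comes from the earlier slide; the priors for Storm and BusTourGroup are made-up placeholders:

```python
parents = {
    "Storm": [], "BusTourGroup": [],
    "Campfire": ["Storm", "BusTourGroup"],
}

# cpt[node][parent_values_tuple] = P(node is True | parent values)
cpt = {
    "Storm":        {(): 0.2},                      # assumed prior
    "BusTourGroup": {(): 0.5},                      # assumed prior
    "Campfire":     {(True, True): 0.4, (True, False): 0.1,
                     (False, True): 0.8, (False, False): 0.2},
}

def joint(assignment):
    """P(y1, ..., yn) = prod_i P(y_i | Parents(Y_i))."""
    p = 1.0
    for node, value in assignment.items():
        key = tuple(assignment[par] for par in parents[node])
        p_true = cpt[node][key]
        p *= p_true if value else 1.0 - p_true
    return p

print(joint({"Storm": True, "BusTourGroup": False, "Campfire": True}))
# ≈ 0.01  (= 0.2 * 0.5 * 0.1)
```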
Inference in Bayesian Networks
How can one infer the (probabilities of) values of
one or more network variables, given observed
values of others?
• Bayes net contains all information needed
• If only one variable with unknown value, easy to
infer it
• In the general case, the problem is NP-hard
In practice, can succeed in many cases
• Exact inference methods work well for some
network structures
• Monte Carlo methods “simulate” the network
randomly to calculate approximate solutions
CS 8751 ML & KDD Bayesian Methods 26
Learning of Bayesian Networks
Several variants of this learning task
• Network structure might be known or unknown
• Training examples might provide values of all
network variables, or just some
If structure known and observe all variables
• Then it is as easy as training a Naïve Bayes classifier

CS 8751 ML & KDD Bayesian Methods 27


Learning Bayes Net
Suppose structure known, variables partially
observable
e.g., observe ForestFire, Storm, BusTourGroup,
Thunder, but not Lightning, Campfire, …
• Similar to training neural network with hidden
units
• In fact, can learn network conditional probability
tables using gradient ascent!
• Converge to network h that (locally) maximizes
P(D|h)

CS 8751 ML & KDD Bayesian Methods 28


Gradient Ascent for Bayes Nets
Let wijk denote one entry in the conditional
probability table for variable Yi in the network
wijk =P(Yi=yij|Parents(Yi)=the list uik of values)
e.g., if Yi = Campfire, then uik might be (Storm=T,
BusTourGroup=F)
Perform gradient ascent by repeatedly:
1. Update all w_ijk using the training data D:
$$w_{ijk} \leftarrow w_{ijk} + \eta \sum_{d \in D} \frac{P_h(y_{ij}, u_{ik} \mid d)}{w_{ijk}}$$
2. Then renormalize the w_ijk to assure
$$\sum_j w_{ijk} = 1, \qquad 0 \le w_{ijk} \le 1$$
CS 8751 ML & KDD Bayesian Methods 29
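A rough sketch of one gradient ascent step followed by renormalization, under two assumptions not in the slides: the CPT for a single variable Y_i is stored as a list of lists w[j][k] (rows j = values y_ij, columns k = parent configurations u_ik), and p_h_joint(j, k, d) is a caller-supplied inference routine returning $P_h(y_{ij}, u_{ik} \mid d)$:

```python
def gradient_ascent_step(w, D, p_h_joint, eta=0.01):
    """One update of the CPT entries w[j][k] (the w_ijk) for one variable Y_i."""
    n_rows, n_cols = len(w), len(w[0])
    # 1. w_ijk <- w_ijk + eta * sum_d P_h(y_ij, u_ik | d) / w_ijk
    for j in range(n_rows):
        for k in range(n_cols):
            grad = sum(p_h_joint(j, k, d) / w[j][k] for d in D)
            w[j][k] += eta * grad
    # 2. renormalize so that sum_j w_ijk = 1 for every parent configuration k
    for k in range(n_cols):
        total = sum(w[j][k] for j in range(n_rows))
        for j in range(n_rows):
            w[j][k] /= total
    return w
```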
Summary of Bayes Belief Networks
• Combine prior knowledge with observed data
• Impact of prior knowledge (when correct!) is to
lower the sample complexity
• Active research area
– Extend from Boolean to real-valued variables
– Parameterized distributions instead of tables
– Extend to first-order instead of propositional
systems
– More effective inference methods

CS 8751 ML & KDD Bayesian Methods 30
