Lecture No. 03
knowledge of the classes with new evidence gathered from data. The use of the
Bayes theorem for solving classification problems will be explained, followed
by a description of two implementations of Bayesian classifiers: naive Bayes
and the Bayesian belief network.
This question can be answered by using the well-known Bayes theorem. For
completeness, we begin with some basic definitions from probability theory.
Readers who are unfamiliar with concepts in probability may refer to Appendix
C for a brief review of this topic.
Let X and Y be a pair of random variables. Their joint probability, P(X = x, Y = y),
refers to the probability that variable X will take on the value x and variable Y
will take on the value y. A conditional probability is the probability that a
random variable will take on a particular value given that the outcome for
another random variable is known. For example, the conditional probability
P(Y = y | X = x) refers to the probability that the variable Y will take on the
value y, given that the variable X is observed to have the value x. The joint and
conditional probabilities for X and Y are related in the following way:
P(X, Y) = P(Y|X) × P(X) = P(X|Y) × P(Y).    (5.9)
Rearranging the last two expressions in Equation 5.9 leads to the following
formula, known as the Bayes theorem:

P(Y|X) = P(X|Y) P(Y) / P(X).    (5.10)
The Bayes theorem can be used to solve the prediction problem stated
at the beginning of this section. For notational convenience, let X be the
random variable that represents the team hosting the match and Y be the
random variable that represents the winner of the match. Both X and Y can
take on values from the set {0, 1}. We can summarize the information given
in the problem as follows:
P(Y = 1 | X = 1) = P(X = 1 | Y = 1) × P(Y = 1) / P(X = 1)
                 = P(X = 1 | Y = 1) × P(Y = 1) / [P(X = 1, Y = 1) + P(X = 1, Y = 0)]
                 = P(X = 1 | Y = 1) × P(Y = 1) / [P(X = 1 | Y = 1) P(Y = 1) + P(X = 1 | Y = 0) P(Y = 0)]
                 = (0.75 × 0.35) / (0.75 × 0.35 + 0.3 × 0.65)
                 = 0.5738,
where the law of total probability (see Equation C.5 on page 722) was applied
in the second line. Furthermore, P(Y = 0 | X = 1) = 1 − P(Y = 1 | X = 1) = 0.4262.
Since P(Y = 1 | X = 1) > P(Y = 0 | X = 1), Team 1 has a better
chance than Team 0 of winning the next match.
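The arithmetic above can be reproduced with a few lines of Python; the sketch below is illustrative and uses only the probabilities quoted in the example:

# Posterior probability that Team 1 wins (Y = 1) given that it hosts the match (X = 1)
p_y1 = 0.35                  # P(Y = 1): prior probability that Team 1 wins
p_y0 = 1 - p_y1              # P(Y = 0)
p_x1_given_y1 = 0.75         # P(X = 1 | Y = 1)
p_x1_given_y0 = 0.30         # P(X = 1 | Y = 0)

# Evidence P(X = 1) via the law of total probability, then Bayes' theorem
p_x1 = p_x1_given_y1 * p_y1 + p_x1_given_y0 * p_y0
p_y1_given_x1 = p_x1_given_y1 * p_y1 / p_x1
print(round(p_y1_given_x1, 4))   # 0.5738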
Before describing how the Bayes theorem can be used for classification, let
us formalize the classification problem from a statistical perspective. Let X
denote the attribute set and Y denote the class variable. If the class variable
has a non-deterministic relationship with the attributes, then we can treat
X and Y as random variables and capture their relationship probabilistically
using P(Y|X). This conditional probability is also known as the posterior
probability for Y, as opposed to its prior probability, P(Y).
During the training phase, we need to learn the posterior probabilities
P(Y|X) for every combination of X and Y based on information gathered
from the training data. By knowing these probabilities, a test record X' can
be classified by finding the class Y' that maximizes the posterior probability,
P(Y'|X').
Figure 5.9. Training set for predicting the loan default problem.
Suppose we are given a test record with the following attribute set: X =
(Home Owner = No, Marital Status = Married, Annual Income = $120K). To
classify the record, we need to compute the posterior probabilities P(Yes|X)
and P(No|X) based on information available in the training data. If P(Yes|X) >
P(No|X), then the record is classified as Yes; otherwise, it is classified as No.
Estimating the posterior probabilities accurately for every possible combination
of class label and attribute value is a difficult problem because it requires
a very large training set, even for a moderate number of attributes. The
Bayes theorem is useful because it allows us to express the posterior probability
in terms of the prior probability P(Y), the class-conditional probability
P(X|Y), and the evidence, P(X):

P(Y|X) = P(X|Y) × P(Y) / P(X).    (5.11)
When comparing the posterior probabilities for different values of Y, the de-
nominator term, P(X), is always constant, and thus, can be ignored. The
prior probability P(Y) can be easily estimated from the training set by computing
the fraction of training records that belong to each class. To estimate
the class-conditional probabilities P(X|Y), we present two implementations of
Bayesian classification methods: the naive Bayes classifier and the Bayesian
belief network. These implementations are described in Sections 5.3.3 and
5.3.5, respectively.
A naive Bayes classifier estimates the class-conditional probability by assuming
that the attributes are conditionally independent, given the class label y:

P(X | Y = y) = ∏_{i=1}^{d} P(X_i | Y = y),    (5.12)

where each attribute set X = {X_1, X_2, ..., X_d} consists of d attributes.
Conditional Independence
Before delving into the details of how a naive Bayes classifier works, let us
examine the notion of conditional independence. Let X, Y, and Z denote
three sets of random variables. The variables in X are said to be conditionally
independent of Y, given Z, if the following condition holds:

P(X | Y, Z) = P(X | Z).    (5.13)

A useful consequence of conditional independence is the following factorization
of the joint conditional probability of X and Y given Z:
P(X, Y | Z) = P(X, Y, Z) / P(Z)
            = [P(X, Y, Z) / P(Y, Z)] × [P(Y, Z) / P(Z)]
            = P(X | Y, Z) × P(Y | Z)
            = P(X | Z) × P(Y | Z),    (5.14)
where Equation 5.13 was used to obtain the last line of Equation 5.14.
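As a quick numerical illustration (a toy distribution constructed for this purpose, not taken from the text), the factorization in Equation 5.14 can be verified directly:

# Toy distribution in which X and Y are conditionally independent given Z:
# the joint is built as P(X, Y, Z) = P(X | Z) * P(Y | Z) * P(Z), so Equation 5.14 must hold.
p_z = {0: 0.6, 1: 0.4}
p_x_given_z = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # p_x_given_z[z][x]
p_y_given_z = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}   # p_y_given_z[z][y]

def joint(x, y, z):
    return p_x_given_z[z][x] * p_y_given_z[z][y] * p_z[z]

for z in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            lhs = joint(x, y, z) / p_z[z]                   # P(X, Y | Z)
            rhs = p_x_given_z[z][x] * p_y_given_z[z][y]     # P(X | Z) * P(Y | Z)
            assert abs(lhs - rhs) < 1e-12
print("Equation 5.14 holds for every combination of x, y, z")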
To classify a test record, the naive Bayes classifier computes the posterior
probability for each class Y:

P(Y | X) = P(Y) ∏_{i=1}^{d} P(X_i | Y) / P(X).    (5.15)

Since P(X) is fixed for every Y, it is sufficient to choose the class that maximizes
the numerator term, P(Y) ∏_{i=1}^{d} P(X_i | Y). In the next two subsections,
we describe several approaches for estimating the conditional probabilities
P(X_i | Y) for categorical and continuous attributes.
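To make the decision rule concrete, the sketch below estimates P(X_i | Y) for categorical attributes by relative frequencies and picks the class that maximizes P(Y) ∏_i P(X_i | Y). The attribute names and records are hypothetical, chosen only for illustration:

from collections import Counter, defaultdict

# Hypothetical categorical training data: each record is (attribute dict, class label)
records = [
    ({"owner": "No",  "status": "Married"},  "No"),
    ({"owner": "Yes", "status": "Single"},   "No"),
    ({"owner": "No",  "status": "Single"},   "Yes"),
    ({"owner": "No",  "status": "Divorced"}, "Yes"),
]

class_counts = Counter(label for _, label in records)
# value_counts[label][attribute][value] = number of class-`label` records with that value
value_counts = defaultdict(lambda: defaultdict(Counter))
for attrs, label in records:
    for a, v in attrs.items():
        value_counts[label][a][v] += 1

def predict(attrs):
    best_label, best_score = None, -1.0
    for label, n in class_counts.items():
        score = n / len(records)                  # prior P(Y)
        for a, v in attrs.items():                # product of P(X_i | Y)
            score *= value_counts[label][a][v] / n
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict({"owner": "No", "status": "Single"}))   # "Yes": 0.5 x 1 x 0.5 beats 0.5 x 0.5 x 0.5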
There are two ways to estimate the class-conditional probabilities for contin-
uous attributes in naive Bayes classifiers:
1. We can discretize each continuous attribute and then replace the con-
tinuous attribute value with its corresponding discrete interval. This
approach transforms the continuous attributes into ordinal attributes.
The conditional probability P(X_i | Y = y) is estimated by computing
the fraction of training records belonging to class y that falls within the
corresponding interval for X_i. The estimation error depends on the dis-
cretization strategy (as described in Section 2.3.6 on page 57), as well as
the number of discrete intervals. If the number of intervals is too large,
there are too few training records in each interval to provide a reliable
estimate for P(X_i | Y). On the other hand, if the number of intervals
is too small, then some intervals may aggregate records from different
classes and we may miss the correct decision boundary.
2. We can assume a certain form of probability distribution for the continuous
variable and estimate the parameters of the distribution from the
training data. A Gaussian distribution is usually chosen; it is characterized
by two parameters, the mean μ_ij and the variance σ_ij²:

P(X_i = x_i | Y = y_j) = (1 / (√(2π) σ_ij)) exp(−(x_i − μ_ij)² / (2σ_ij²)).    (5.16)

The parameter μ_ij can be estimated from the sample mean of X_i over all
training records that belong to class y_j, and σ_ij² from their sample variance.
For example, consider the Annual Income attribute of the training records
that belong to class No in Figure 5.10(a). The sample mean and variance are

x̄ = (125 + 100 + 70 + ... + 75) / 7 = 110,
s² = [(125 − 110)² + (100 − 110)² + ... + (75 − 110)²] / (7 − 1) = 2975,
s = √2975 = 54.54.
Given a test record with taxable income equal to $120K, we can compute
its class-conditional probability as follows:

P(Income = $120K | No) = (1 / (√(2π) × 54.54)) exp(−(120 − 110)² / (2 × 2975)) = 0.0072.
Strictly speaking, since Annual Income is continuous, the probability that it
takes any particular value is zero; Equation 5.16 provides a density. The quantity
actually being approximated is the probability that X_i falls within a small
interval [x_i, x_i + ε]:

P(x_i ≤ X_i ≤ x_i + ε | Y = y_j) = ∫_{x_i}^{x_i+ε} f(X_i; μ_ij, σ_ij) dX_i ≈ f(x_i; μ_ij, σ_ij) × ε.    (5.17)

Since ε is the same for every class, it cancels out when the posterior probabilities
are compared.
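For concreteness, the estimate above can be reproduced with a short Python sketch (illustrative; the class-No incomes are taken from the training set in Figure 5.10(a)):

import math

# Annual Income (in $1000s) of the seven training records with class No
incomes_no = [125, 100, 70, 120, 60, 220, 75]

n = len(incomes_no)
mean = sum(incomes_no) / n                                  # 110.0
var = sum((x - mean) ** 2 for x in incomes_no) / (n - 1)    # 2975.0
std = math.sqrt(var)                                        # 54.54...

x = 120   # Annual Income of the test record
density = math.exp(-(x - mean) ** 2 / (2 * var)) / (math.sqrt(2 * math.pi) * std)
print(round(density, 4))   # approximately 0.0072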
Consider the data set shown in Figure 5.10(a). We can compute the class-
conditional probability for each categorical attribute, along with the sample
mean and variance for the continuous attribute using the methodology de-
scribed in the previous subsections. These probabilities are summarized in
Figure 5.10(b).
To predict the class label of a test record X = (Home Owner = No, Marital
Status = Married, Income = $120K), we need to compute the posterior prob-
abilities P(No|X) and P(Yes|X). Recall from our earlier discussion that these
posterior probabilities can be estimated by computing the product between
the prior probability P(Y) and the class-conditional probabilities ∏ P(X_i|Y),
which corresponds to the numerator of the right-hand side term in Equation
5.15.
The prior probabilities of each class can be estimated by calculating the
fraction of training records that belong to each class. Since there are three
records that belong to the class Yes and seven records that belong to the class
No, P(Yes) = 3/10 = 0.3 and P(No) = 7/10 = 0.7.
Figure 5.10. The naive Bayes classifier for the loan classification problem:
(a) the training set and (b) the probabilities estimated from it:

P(Home Owner = Yes | No) = 3/7              P(Home Owner = Yes | Yes) = 0
P(Home Owner = No | No) = 4/7               P(Home Owner = No | Yes) = 1
P(Marital Status = Single | No) = 2/7       P(Marital Status = Single | Yes) = 2/3
P(Marital Status = Divorced | No) = 1/7     P(Marital Status = Divorced | Yes) = 1/3
P(Marital Status = Married | No) = 4/7      P(Marital Status = Married | Yes) = 0

For Annual Income:
If class = No: sample mean = 110, sample variance = 2975
If class = Yes: sample mean = 90, sample variance = 25
Using the information provided in Figure 5.10(b), the class-conditional
probabilities can be computed as follows:

P(X | No) = P(Home Owner = No | No) × P(Status = Married | No) × P(Income = $120K | No)
          = 4/7 × 4/7 × 0.0072 = 0.0024,
P(X | Yes) = P(Home Owner = No | Yes) × P(Status = Married | Yes) × P(Income = $120K | Yes)
           = 1 × 0 × 1.2 × 10⁻⁹ = 0.
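The same comparison can be written as a short sketch (illustrative; it simply plugs in the values from Figure 5.10(b)):

import math

def gaussian(x, mean, var):
    # Class-conditional density of a continuous attribute (Equation 5.16)
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Unnormalized posteriors: prior times the product of class-conditional probabilities
score_no = 0.7 * (4 / 7) * (4 / 7) * gaussian(120, 110, 2975)   # about 0.0016
score_yes = 0.3 * 1.0 * 0.0 * gaussian(120, 90, 25)             # exactly 0
print("No" if score_no > score_yes else "Yes")                  # the record is classified as No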
Note that P(X | Yes) = 0 because P(Marital Status = Married | Yes) = 0; a single
zero class-conditional probability wipes out the entire product. If the estimate
for the other class also vanishes, the naive Bayes classifier will not be able to
classify the record. This problem can be addressed by using the m-estimate
approach for estimating the conditional probabilities:
P(x_i | y_j) = (n_c + m p) / (n + m),    (5.18)

where n is the total number of instances from class y_j, n_c is the number of
training examples from class y_j that take on the value x_i, m is a parameter
known as the equivalent sample size, and p is a user-specified parameter. If
there is no training set available (i.e., n = 0), then P(x_i | y_j) = p. Therefore
p can be regarded as the prior probability of observing the attribute value
x_i among records with class y_j. The equivalent sample size determines the
tradeoff between the prior probability p and the observed probability n_c / n.
In the example given in the previous section, the conditional probability
P(Status = Married | Yes) = 0 because none of the training records for the
class has that particular attribute value. Using the m-estimate approach with
m = 3 and p = 1/3, the conditional probability is no longer zero:

P(Marital Status = Married | Yes) = (0 + 3 × 1/3) / (3 + 3) = 1/6.

If we assume p = 1/3 for all attributes of class Yes and p = 2/3 for all
attributes of class No, then the class-conditional probabilities for the test
record become

P(X | No) = (6/10) × (6/10) × 0.0072 = 0.0026,
P(X | Yes) = (4/6) × (1/6) × 1.2 × 10⁻⁹ = 1.3 × 10⁻¹⁰,

and the record is again classified as No because its unnormalized posterior is larger.
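A minimal sketch of the m-estimate in Equation 5.18, reproducing the calculation above (function and variable names are illustrative):

def m_estimate(n_c, n, m, p):
    # Equation 5.18: n_c = class records with the attribute value, n = class records,
    # m = equivalent sample size, p = prior probability of the value
    return (n_c + m * p) / (n + m)

# P(Marital Status = Married | Yes) with n = 3, n_c = 0, m = 3, p = 1/3
print(m_estimate(0, 3, 3, 1 / 3))    # 0.1666..., i.e., 1/6 instead of zero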
Naive Bayes classifiers generally have the following characteristics:

• They are robust to isolated noise points because such points are averaged
out when estimating conditional probabilities from data. Naive Bayes
classifiers can also handle missing values by ignoring the example during
model building and classification.
• Correlated attributes can degrade the performance of naive Bayes classifiers
because the conditional independence assumption no longer holds for such
attributes. For example, suppose P(A = 0 | Y = 0) = P(B = 0 | Y = 0) = 0.4
and P(A = 0 | Y = 1) = P(B = 0 | Y = 1) = 0.6, but B is perfectly correlated
with A when Y = 0 (i.e., B = A) and independent of A when Y = 1. For a
test record with A = 0 and B = 0, the naive Bayes classifier computes

P(Y = 0 | A = 0, B = 0) = P(A = 0 | Y = 0) P(B = 0 | Y = 0) P(Y = 0) / P(A = 0, B = 0)
                        = 0.16 × P(Y = 0) / P(A = 0, B = 0),
P(Y = 1 | A = 0, B = 0) = P(A = 0 | Y = 1) P(B = 0 | Y = 1) P(Y = 1) / P(A = 0, B = 0)
                        = 0.36 × P(Y = 1) / P(A = 0, B = 0).

If P(Y = 0) = P(Y = 1), then the naive Bayes classifier would assign
the record to class 1. However, the truth is,

P(A = 0, B = 0 | Y = 0) = P(A = 0 | Y = 0) = 0.4,

because B = A whenever Y = 0. As a result,

P(Y = 0 | A = 0, B = 0) = P(A = 0, B = 0 | Y = 0) P(Y = 0) / P(A = 0, B = 0)
                        = 0.4 × P(Y = 0) / P(A = 0, B = 0),

which is larger than that for Y = 1. The record should have been
classified as class 0.
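The arithmetic of this example can be checked with a few lines of Python (the probabilities are the ones quoted above; the perfect correlation of B with A for class 0 is the assumption that drives the example):

# Probabilities quoted in the example
p_a0_y0, p_b0_y0 = 0.4, 0.4
p_a0_y1, p_b0_y1 = 0.6, 0.6

# Naive Bayes multiplies the individual conditionals, ignoring the correlation
nb_y0 = p_a0_y0 * p_b0_y0        # 0.16
nb_y1 = p_a0_y1 * p_b0_y1        # 0.36

# True joint class-conditional probabilities: B = A when Y = 0, B independent of A when Y = 1
true_y0 = p_a0_y0                # P(A = 0, B = 0 | Y = 0) = 0.4
true_y1 = p_a0_y1 * p_b0_y1      # 0.36

# With equal priors, naive Bayes prefers class 1 (0.36 > 0.16),
# but the true probabilities favor class 0 (0.4 > 0.36).
print(nb_y0 < nb_y1, true_y0 > true_y1)   # True True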
Figure 5.11. Comparing the likelihood functions of a crocodile and an alligator
(horizontal axis: Length, in feet).
Suppose the length X of a crocodile follows a Gaussian distribution with a
mean of 15 feet, while the length of an alligator follows a Gaussian distribution
with a mean of 12 feet, both with a standard deviation of 2 feet. Their
class-conditional probabilities are

P(X | Crocodile) = (1 / (√(2π) × 2)) exp[−(1/2) ((X − 15) / 2)²],    (5.19)
P(X | Alligator) = (1 / (√(2π) × 2)) exp[−(1/2) ((X − 12) / 2)²].    (5.20)
Figure 5.11 shows a comparison between the class-conditional probabilities
for a crocodile and an alligator. Assuming that their prior probabilities are
the same, the ideal decision boundary is located at some length x̂ such that

(x̂ − 15)² = (x̂ − 12)²,

which can be solved to yield x̂ = 13.5. The decision boundary for this example
is located halfway between the two means.
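A quick numerical check of this boundary (a sketch assuming the two Gaussian likelihoods of Equations 5.19 and 5.20, with means 15 and 12 and a common standard deviation of 2):

import math

def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (math.sqrt(2 * math.pi) * std)

# With equal priors and equal variances the densities cross halfway between the means
x_hat = (15 + 12) / 2                                                  # 13.5
print(math.isclose(gaussian(x_hat, 15, 2), gaussian(x_hat, 12, 2)))    # True

# A length just above 13.5 is more likely to be a crocodile, just below an alligator
print(gaussian(14, 15, 2) > gaussian(14, 12, 2))    # True
print(gaussian(13, 15, 2) > gaussian(13, 12, 2))    # False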
Figure 5.12. Representing probabilistic relationships using directed acyclic graphs.
When the prior probabilities are different, the decision boundary shifts
toward the class with lower prior probability (see Exercise 10 on page 319).
Furthermore, the minimum error rate attainable by any classifier on the given
data can also be computed. The ideal decision boundary in the preceding
example classifies all creatures whose lengths are less than x̂ as alligators and
those whose lengths are greater than x̂ as crocodiles. The error rate of the
classifier is given by the sum of the area under the posterior probability curve
for crocodiles (from length 0 to x̂) and the area under the posterior probability
curve for alligators (from x̂ to ∞).
Model Representation
1. If a node X does not have any parents, then the table contains only the
prior probability P(X).
2. If a node X has only one parent, Y, then the table contains the condi-
tional probability P(X|Y).
3. If a node X has multiple parents, {Y₁, Y₂, ..., Y_k}, then the table contains
the conditional probability P(X | Y₁, Y₂, ..., Y_k).
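As an illustration of these three cases, a node's table can be stored as a mapping from parent-value combinations to probabilities. The representation below is a hypothetical sketch (not a library API), using values from the heart disease network of Figure 5.13 shown next:

# Hypothetical CPT representation: each node maps a tuple of parent values
# to the probability that the node takes the value Yes.
cpt_exercise = {(): 0.7}                                     # no parents: prior P(E = Yes)
cpt_heartburn = {("Healthy",): 0.2, ("Unhealthy",): 0.85}    # one parent (Diet)
cpt_heart_disease = {                                        # two parents (Exercise, Diet)
    ("Yes", "Healthy"): 0.25, ("Yes", "Unhealthy"): 0.45,
    ("No", "Healthy"): 0.55,  ("No", "Unhealthy"): 0.75,
}

def prob_yes(cpt, parent_values=()):
    # Look up P(node = Yes | parents = parent_values)
    return cpt[tuple(parent_values)]

print(prob_yes(cpt_exercise))                            # 0.7
print(prob_yes(cpt_heart_disease, ("Yes", "Healthy")))   # 0.25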
Figure 5.13. A Bayesian belief network for detecting heart disease and heartburn
in patients. The nodes are Exercise (E), Diet (D), Heart Disease (HD),
Heartburn (Hb), Chest Pain (CP), and Blood Pressure (BP); the probability
tables shown in the figure include:

P(E = Yes) = 0.7                              P(D = Healthy) = 0.25
P(HD = Yes | E = Yes, D = Healthy) = 0.25     P(HD = Yes | E = Yes, D = Unhealthy) = 0.45
P(HD = Yes | E = No, D = Healthy) = 0.55      P(HD = Yes | E = No, D = Unhealthy) = 0.75
P(Hb = Yes | D = Healthy) = 0.2               P(Hb = Yes | D = Unhealthy) = 0.85
P(CP = Yes | HD = Yes, Hb = Yes) = 0.8        P(CP = Yes | HD = Yes, Hb = No) = 0.6
P(CP = Yes | HD = No, Hb = Yes) = 0.4         P(CP = Yes | HD = No, Hb = No) = 0.1
Model Building
Model building in Bayesian networks involves two steps: (1) creating the structure
of the network, and (2) estimating the probability values in the tables
associated with each node. The network topology can be obtained by encoding
the subjective knowledge of domain experts. Algorithm 5.3 presents a
systematic procedure for inducing the topology of a Bayesian network.
Example 5.4. Consider the variables shown in Figure 5.13. After performing
Step 1, let us assume that the variables are ordered in the following way:
(E, D, HD, Hb, CP, BP). From Steps 2 to 7, starting with variable D, we
obtain the following conditional probabilities:
• P(D|E) is simplified to P(D).
Suppose we are interested in using the BBN shown in Figure 5.13 to diagnose
whether a person has heart disease. The following cases illustrate how the
diagnosis can be made under different scenarios.
Without any prior information, we can determine whether the person is likely
to have heart disease by computing the prior probabilities P(HD = Yes) and
P(HD = No). To simplify the notation, let α ∈ {Yes, No} denote the binary
values of Exercise and β ∈ {Healthy, Unhealthy} denote the binary values
of Diet.
P(HD = Yes) = Σ_α Σ_β P(HD = Yes | E = α, D = β) P(E = α, D = β)
            = Σ_α Σ_β P(HD = Yes | E = α, D = β) P(E = α) P(D = β)
            = 0.25 × 0.7 × 0.25 + 0.45 × 0.7 × 0.75 + 0.55 × 0.3 × 0.25 + 0.75 × 0.3 × 0.75
            = 0.49.
Since P(HD = No) = 1 − P(HD = Yes) = 0.51, the person has a slightly higher
chance of not getting the disease.
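The marginalization above can be reproduced from the probability tables in Figure 5.13; the sketch below is a hand-rolled calculation, not a general inference procedure:

# Prior probabilities of the parent nodes and the CPT of HD, taken from Figure 5.13
p_e = {"Yes": 0.7, "No": 0.3}                    # Exercise
p_d = {"Healthy": 0.25, "Unhealthy": 0.75}       # Diet
p_hd_yes = {                                     # P(HD = Yes | E, D)
    ("Yes", "Healthy"): 0.25, ("Yes", "Unhealthy"): 0.45,
    ("No", "Healthy"): 0.55,  ("No", "Unhealthy"): 0.75,
}

# P(HD = Yes) = sum over alpha, beta of P(HD = Yes | E = alpha, D = beta) P(E = alpha) P(D = beta)
p_hd = sum(p_hd_yes[(e, d)] * p_e[e] * p_d[d] for e in p_e for d in p_d)
print(round(p_hd, 2))   # 0.49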
If the person has high blood pressure, we can make a diagnosis about heart
disease by comparing the posterior probabilities P(HD = Yes | BP = High)
and P(HD = No | BP = High). To do this, we must first compute P(BP = High):

P(BP = High) = Σ_γ P(BP = High | HD = γ) P(HD = γ),

where γ ∈ {Yes, No}. Therefore, the posterior probability that the person has heart
disease is

P(HD = Yes | BP = High) = P(BP = High | HD = Yes) P(HD = Yes) / P(BP = High).
Suppose we are told that the person exercises regularly and eats a healthy diet.
How does the new information affect our diagnosis? With the new information,
the posterior probability that the person has heart disease can be recomputed
from the network and turns out to be lower than before.
The model therefore suggests that eating healthily and exercising regularly
may reduce a person's risk of getting heart disease.
Characteristics of BBN
3. Bayesian networks are well suited to dealing with incomplete data. In-
stances with missing attributes can be handled by summing or integrat-
ing the probabilities over all possible values of the attribute.
Lab No. 02 (NB): naive Bayes with scikit-learn

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, ConfusionMatrixDisplay

# Generate a synthetic three-class dataset
X, y = make_classification(
    n_features=6,
    n_classes=3,
    n_samples=800,
    n_informative=2,
    random_state=1,
    n_clusters_per_class=1
)
# Split the data and fit the model (a Gaussian naive Bayes model and a 67/33 split
# with random_state=125 are assumed here to reproduce the outputs shown below)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=125)
model = GaussianNB()
# Model training
model.fit(X_train, y_train)
# Predict Output
predicted = model.predict([X_test[6]])
print("Actual Value:", y_test[6])
print("Predicted Value:", predicted[0])
Actual Value: 0
Predicted Value: 0
# Evaluate on the whole test set
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_pred, y_test)
f1 = f1_score(y_pred, y_test, average="weighted")
print("Accuracy:", accuracy)
print("F1 Score:", f1)
Accuracy: 0.8484848484848485
F1 Score: 0.8491119695890328
# Confusion matrix for the three classes
labels = [0, 1, 2]
cm = confusion_matrix(y_test, y_pred, labels=labels)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot();