
Unit 3

Classification
 Basic Concepts
 Classification in data mining is a technique used to assign a label to each instance, record, or data object in a dataset based on its features or attributes.

 Classification techniques can be divided into two categories:

1. Binary classification
2. Multi-class classification

Binary classification assigns instances to one of two classes, such as fraudulent or non-fraudulent.
Multi-class classification assigns instances to one of more than two classes, such as happy, neutral, or sad.
 Steps to Build a Classification Model
1. Data Collection:
The first step in building a classification model is data collection. In this step, the data relevant to the
problem at hand is collected.
The data should be representative of the problem and should contain all the necessary attributes and labels needed for classification.

2. Data Preprocessing:
The collected data needs to be pre-processed to ensure its quality. This involves handling missing
values, dealing with outliers, and transforming the data into a format suitable for analysis.

3. Feature Selection:
Feature selection involves identifying the most relevant attributes in the dataset for classification. This
can be done using various techniques, such as correlation analysis, information gain, and principal
component analysis.
4. Principal Component Analysis:
Principal Component Analysis (PCA) is a technique used to reduce the dimensionality of the dataset.
PCA identifies the most important features in the dataset and removes the redundant ones.
5. Model Selection:
Model selection involves selecting the appropriate classification algorithm for the problem at hand.
There are several algorithms available, such as decision trees, support vector machines, and neural
networks.



6. Model Training:
Model training involves using the selected classification algorithm to learn the patterns in the data.
The data is divided into a training set and a validation set. The model is trained using the training set,
and its performance is evaluated on the validation set.
7. Model Evaluation:
Model evaluation involves assessing the performance of the trained model on a test set. This is done to
ensure that the model generalizes well.
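As a rough illustration of steps 5 to 7, the sketch below trains and evaluates a classifier; scikit-learn and the synthetic dataset are assumptions for illustration, not part of these notes.

# A minimal sketch of building and evaluating a classification model (assumed library: scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Steps 1-2: collect and preprocess data (here, generated synthetically)
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# Step 6: split into training and test sets, then train the selected model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)

# Step 7: evaluate the trained model on data it has not seen
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))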

 Real-Life Examples
 Email spam classification
 Image classification
 Medical diagnosis
 Credit risk analysis
 Sentiment analysis
 Customer segmentation
 Fraud detection

 Categorization of Classification in Data Mining


 Decision tree-based classification
 Rule-based classification
 Bayesian classification
 Neural network-based classification
 K-nearest neighbour

Algorithms
 Decision Tree Induction


 Decision Tree induction is a Supervised learning technique that can be used for both
classification and Regression problems, but mostly it is preferred for solving Classification
problems.
 It is easy to understand. It splits data into branches to make decisions and predictions.
 It is a tree-structured classifier, where internal nodes represent the features of a dataset,
branches represent the decision rules and each leaf node represents the outcome.
 In a Decision tree, there are two types of nodes: the Decision Node and the Leaf Node.
 Decision nodes are used to make any decision and have multiple branches, whereas Leaf nodes are
the output of those decisions and do not contain any further branches.



 Decision Tree Terminologies
1. Root Node: Root node is from where the decision tree starts. It represents the entire dataset,
which further gets divided into two or more homogeneous sets.
2. Leaf Node: Leaf nodes are the final output nodes, and the tree cannot be segregated further after a leaf node is reached.
3. Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes
according to the given conditions.
4. Branch/Sub Tree: A subtree formed by splitting the tree.
5. Pruning: Pruning is the process of removing the unwanted branches from the tree.
6. Parent/Child node: The root node of the tree is called the parent node, and other nodes are
called the child nodes.
 Algorithm for Decision tree induction

Step-1: Begin the tree with the root node, say S, which contains the complete dataset.

Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).

Step-3: Divide S into subsets that contain the possible values of the best attribute.

Step-4: Generate the decision tree node, which contains the best attribute.

Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3.
Continue this process until a stage is reached where the nodes cannot be classified further; the final nodes are called leaf nodes.



 Example

In the diagram below, the tree first asks: what is the weather? Is it sunny, cloudy, or rainy? Depending on the answer, it moves to the next feature, humidity or wind. For a rainy day it checks whether the wind is strong or weak; if the wind is weak, the person may go and play.

 Rules:

If weather = Cloudy, then play = Yes.

If weather = Sunny and humidity = High, then play = No.

If weather = Sunny and humidity = Normal, then play = Yes.

If weather = Rainy and wind = Strong, then play = No.

If weather = Rainy and wind = Weak, then play = Yes.
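As a rough sketch only (pandas and scikit-learn are assumed libraries, and the seven data rows below are made up to be consistent with the rules above), a decision tree can be fitted to this weather data and its learned IF-THEN paths printed:

# Fit a decision tree to a tiny weather dataset and print its rules.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "weather":  ["Sunny", "Sunny", "Cloudy", "Rainy", "Rainy", "Cloudy", "Sunny"],
    "humidity": ["High", "Normal", "High", "Normal", "Normal", "Normal", "High"],
    "wind":     ["Weak", "Weak", "Strong", "Strong", "Weak", "Weak", "Strong"],
    "play":     ["No", "Yes", "Yes", "No", "Yes", "Yes", "No"],
})

# One-hot encode the categorical features so the tree can split on them
X = pd.get_dummies(data[["weather", "humidity", "wind"]])
model = DecisionTreeClassifier(criterion="entropy").fit(X, data["play"])

# Each root-to-leaf path printed here corresponds to one IF-THEN rule
print(export_text(model, feature_names=list(X.columns)))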

 Attribute Selection Measures

While implementing a decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, there is a technique called the Attribute Selection Measure (ASM). Using this measure, we can easily select the best attribute for the nodes of the tree.



There are three popular techniques for ASM, which are:

1. Information Gain
2. Gain Ratio
3. Gini Index

1. Information Gain:

Information gain is used for deciding the best feature/attribute that provides the maximum information about a class. It follows the method of entropy while aiming at reducing the level of entropy, starting from the root node down to the leaf nodes.

2. Gain Ratio:

 The information gain measure is biased towards tests with many outcomes; it tends to select attributes having a large number of values. For instance, consider an attribute that serves as a unique identifier, such as a product ID. The gain ratio normalizes information gain to correct for this bias.

3. Gini Index:

 The Gini index is used in CART. The Gini index measures the impurity of D, a data partition or set of training tuples, as:

Gini(D) = 1 - Σi (pi)^2, where pi is the probability that a tuple in D belongs to class Ci.

 Entropy: Entropy is a metric to measure the impurity in a given attribute. It specifies the randomness in the data. Entropy can be calculated as:

Entropy(S) = - Σi pi log2(pi), where pi is the proportion of tuples in S belonging to class i.
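As a small worked sketch in plain Python (the helper function names are my own), the following computes entropy, the Gini index, and the information gain of the Outlook attribute, using the Outlook/Play data shown later in the Naïve Bayes section:

# Entropy, Gini index, and information gain for one categorical attribute.
from collections import Counter
from math import log2

def entropy(labels):
    counts, total = Counter(labels), len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gini(labels):
    counts, total = Counter(labels), len(labels)
    return 1 - sum((c / total) ** 2 for c in counts.values())

def information_gain(attribute, labels):
    total = len(labels)
    weighted = sum(
        (attribute.count(v) / total) * entropy([l for a, l in zip(attribute, labels) if a == v])
        for v in set(attribute)
    )
    return entropy(labels) - weighted

outlook = ["Rainy", "Sunny", "Overcast", "Overcast", "Sunny", "Rainy", "Sunny",
           "Overcast", "Rainy", "Sunny", "Sunny", "Rainy", "Overcast", "Overcast"]
play = ["Yes", "Yes", "Yes", "Yes", "No", "Yes", "Yes",
        "Yes", "No", "No", "Yes", "No", "Yes", "Yes"]

print("Entropy(Play):", round(entropy(play), 3))                      # about 0.863
print("Gini(Play):", round(gini(play), 3))                            # about 0.408
print("Gain(Outlook):", round(information_gain(outlook, play), 3))    # about 0.23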
 Tree Pruning
A technique to reduce the size of a Decision-Tree by removing certain branches or nodes without significantly affecting the model's accuracy.

Popular Approaches to Tree Pruning


1) Pre-Pruning
• Involves stopping the tree-building process early, before it becomes too complex or overfits the training data.
• It sets criteria to limit the tree's growth, such as a maximum depth, a minimum number of samples per leaf node, or a maximum number of leaf nodes.
2) Post-Pruning
• Involves growing the Decision-Tree to its full size and then selectively removing nodes.

• This evaluates the effect of removing each node on a validation-set and prunes nodes that do not
improve the model's performance.
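A minimal sketch of both approaches, assuming scikit-learn (pre-pruning via growth limits, post-pruning via cost-complexity pruning); the dataset and parameter values are illustrative only:

# Pre-pruning vs. post-pruning with scikit-learn decision trees.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Pre-pruning: stop growth early by limiting depth and leaf size
pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5).fit(X, y)

# Post-pruning: grow the full tree, then prune using cost-complexity pruning
path = DecisionTreeClassifier().cost_complexity_pruning_path(X, y)
post_pruned = DecisionTreeClassifier(ccp_alpha=path.ccp_alphas[-2]).fit(X, y)   # a large alpha -> heavy pruning

print("Pre-pruned leaves:", pre_pruned.get_n_leaves())
print("Post-pruned leaves:", post_pruned.get_n_leaves())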



Figure: An unpruned Decision-Tree and a pruned Decision-Tree.

 Advantages of the Decision Tree

 It is simple to understand as it follows the same process which a human follows while making any decision in real life.
 It can be very useful for solving decision-related problems.
 It helps to think about all the possible outcomes for a problem.
 There is less requirement of data cleaning compared to other algorithms.

 Disadvantages of the Decision Tree

 The decision tree contains lots of layers, which makes it complex.
 It may have an overfitting issue, which can be resolved using the Random Forest algorithm.
 For more class labels, the computational complexity of the decision tree may increase.

 Bayes Classification Methods

 Bayes' Theorem:

 Bayes' theorem is also known as Bayes' Rule or Bayes' Law, which is used to determine the probability of a hypothesis with prior knowledge.
 It depends on the conditional probability.
 The formula for Bayes' theorem is given as:

P(A|B) = P(B|A) * P(A) / P(B)

Where,

P(A|B) is Posterior probability: Probability of hypothesis A on the observed event B.

P(B|A) is Likelihood probability: Probability of the evidence given that the probability of a hypothesis is
true.

P(A) is Prior Probability: Probability of hypothesis before observing the evidence.

P(B) is Marginal Probability: Probability of Evidence.

Naïve Bayes Classifier Algorithm

 Naïve Bayes algorithm is a supervised learning algorithm, which is based on Bayes' theorem and used for solving classification problems.
 It is mainly used in text classification that includes a high-dimensional training dataset.
 It is a probabilistic classifier, which means it predicts on the basis of the probability of an
object.
 Some popular examples of the Naïve Bayes algorithm are spam filtering, sentiment analysis, and classifying articles.

Working of Naïve Bayes' Classifier:

Working of Naïve Bayes' Classifier can be understood with the help of the below example:



Suppose we have a dataset of weather conditions and a corresponding target variable "Play". Using this dataset, we need to decide whether we should play or not on a particular day according to the weather conditions. To solve this problem, we need to follow the below steps:

1. Convert the given dataset into frequency tables.


2. Generate Likelihood table by finding the probabilities of given features.
3. Now, use Bayes theorem to calculate the posterior probability.

Problem: If the weather is sunny, then the Player should play or not?

Solution: To solve this, first consider the below dataset:


Outlook Play
0 Rainy Yes
1 Sunny Yes
2 Overcast Yes
3 Overcast Yes
4 Sunny No
5 Rainy Yes
6 Sunny Yes
7 Overcast Yes
8 Rainy No
9 Sunny No
10 Sunny Yes
11 Rainy No
12 Overcast Yes
13 Overcast Yes
Frequency table for the weather conditions:

Weather     Yes    No
Overcast    5      0
Rainy       2      2
Sunny       3      2
Total       10     4

Likelihood table for the weather conditions:

Weather     No             Yes             P(Weather)
Overcast    0              5               5/14 = 0.35
Rainy       2              2               4/14 = 0.29
Sunny       2              3               5/14 = 0.35
All         4/14 = 0.29    10/14 = 0.71
Applying Bayes' theorem:



P(Yes|Sunny)= P(Sunny|Yes)*P(Yes)/P(Sunny)

P(Sunny|Yes)= 3/10= 0.3

P(Sunny)= 0.35

P(Yes)=0.71

So P(Yes|Sunny) = 0.3*0.71/0.35= 0.60

P(No|Sunny)= P(Sunny|No)*P(No)/P(Sunny)

P(Sunny|No)= 2/4= 0.5

P(No)= 0.29

P(Sunny)= 0.35

So P(No|Sunny)= 0.5*0.29/0.35 = 0.41

As we can see from the above calculation, P(Yes|Sunny) > P(No|Sunny).

Hence, on a sunny day, the player can play the game.
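The same calculation can be reproduced in plain Python from the dataset above (a sketch for illustration only; the helper name is my own):

# Reproduce the Naive Bayes hand calculation for Outlook = Sunny.
outlook = ["Rainy", "Sunny", "Overcast", "Overcast", "Sunny", "Rainy", "Sunny",
           "Overcast", "Rainy", "Sunny", "Sunny", "Rainy", "Overcast", "Overcast"]
play = ["Yes", "Yes", "Yes", "Yes", "No", "Yes", "Yes",
        "Yes", "No", "No", "Yes", "No", "Yes", "Yes"]

n = len(play)
p_yes, p_no = play.count("Yes") / n, play.count("No") / n     # 10/14 and 4/14
p_sunny = outlook.count("Sunny") / n                          # 5/14

def cond(evidence, given):
    # P(Outlook = evidence | Play = given)
    return sum(o == evidence and p == given for o, p in zip(outlook, play)) / play.count(given)

p_yes_sunny = cond("Sunny", "Yes") * p_yes / p_sunny          # (3/10)(10/14)/(5/14) = 0.60
p_no_sunny = cond("Sunny", "No") * p_no / p_sunny             # (2/4)(4/14)/(5/14) = 0.40 (0.41 in the notes is a rounding effect)
print(round(p_yes_sunny, 2), round(p_no_sunny, 2))            # 0.6 0.4 -> predict play = Yes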

Types of Naïve Bayes Model:

There are three types of Naive Bayes Model, which are given below:

o Gaussian: The Gaussian model assumes that features follow a normal distribution. This means if
predictors take continuous values instead of discrete, then the model assumes that these values
are sampled from the Gaussian distribution.
o Multinomial: The Multinomial Naïve Bayes classifier is used when the data is multinomially distributed. It is primarily used for document classification problems, i.e., determining which category a particular document belongs to, such as Sports, Politics, Education, etc.
The classifier uses the frequency of words as the predictors.
o Bernoulli: The Bernoulli classifier works similarly to the Multinomial classifier, but the predictor variables are independent Boolean variables, such as whether a particular word is present or not in a document. This model is also well known for document classification tasks.
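For illustration only, the three variants as provided by scikit-learn (an assumed library; the random data merely shows which kind of features each variant expects):

# Gaussian NB for continuous features, Multinomial NB for counts, Bernoulli NB for binary features.
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

rng = np.random.default_rng(0)
X_cont = rng.random((20, 3))              # continuous values  -> GaussianNB
X_counts = rng.integers(0, 5, (20, 3))    # word counts        -> MultinomialNB
X_bool = rng.integers(0, 2, (20, 3))      # word present (0/1) -> BernoulliNB
y = np.array([0] * 10 + [1] * 10)

for model, X in [(GaussianNB(), X_cont), (MultinomialNB(), X_counts), (BernoulliNB(), X_bool)]:
    print(type(model).__name__, model.fit(X, y).predict(X[:3]))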

Advantages of Naïve Bayes Classifier:

o Naïve Bayes is one of the fastest and easiest ML algorithms for predicting the class of a dataset.
o It can be used for binary as well as multi-class classification.
o It performs well in multi-class predictions as compared to the other algorithms.
o It is the most popular choice for text classification problems.

Disadvantages of Naïve Bayes Classifier:

o Naive Bayes assumes that all features are independent or unrelated, so it cannot learn the relationship between features.



Applications of Naïve Bayes Classifier:
 It is used for credit scoring.
 It is used in medical data classification.
 It can be used for real-time predictions because the Naïve Bayes classifier is an eager learner.
 It is used in text classification such as spam filtering and sentiment analysis.

 Rule-Based Classifier
 Rule-based classifiers are another type of classifier which makes the class decision by using a set of "IF-THEN" rules.
 These rules are easily interpretable, and thus these classifiers are generally used to generate descriptive models.
 The condition used with "IF" is called the antecedent.
 The predicted class of each rule is called the consequent.

Let us consider a rule R1,

R1: IF age = youth AND student = yes THEN buys_computer = yes

Note − We can also write rule R1 as follows −
R1: (age = youth) ^ (student = yes) => (buys_computer = yes)

If the condition holds true for a given tuple, then the antecedent is satisfied.
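A toy sketch of rule R1 as plain Python (the default rule and the example calls are hypothetical additions for illustration):

# A tiny rule-based classifier for the buys_computer example.
def buys_computer(age, student):
    if age == "youth" and student == "yes":   # R1: antecedent
        return "yes"                          # R1: consequent
    return "no"                               # default rule (an assumption for illustration)

print(buys_computer("youth", "yes"))   # yes -- R1 fires, antecedent satisfied
print(buys_computer("senior", "no"))   # no  -- falls through to the default rule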

 Rule Extraction

A rule-based classifier can be built by extracting IF-THEN rules from a decision tree.

 To extract a rule from a decision tree −

 One rule is created for each path from the root to the leaf node.
 To form a rule antecedent, each splitting criterion is logically ANDed.
 The leaf node holds the class prediction, forming the rule consequent.

 Example:

Explanation (refer to class notes)



 Rule Induction Using a Sequential Covering Algorithm
The sequential covering algorithm is a rule-learning algorithm used to construct rule-based classifiers.
It iteratively learns rules from the data, focusing on one class at a time.
 Algorithm
1. Initialization
Initialize an empty set of rules.
2. Selecting a Class
Choose a class that has not been covered by any existing rule.
3. Rule Learning
Start with an empty rule for the selected class.
 Lazy learning algorithm:
Eager Learners and Lazy Learners are two categories of classification models.
1. Eager Learners (Model-Based Learners)
 Process the training data immediately and build a generalized model during the
training phase.
 Example algorithms include Decision Trees, Support Vector Machines (SVM),
Naive Bayes, Linear Regression.
2. Lazy Learners (Instance-Based Learners)
 Delay processing of training data until a new instance needs to be classified or predicted.
 Example algorithms include k-Nearest Neighbors (k-NN), Case-Based Reasoning (CBR)



 Advantages of Lazy Learning
Simplicity: Often simple to implement and understand.
Flexibility: Can handle complex patterns and adapt to new data quickly.
No Training Time: Eliminates the training phase, making it useful for applications where new data
arrives frequently.

 K-Nearest Neighbour:
 K-Nearest Neighbour is one of the simplest algorithms based on the Supervised Learning technique.
 The K-NN algorithm can be used for Regression as well as for Classification, but mostly it is used for Classification problems.
 The K-NN algorithm is a distance-based algorithm.
 K-Nearest Neighbours is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs an action on it at the time of classification.

Why do we need a K-NN Algorithm?


Consider the below diagram:

The new point is classified as Category 2 because most of its closest neighbours are blue squares. KNN assigns
the category based on the majority of nearby points.
The image shows how KNN predicts the category of a new data point based on its closest neighbours.
 The red diamonds represent Category 1 and the blue squares represent Category 2.
 The new data point checks its closest neighbours (circled points).
 Since the majority of its closest neighbours are blue squares (Category 2) KNN predicts the new data point
belongs to Category 2.
KNN works by using proximity and majority voting to make predictions.

How does K-NN work?


The K-NN working can be explained on the basis of the below algorithm:
Step-1: Select the number K of the neighbors
Step-2: Calculate the Euclidean distance of K number of neighbors



Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.
Step-4: Among these k neighbors, count the number of the data points in each category.
Step-5: Assign the new data point to the category for which the number of neighbors is maximum.
Step-6: Our model is ready.
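A minimal sketch of these steps using scikit-learn's KNeighborsClassifier (the library, the synthetic data, and K = 5 are assumptions for illustration):

# K-NN classification with Euclidean distance and majority voting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, n_features=2, n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")  # Step-1 and Step-2
knn.fit(X_train, y_train)          # lazy learner: this only stores the training set
print(knn.predict(X_test[:5]))     # Steps 3-5: majority vote among the 5 nearest neighbours
print("Accuracy:", knn.score(X_test, y_test))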

How to select the value of K in the K-NN Algorithm?


Below are some points to remember while selecting the value of K in the K-NN algorithm:
There is no particular way to determine the best value for "K", so we need to try some values to find the best out of them. The most preferred value for K is 5.
A very low value for K, such as K=1 or K=2, can be noisy and lead to the effects of outliers in the model.
Large values for K are good, but very large values may smooth out meaningful patterns (see the sketch below).
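One rough way to compare candidate values of K is cross-validation, as in the sketch below (the dataset and candidate list are illustrative assumptions):

# Compare several K values with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
for k in [1, 3, 5, 7, 9]:
    score = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
    print(f"K={k}: mean accuracy = {score:.3f}")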
Example:
Refer class notes

Applications:
 Handwriting detection
 Image recognition
 Video recognition

Advantages of KNN Algorithm:
 It is simple to implement.
 It is robust to noisy training data.
 It can be more effective if the training data is large.

Disadvantages of KNN Algorithm:
 It always needs to determine the value of K, which may be complex at times.



 The computation cost is high because of calculating the distance between the data points for all the
training samples.

In data mining and machine learning, prediction, precision, and recall are key concepts used to evaluate the
performance of classification models.

Confusion Matrix

True Positives (TP): when the actual value is Positive and predicted is also Positive.

True negatives (TN): when the actual value is Negative and prediction is also Negative.

False positives (FP): When the actual is negative but prediction is Positive.

Also known as the Type 1 error

False negatives (FN): When the actual is Positive but the prediction is Negative.

Also known as the Type 2 error

Prediction

 Definition: Prediction refers to the process of using a model to estimate or classify the value of an
unknown outcome based on input features. In the context of classification, it's about assigning a
class label to an instance.

We have a total of 20 cats and dogs and our model predicts whether it is a cat or not.



Classification Measure

Basically, it is an extended version of the confusion matrix.

There are measures other than the confusion matrix which can help achieve better understanding and
analysis of our model and its performance.

a. Accuracy

b. Precision

c. Recall (TPR, Sensitivity)


Accuracy:

Accuracy simply measures how often the classifier makes the correct prediction. It is the ratio between the number of correct predictions and the total number of predictions:

Accuracy = (TP + TN) / (TP + TN + FP + FN)



Precision:

It is a measure of the correctness achieved in positive predictions. In simple words, it tells us how many of the instances predicted as positive are actually positive:

Precision = TP / (TP + FP)

Recall:

It is a measure of how many of the actual positive observations are predicted correctly:

Recall = TP / (TP + FN)
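A short sketch computing these measures from a set of predictions (the example labels are made up; scikit-learn's metrics module is an assumed choice):

# Confusion matrix, accuracy, precision, and recall for a binary classifier.
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = positive class (e.g. "cat")
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, TN, FP, FN:", tp, tn, fp, fn)
print("Accuracy :", accuracy_score(y_true, y_pred))    # (TP + TN) / (TP + TN + FP + FN)
print("Precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))      # TP / (TP + FN)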

