

Naïve Bayesian Classifier and K-Means Clustering

Student’s Name

Institutional Affiliation

Professor's Name

Course

Date


Naïve Bayesian Classifier and K-Means Clustering

Part 1: Naïve Bayesian Classifier

1. Concept Explanation

The Naïve Bayesian Classifier is a probabilistic machine learning algorithm used for classification tasks. It is based on Bayes' theorem together with the "naïve" assumption that features are conditionally independent of one another once the class label is known. Despite this simplifying assumption, the classifier performs effectively in applications such as spam detection, sentiment analysis, and medical diagnosis.

Assumptions:

1. Conditional Independence – Features are assumed to be independent of one another given the class label.

2. Equal Importance of Features – Each feature contributes equally to the classification.

3. Prior Probabilities Are Used – The model relies on prior knowledge (the base rates of the classes).

Mathematically, Bayes' theorem is given by:

$$P(C \mid X) = \frac{P(X \mid C)\, P(C)}{P(X)}$$

Where:

- P(C|X) is the posterior probability of class C given feature set X.
- P(X|C) is the likelihood of feature set X given class C.
- P(C) is the prior probability of class C.
- P(X) is the marginal probability of feature set X.


For multiple features X = (X1, X2, ..., Xn), the Naïve Bayes assumption simplifies the posterior to:

$$P(C \mid X) = \frac{P(C) \prod_{i=1}^{n} P(X_i \mid C)}{P(X)}$$

This allows for efficient computation in classification problems.
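As a minimal illustration of how this factorized score is used to pick a class (a hypothetical sketch, not a library function):

def classify(priors, likelihoods):
    """Pick the class with the largest unnormalized posterior
    P(C) * prod_i P(x_i | C); the shared denominator P(X) is
    ignored because it does not change which class scores highest."""
    scores = {}
    for c in priors:
        score = priors[c]
        for p in likelihoods[c]:
            score *= p
        scores[c] = score
    return max(scores, key=scores.get)

# e.g. classify({"spam": 0.6, "ham": 0.4},
#               {"spam": [0.8, 0.6], "ham": [0.25, 0.5]}) -> "spam"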

2. Example with Explanation

Application: Spam Email Detection

Spam detection involves categorizing emails into spam and valid messages (ham). The

primary purpose is to develop a predictive model for identifying spam emails based on word

frequency patterns and additional characteristics.

Classification Objective

The goal is to determine the probability of an email being spam given a set of observed

words. This is achieved using the Naïve Bayes classifier, which assumes that the presence of

each word in the email is independent of the others, given the class label.

3. Sample Problem & Solution

Dataset

Consider a small dataset of emails with the presence (1) or absence (0) of specific keywords:

Email ID   "Free"   "Win"   "Money"   "Offer"   Spam (1=Yes, 0=No)
1          1        1       0         1         1
2          0        1       1         0         0
3          1        1       1         1         1
4          0        0       1         0         0
5          1        0       1         1         1


We classify a new email with: ("Free"=1, "Win"=1, "Money"=1, "Offer"=0).

Step-by-Step Calculation using Bayes' Theorem

Calculate Priors:

- P(Spam) = 3/5 = 0.6
- P(Not Spam) = 2/5 = 0.4

Calculate Likelihoods (counting directly from the table):

- P(Free=1 | Spam) = 3/3 = 1.00
- P(Win=1 | Spam) = 2/3 ≈ 0.67
- P(Money=1 | Spam) = 2/3 ≈ 0.67
- P(Offer=0 | Spam) = 0/3 = 0.00
- P(Free=1 | Not Spam) = 0/2 = 0.00
- P(Win=1 | Not Spam) = 1/2 = 0.50
- P(Money=1 | Not Spam) = 2/2 = 1.00
- P(Offer=0 | Not Spam) = 2/2 = 1.00

Compute Posteriors (unnormalized):

- P(Spam | X) ∝ 0.6 × (1.00 × 0.67 × 0.67 × 0.00) = 0
- P(Not Spam | X) ∝ 0.4 × (0.00 × 0.50 × 1.00 × 1.00) = 0

Both raw posteriors are zero because each class contains a feature value never observed for it in training (Offer=0 never occurs in a spam email, and Free=1 never occurs in a ham email). The standard remedy is Laplace (add-1) smoothing, which replaces each likelihood with (count + 1)/(N_C + 2):

- P(Spam | X) ∝ 0.6 × (0.80 × 0.60 × 0.60 × 0.20) ≈ 0.0346
- P(Not Spam | X) ∝ 0.4 × (0.25 × 0.50 × 0.75 × 0.75) ≈ 0.0281

Since P(Spam | X) > P(Not Spam | X), the classification is Spam.
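As a quick sanity check, here is a minimal NumPy sketch of the smoothed computation (the helper name smoothed_posterior is illustrative, not a library function):

import numpy as np

X = np.array([[1,1,0,1], [0,1,1,0], [1,1,1,1], [0,0,1,0], [1,0,1,1]])
y = np.array([1, 0, 1, 0, 1])          # 1 = Spam, 0 = Not Spam
x_new = np.array([1, 1, 1, 0])         # "Free"=1, "Win"=1, "Money"=1, "Offer"=0

def smoothed_posterior(c):
    """Unnormalized P(c) * prod_i P(x_i | c) with add-1 (Laplace) smoothing."""
    Xc = X[y == c]
    prior = len(Xc) / len(X)
    p1 = (Xc.sum(axis=0) + 1) / (len(Xc) + 2)   # P(feature=1 | c), smoothed
    likelihood = np.where(x_new == 1, p1, 1 - p1).prod()
    return prior * likelihood

print(smoothed_posterior(1), smoothed_posterior(0))  # ~0.0346 vs ~0.0281 -> Spam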

Python Code for Naïve Bayes Implementation

from sklearn.naive_bayes import BernoulliNB
import numpy as np

# Training dataset: columns = ["Free", "Win", "Money", "Offer"]
X_train = np.array([[1,1,0,1], [0,1,1,0], [1,1,1,1], [0,0,1,0], [1,0,1,1]])
y_train = np.array([1, 0, 1, 0, 1])  # 1 = Spam, 0 = Not Spam

# New email sample: ("Free"=1, "Win"=1, "Money"=1, "Offer"=0)
X_test = np.array([[1,1,1,0]])

# Model training; BernoulliNB applies add-1 (Laplace) smoothing by default,
# matching the smoothed hand calculation above
nb_model = BernoulliNB()
nb_model.fit(X_train, y_train)

# Prediction
prediction = nb_model.predict(X_test)
print("Prediction:", "Spam" if prediction[0] == 1 else "Not Spam")
Output:

Prediction: Spam
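To inspect the class probabilities rather than just the label, predict_proba can be used. With the default smoothing it should roughly reproduce the ratio from the hand calculation above (the values in the comment are expected, not captured output):

print(nb_model.predict_proba(X_test))  # approx. [[0.45 0.55]] -> Spam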


Part 2: K-Means Clustering

1. Concept Explanation

Clustering is an unsupervised machine learning technique that groups data points by their shared features. Because clustering algorithms work without predefined categories, they detect natural groupings in the data. Clustering serves multiple purposes, including market segmentation, anomaly detection, image processing, and biological data analysis.

Definition of K-Means Clustering

K-Means Clustering partitions a dataset into K clusters by repeatedly assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points. It is commonly used in marketing to segment customers based on spending behavior and income levels, allowing businesses to target specific customer groups with personalized promotions.

K-Means follows three main steps:

1. Centroid Selection: randomly select K initial centroids from the dataset.

2. Cluster Assignment: assign each data point to the nearest centroid based on the Euclidean distance.

3. Centroid Updating: compute each new centroid as the mean of all points in its cluster, then repeat steps 2-3 until the centroids no longer change significantly (convergence).

The centroid of a cluster is mathematically represented as:


$$C_k = \frac{1}{n} \sum_{i=1}^{n} x_i$$

where:

- C_k is the centroid of cluster k,
- x_i represents the data points in cluster k,
- n is the number of points in the cluster.

The Euclidean distance used for assigning points to clusters is:

$$d(x, C_k) = \sqrt{\sum_{j=1}^{m} (x_j - C_{kj})^2}$$

where:

- x is a data point,
- C_k is the cluster centroid,
- m is the number of features.
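A minimal NumPy sketch of these two formulas (the function names are illustrative):

import numpy as np

def assign_clusters(X, centroids):
    """Assign each point to the nearest centroid by Euclidean distance."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def update_centroids(X, labels, k):
    """Recompute each centroid as the mean of its assigned points."""
    return np.array([X[labels == j].mean(axis=0) for j in range(k)])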

2. Customer Segmentation

The marketing industry uses K-Means Clustering as a popular technique to segment customers by their purchasing behavior and income. Businesses use this approach to deliver advertisements tailored to each resulting customer group.

3. Sample Problem & Solution

Customer ID   Annual Income ($1000s)   Spending Score (1-100)
1             15                       39
2             16                       81
3             17                       6
4             18                       77
5             20                       40
6             24                       94
7             25                       3
8             30                       73
9             35                       92
10            40                       8

Step 1: Initial Centroid Selection

Randomly selecting K = 3 centroids:

- C1 (Low Income, Moderate Spending): (15, 39)
- C2 (Middle Income, High Spending): (24, 94)
- C3 (High Income, Low Spending): (40, 8)

Step 2: Cluster Assignment (Iteration 1)

Compute the Euclidean distance from each customer to each of the three centroids, and assign the customer to the nearest one.

Example Calculation for Customer 1 (15, 39)

Distance to C1 (15, 39): d1 = √((15 − 15)² + (39 − 39)²) = 0

Distance to C2 (24, 94): d2 = √((15 − 24)² + (39 − 94)²) ≈ 55.7

Distance to C3 (40, 8): d3 = √((15 − 40)² + (39 − 8)²) ≈ 39.8

The smallest distance is to C1, so Customer 1 stays in Cluster 1. Repeating this calculation for every customer gives the iteration-1 clusters: Cluster 1 = {1, 5}, Cluster 2 = {2, 4, 6, 8, 9}, Cluster 3 = {3, 7, 10}.

Step 3: Centroid Update

Each centroid is recomputed as the mean of its assigned points. For Cluster 1 (customers 1 and 5):

C1 = ((15 + 20)/2, (39 + 40)/2) = (17.5, 39.5)

Similarly, C2 = ((16 + 18 + 24 + 30 + 35)/5, (81 + 77 + 94 + 73 + 92)/5) = (24.6, 83.4) and C3 = ((17 + 25 + 40)/3, (6 + 3 + 8)/3) ≈ (27.3, 5.7).
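A quick NumPy check of the iteration-1 numbers above:

import numpy as np

x1 = np.array([15, 39])
centroids = np.array([[15, 39], [24, 94], [40, 8]])
print(np.linalg.norm(centroids - x1, axis=1))   # [ 0.    55.73  39.82]

cluster1 = np.array([[15, 39], [20, 40]])       # customers 1 and 5
print(cluster1.mean(axis=0))                    # [17.5  39.5]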

Final Cluster Assignments

A second round of assignment with the updated centroids leaves every customer in the same cluster, so the algorithm has converged:

Customer ID   Annual Income ($1000s)   Spending Score (1-100)   Final Cluster
1             15                       39                       1
2             16                       81                       2
3             17                       6                        3
4             18                       77                       2
5             20                       40                       1
6             24                       94                       2
7             25                       3                        3
8             30                       73                       2
9             35                       92                       2
10            40                       8                        3


Python Code for K-Means Implementation

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Dataset: Customer Income & Spending Score
X = np.array([
    [15, 39], [16, 81], [17, 6], [18, 77], [20, 40],
    [24, 94], [25, 3], [30, 73], [35, 92], [40, 8]
])

# Apply K-Means Clustering
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
kmeans.fit(X)

# Cluster assignments
labels = kmeans.labels_
centroids = kmeans.cluster_centers_

# Plot the clusters
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', marker='o', edgecolor='k')
plt.scatter(centroids[:, 0], centroids[:, 1], s=300, c='red', marker='X', label='Centroids')
plt.xlabel('Annual Income ($1000s)')
plt.ylabel('Spending Score (1-100)')
plt.title('K-Means Customer Segmentation')
plt.legend()
plt.show()

# Print cluster assignments
print("Final Cluster Assignments:", labels)
Output: a scatter plot of the ten customers colored by cluster, with the three centroids marked as red X's, followed by the printed array of cluster labels.

