
MSML603 HW Assignment 5

The document outlines a process for training Perceptron classifiers to distinguish between different pizza brands (A, B, and C) using nutritional data. It includes loading data from a CSV file, training classifiers for pairwise comparisons (A vs B, A vs C, B vs C), and computing margins for each classifier. Finally, it demonstrates how to classify new samples using majority voting based on the predictions from the trained classifiers.


from google.colab import drive

drive.mount('/content/drive')

Mounted at /content/drive

import pandas as pd
df = pd.read_csv('/content/drive/MyDrive/Pizza (1).csv')
print(df.head())

  brand     id   mois   prot    fat   ash  sodium  carb   cal
0     A  14069  27.82  21.43  44.87  5.11    1.77  0.77  4.93
1     A  14053  28.49  21.26  43.89  5.34    1.79  1.02  4.84
2     A  14025  28.35  19.99  45.78  5.08    1.63  0.80  4.95
3     A  14016  30.55  20.15  43.13  4.79    1.61  1.38  4.74
4     A  14005  30.49  21.28  41.65  4.82    1.64  1.76  4.67
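
Before splitting the data into pairwise problems, it can help to confirm how many samples each brand contributes; a minimal check, assuming only the 'brand' column shown above:

print(df['brand'].value_counts())  # number of samples per brand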

import pandas as pd

data = pd.read_csv('/content/drive/MyDrive/Pizza (1).csv')

# Example: A vs B classification
ab_data = data[data['brand'].isin(['A', 'B'])]  # the class column is 'brand'
X_ab = ab_data[['mois', 'prot', 'fat', 'ash', 'sodium',
                'carb', 'cal']].values  # feature columns
y_ab = (ab_data['brand'] == 'A').astype(int).values  # A as 1, B as 0

# Display the first few rows to confirm the data is loaded correctly
print(ab_data.head())

  brand     id   mois   prot    fat   ash  sodium  carb   cal
0     A  14069  27.82  21.43  44.87  5.11    1.77  0.77  4.93
1     A  14053  28.49  21.26  43.89  5.34    1.79  1.02  4.84
2     A  14025  28.35  19.99  45.78  5.08    1.63  0.80  4.95
3     A  14016  30.55  20.15  43.13  4.79    1.61  1.38  4.74
4     A  14005  30.49  21.28  41.65  4.82    1.64  1.76  4.67

(a) Train Margin Perceptrons

1. A vs B:
from sklearn.linear_model import Perceptron

# Filter data for A vs B
ab_data = data[data['brand'].isin(['A', 'B'])]
X_ab = ab_data[['mois', 'prot', 'fat', 'ash', 'sodium',
                'carb', 'cal']].values
y_ab = (ab_data['brand'] == 'A').astype(int).values  # A as 1, B as 0

# Train Perceptron
clf_ab = Perceptron(max_iter=1000, tol=1e-3)
clf_ab.fit(X_ab, y_ab)

# Hyperplane parameters for A vs B
weights_ab = clf_ab.coef_[0]
bias_ab = clf_ab.intercept_[0]

print("Hyperplane for A vs B:")
print("Weights:", weights_ab)
print("Bias:", bias_ab)

Hyperplane for A vs B:
Weights: [-74.48 20.17 53.74 6.69 2.75 -6.12 5.41]
Bias: 0.0

2. A vs C:
# Filter data for A vs C
ac_data = data[data['brand'].isin(['A', 'C'])]
X_ac = ac_data[['mois', 'prot', 'fat', 'ash', 'sodium',
                'carb', 'cal']].values
y_ac = (ac_data['brand'] == 'A').astype(int).values  # A as 1, C as 0

# Train Perceptron
clf_ac = Perceptron(max_iter=1000, tol=1e-3)
clf_ac.fit(X_ac, y_ac)

# Hyperplane parameters for A vs C
weights_ac = clf_ac.coef_[0]
bias_ac = clf_ac.intercept_[0]

print("Hyperplane for A vs C:")
print("Weights:", weights_ac)
print("Bias:", bias_ac)

Hyperplane for A vs C:
Weights: [-21.36 -8.12 27.69 1.09 1.15 0.7 2.19]
Bias: 0.0

3. B vs C:
# Filter data for B vs C
bc_data = data[data['brand'].isin(['B', 'C'])]
X_bc = bc_data[['mois', 'prot', 'fat', 'ash', 'sodium',
                'carb', 'cal']].values
y_bc = (bc_data['brand'] == 'B').astype(int).values  # B as 1, C as 0

# Train Perceptron
clf_bc = Perceptron(max_iter=1000, tol=1e-3)
clf_bc.fit(X_bc, y_bc)

# Hyperplane parameters for B vs C
weights_bc = clf_bc.coef_[0]
bias_bc = clf_bc.intercept_[0]

print("Hyperplane for B vs C:")
print("Weights:", weights_bc)
print("Bias:", bias_bc)

Hyperplane for B vs C:
Weights: [ 6.19 -31.96 17.23 0.63 1.49 7.91 0.6 ]
Bias: 0.0
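
Before computing margins, it can be worth checking that each pairwise Perceptron actually separates its two brands on the training data; a minimal sketch, reusing the classifiers and feature arrays defined above:

# Training accuracy per pairwise classifier; 1.0 means the learned
# hyperplane separates that pair of brands on the training set
print("A vs B accuracy:", clf_ab.score(X_ab, y_ab))
print("A vs C accuracy:", clf_ac.score(X_ac, y_ac))
print("B vs C accuracy:", clf_bc.score(X_bc, y_bc))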

To compute the margin for each classifier, use the formula Margin = 2/‖w‖, where w is the classifier's weight vector.

import numpy as np

def compute_margin(weights):
    return 2 / np.linalg.norm(weights)

# Compute margins
margin_ab = compute_margin(weights_ab)
margin_ac = compute_margin(weights_ac)
margin_bc = compute_margin(weights_bc)

print("Margins:")
print("Margin A vs B:", margin_ab)
print("Margin A vs C:", margin_ac)
print("Margin B vs C:", margin_bc)

Margins:
Margin A vs B: 0.021127526507728367
Margin A vs C: 0.055540198413972064
Margin B vs C: 0.05303387598998595
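
Note that 2/‖w‖ matches the geometric margin only when the weights are scaled so that the closest points satisfy |w·x + b| = 1, as in an SVM; a plain Perceptron does not enforce that scaling. As a cross-check, the geometric margin can be measured directly as the smallest distance from a training point to the learned hyperplane; a minimal sketch, reusing the classifiers and feature arrays defined above:

def geometric_margin(clf, X):
    # Smallest distance from any training sample to the hyperplane
    # w.x + b = 0, i.e. min |w.x + b| / ||w||
    w = clf.coef_[0]
    b = clf.intercept_[0]
    return np.min(np.abs(X @ w + b)) / np.linalg.norm(w)

print("Geometric margin A vs B:", geometric_margin(clf_ab, X_ab))
print("Geometric margin A vs C:", geometric_margin(clf_ac, X_ac))
print("Geometric margin B vs C:", geometric_margin(clf_bc, X_bc))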

Fusion Rule for Classification

Using majority voting on the predictions from the three pairwise classifiers, we can classify new samples.

# Define new samples
s1 = (49.29, 24.82, 21.68, 2.76, 0.52, 1.47, 3.00)
s2 = (30.95, 19.81, 42.28, 5.11, 1.67, 1.85, 4.67)
s3 = (50.33, 13.28, 28.43, 3.58, 1.03, 4.38, 3.27)

# Function to classify new samples
def classify(sample):
    # Reshape the sample to fit the model input
    sample = np.array(sample).reshape(1, -1)

    # Get predictions from the three pairwise classifiers
    pred_ab = clf_ab.predict(sample)[0]  # 1 -> A, 0 -> B
    pred_ac = clf_ac.predict(sample)[0]  # 1 -> A, 0 -> C
    pred_bc = clf_bc.predict(sample)[0]  # 1 -> B, 0 -> C

    # Translate binary predictions to brand labels
    labels = ['A' if pred_ab == 1 else 'B',
              'A' if pred_ac == 1 else 'C',
              'B' if pred_bc == 1 else 'C']

    # Determine the final label by majority voting
    final_label = max(set(labels), key=labels.count)
    return final_label

# Classify new samples
for i, sample in enumerate([s1, s2, s3], start=1):
    print(f"Sample s{i} is classified as brand:", classify(sample))

Sample s1 is classified as brand: C
Sample s2 is classified as brand: A
Sample s3 is classified as brand: A
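
As a sanity check on the hand-built fusion rule, scikit-learn's OneVsOneClassifier implements the same pairwise-training-plus-voting scheme; a minimal sketch, assuming the same feature columns and restricting the data to brands A, B, and C as above:

from sklearn.multiclass import OneVsOneClassifier
from sklearn.linear_model import Perceptron

# Restrict to the three brands considered in this assignment
abc = data[data['brand'].isin(['A', 'B', 'C'])]
features = ['mois', 'prot', 'fat', 'ash', 'sodium', 'carb', 'cal']
X_abc = abc[features].values
y_abc = abc['brand'].values

# One-vs-one trains one Perceptron per pair of brands and votes at predict time
ovo = OneVsOneClassifier(Perceptron(max_iter=1000, tol=1e-3))
ovo.fit(X_abc, y_abc)

print(ovo.predict([s1, s2, s3]))  # predicted brand for each new sample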
