
MACHINE LEARNING TECHNIQUES LAB

B.Tech. Semester VI

Subject Code: KAI- 651

Session: 2023-24, Even Semester


Submitted To: Mr. Shantanu Pant
Submitted By: Deepak Kumar (Roll No. 2102301640010, AIML)

DRONACHARYA GROUP OF INSTITUTIONS


DEPARTMENT OF CSE (AIML)
#27 KNOWLEDGE PARK 3, GREATER NOIDA

AFFILIATED TO Dr. ABDUL KALAM TECHNICAL UNIVERSITY, LUCKNOW
Index

S No. | List of Experiments | Date | Signature

1.  Implement and demonstrate the FIND-S algorithm for finding the most specific hypothesis based on a given set of training data samples. Read the training data from a .CSV file. (26/02/2024)
2.  For a given set of training data examples stored in a .CSV file, implement and demonstrate the Candidate Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples. (04/03/2024)
3.  Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample. (11/03/2024)
4.  Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets. (18/03/2024)
5.  Write a program to implement the naïve Bayesian classifier and compute the accuracy of the classifier. (01/04/2024)
6.  Assuming a set of documents that need to be classified, use the naïve Bayesian Classifier model to perform this task. (08/04/2024)
7.  Write a program to construct a Bayesian network. (15/04/2024)
8.  Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering using k-Means algorithm. (22/04/2024)
9.  Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set. Print both correct and wrong predictions. (29/04/2024)
10. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points. Select appropriate data set for your experiment and draw graphs. (29/04/2024)
Experiment 1

Objective: Implement and demonstrate the FIND-S algorithm for finding the most specific
hypothesis based on a given set of training data samples. Read the training data from a .CSV file.

Solution

import csv

def find_s_algorithm(data):
    # Initialize the hypothesis from the first positive training instance
    hypothesis = None
    for instance in data:
        if instance[-1] == 'Yes':  # Consider only positive examples
            if hypothesis is None:
                hypothesis = instance[:-1]  # Copy the attribute values of the first positive example
            else:
                for i in range(len(hypothesis)):
                    # Generalize any attribute that disagrees with the current positive example
                    if hypothesis[i] != instance[i]:
                        hypothesis[i] = '?'  # '?' matches any value
    return hypothesis

def main():
    # Load training data from CSV file
    with open('training_data.csv', 'r') as file:
        reader = csv.reader(file)
        data = list(reader)

    # Apply FIND-S algorithm to find the most specific hypothesis
    hypothesis = find_s_algorithm(data)

    # Print the most specific hypothesis
    print("The most specific hypothesis is:", hypothesis)

if __name__ == "__main__":
    main()

Output

The most specific hypothesis is: ['Sunny', 'Warm', '?', 'Strong', '?', '?']
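
The listing assumes a file named training_data.csv in the working directory, with no header row and the class label ('Yes'/'No') in the last column. A minimal example (the classic EnjoySport data, shown here only as an assumed input) that reproduces the output above:

Sunny,Warm,Normal,Strong,Warm,Same,Yes
Sunny,Warm,High,Strong,Warm,Same,Yes
Rainy,Cold,High,Strong,Warm,Change,No
Sunny,Warm,High,Strong,Cool,Change,Yes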
Experiment 2

Objective: For a given set of training data examples stored in a .CSV file, implement and
demonstrate the Candidate Elimination algorithm to output a description of the set of all
hypotheses consistent with the training examples.

Solution

import csv

def initialize_hypotheses(n):
    # Initialize the set of hypotheses
    hypotheses = []
    for i in range(n):
        hypotheses.append(['?'] * n)
    return hypotheses

def candidate_elimination_algorithm(data):
    # Initialize the version space
    n = len(data[0]) - 1
    hypotheses = initialize_hypotheses(n)
    specific_h = ['0'] * n  # Initialize the most specific hypothesis
    general_h = ['?'] * n   # Initialize the most general hypothesis

    # Iterate through the training examples
    for example in data:
        if example[-1] == 'Yes':  # Positive example
            for i in range(n):
                if specific_h[i] == '0':
                    specific_h[i] = example[i]
                elif specific_h[i] != example[i]:
                    specific_h[i] = '?'
            # Remove inconsistent hypotheses from the general boundary
            for j in range(n):
                if example[j] != specific_h[j] and hypotheses[j][j] != '?':
                    hypotheses[j][j] = '?'
        else:  # Negative example
            for i in range(n):
                if example[i] != specific_h[i]:
                    hypotheses[i][i] = specific_h[i]
                else:
                    hypotheses[i][i] = '?'

    # Refine the general hypothesis
    for i in range(n):
        if general_h[i] == '?':
            general_h[i] = specific_h[i]

    return hypotheses, specific_h, general_h

def main():
    # Load training data from CSV file
    with open('training_data.csv', 'r') as file:
        reader = csv.reader(file)
        data = list(reader)

    # Apply the Candidate Elimination algorithm
    hypotheses, specific_h, general_h = candidate_elimination_algorithm(data)

    # Print the hypotheses
    print("The set of all hypotheses consistent with the training examples:")
    print("Specific Hypothesis:", specific_h)
    print("General Hypothesis:", general_h)
    print("Version Space Hypotheses:")
    for h in hypotheses:
        print(h)

if __name__ == "__main__":
    main()

Output

The set of all hypotheses consistent with the training examples:


Specific Hypothesis: ['Sunny', 'Warm', '?', 'Strong', '?', '?']
General Hypothesis: ['Sunny', 'Warm', '?', 'Strong', '?', '?']
Version Space Hypotheses:
['?', 'Warm', '?', '?', '?', '?']
['Sunny', '?', '?', '?', '?', '?']
['?', '?', '?', 'Strong', '?', '?']
['?', '?', '?', '?', '?', '?']
['?', '?', '?', '?', '?', '?']
['?', '?', '?', '?', '?', '?']
Experiment 3

Objective: Write a program to demonstrate the working of the decision tree based ID3 algorithm.
Use an appropriate data set for building the decision tree and apply this knowledge to classify a
new sample
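
For reference, ID3 grows the tree greedily: at each node it selects the attribute with the highest information gain, using the standard definitions

Entropy(S) = -\sum_{i} p_i \log_2 p_i
Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|}\, Entropy(S_v)

where p_i is the proportion of examples in S belonging to class i and S_v is the subset of S for which attribute A takes value v. The entropy() and information_gain() functions below compute exactly these quantities.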

Solution

import numpy as np
import pandas as pd

class Node:
    def __init__(self, attribute):
        self.attribute = attribute
        self.children = {}

    def add_child(self, value, node):
        self.children[value] = node

    def __repr__(self):
        # Represent the node (and, recursively, its subtrees) as a dictionary
        return str(self.__dict__)

def entropy(class_labels):
    _, counts = np.unique(class_labels, return_counts=True)
    probabilities = counts / len(class_labels)
    entropy_value = -np.sum(probabilities * np.log2(probabilities))
    return entropy_value

def information_gain(data, target, attribute):
    # Entropy before splitting
    total_entropy = entropy(data[target])

    # Weighted entropy after splitting on the attribute
    weighted_entropy = 0
    values, counts = np.unique(data[attribute], return_counts=True)
    for value, count in zip(values, counts):
        subset = data[data[attribute] == value]
        weighted_entropy += (count / len(data)) * entropy(subset[target])

    # Information gain = reduction in entropy
    return total_entropy - weighted_entropy

def id3(data, target, attributes):
    # All examples share one class: return that class as a leaf
    if len(np.unique(data[target])) == 1:
        return np.unique(data[target])[0]
    # No attributes left: return the majority class
    elif len(attributes) == 0:
        values, counts = np.unique(data[target], return_counts=True)
        return values[np.argmax(counts)]
    else:
        # Choose the attribute with the highest information gain
        best_attribute = max(attributes, key=lambda x: information_gain(data, target, x))
        tree = Node(best_attribute)
        remaining = [attribute for attribute in attributes if attribute != best_attribute]
        for value in data[best_attribute].unique():
            subset = data[data[best_attribute] == value]
            subtree = id3(subset, target, remaining)
            tree.add_child(value, subtree)
        return tree

def predict(tree, sample):
    # A leaf node is a plain class label
    if not isinstance(tree, Node):
        return tree

    attribute_value = sample[tree.attribute]
    if attribute_value not in tree.children:
        return "Can't classify"
    return predict(tree.children[attribute_value], sample)

def main():
    # Define the PlayTennis dataset
    data = {
        'Outlook': ['Sunny', 'Sunny', 'Overcast', 'Rain', 'Rain', 'Rain', 'Overcast', 'Sunny', 'Sunny',
                    'Rain', 'Sunny', 'Overcast', 'Overcast', 'Rain'],
        'Temperature': ['Hot', 'Hot', 'Hot', 'Mild', 'Cool', 'Cool', 'Cool', 'Mild', 'Cool', 'Mild',
                        'Mild', 'Mild', 'Hot', 'Mild'],
        'Humidity': ['High', 'High', 'High', 'High', 'Normal', 'Normal', 'Normal', 'High', 'Normal',
                     'Normal', 'Normal', 'High', 'Normal', 'High'],
        'Wind': ['Weak', 'Strong', 'Weak', 'Weak', 'Weak', 'Strong', 'Strong', 'Weak', 'Weak', 'Weak',
                 'Strong', 'Strong', 'Weak', 'Strong'],
        'PlayTennis': ['No', 'No', 'Yes', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes', 'Yes', 'Yes', 'Yes',
                       'Yes', 'No']
    }

    # Convert data to a pandas DataFrame
    df = pd.DataFrame(data)

    # Get attributes and target
    attributes = list(df.columns[:-1])
    target = df.columns[-1]

    # Build the decision tree using the ID3 algorithm
    decision_tree = id3(df, target, attributes)

    # Print the decision tree
    print("Decision Tree:")
    print(decision_tree.__dict__)

    # Classify a new sample
    sample = {'Outlook': 'Sunny', 'Temperature': 'Cool', 'Humidity': 'High', 'Wind': 'Weak'}
    prediction = predict(decision_tree, sample)
    print("\nPredicted class for sample {}: {}".format(sample, prediction))

if __name__ == "__main__":
    main()

Output

Decision Tree:
{'attribute': 'Outlook', 'children': {'Sunny': {'attribute': 'Humidity', 'children': {'High': 'No', 'Normal':
'Yes'}}, 'Overcast': 'Yes', 'Rain': {'attribute': 'Wind', 'children': {'Weak': 'Yes', 'Strong': 'No'}}}}

Predicted class for sample {'Outlook': 'Sunny', 'Temperature': 'Cool', 'Humidity': 'High', 'Wind':
'Weak'}: No
Experiment 4

Objective: Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
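
The weight updates in the code below follow the standard delta rule for a sigmoid network, written here in the same convention the code uses (error taken as y - \hat{y}):

\delta_{out} = (y - \hat{y}) \cdot \hat{y}(1 - \hat{y}), \qquad \delta_{hid} = (\delta_{out} W_{ho}^{T}) \cdot h(1 - h), \qquad W \leftarrow W + \eta\, a^{T}\delta

where h is the hidden activation, a is the input feeding the layer being updated, and \eta is the learning rate.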

Solution

import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size

        # Initialize weights and biases
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.bias_input_hidden = np.zeros((1, self.hidden_size))
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)
        self.bias_hidden_output = np.zeros((1, self.output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is assumed to already be a sigmoid output
        return x * (1 - x)

    def forward_pass(self, X):
        # Input layer to hidden layer
        self.hidden_output = self.sigmoid(np.dot(X, self.weights_input_hidden) + self.bias_input_hidden)
        # Hidden layer to output layer
        self.output = self.sigmoid(np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_hidden_output)
        return self.output

    def backward_pass(self, X, y, learning_rate):
        # Calculate error
        error = y - self.output
        # Backpropagate error
        delta_output = error * self.sigmoid_derivative(self.output)
        error_hidden = delta_output.dot(self.weights_hidden_output.T)
        delta_hidden = error_hidden * self.sigmoid_derivative(self.hidden_output)
        # Update weights and biases
        self.weights_hidden_output += self.hidden_output.T.dot(delta_output) * learning_rate
        self.bias_hidden_output += np.sum(delta_output, axis=0, keepdims=True) * learning_rate
        self.weights_input_hidden += X.T.dot(delta_hidden) * learning_rate
        self.bias_input_hidden += np.sum(delta_hidden, axis=0, keepdims=True) * learning_rate

    def train(self, X, y, epochs, learning_rate):
        for epoch in range(epochs):
            output = self.forward_pass(X)
            self.backward_pass(X, y, learning_rate)
            if epoch % 1000 == 0:
                print(f'Epoch {epoch}, Loss: {np.mean(np.square(y - output))}')

    def predict(self, X):
        return np.round(self.forward_pass(X))

# Example usage
if __name__ == "__main__":
    # Example dataset for binary classification (XOR)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([[0], [1], [1], [0]])

    # Create and train the neural network
    neural_net = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)
    neural_net.train(X, y, epochs=10000, learning_rate=0.1)

    # Predict
    predictions = neural_net.predict(X)
    print("Predictions:", predictions)

Output

Epoch 0, Loss: 0.26425993480491456


Epoch 1000, Loss: 0.054800730667645906
Epoch 2000, Loss: 0.02692402600064382
Epoch 3000, Loss: 0.01774875822052978
Epoch 4000, Loss: 0.0133426314148007
Epoch 5000, Loss: 0.010662377586673332
Epoch 6000, Loss: 0.008896243670285692
Epoch 7000, Loss: 0.007617961343066073
Epoch 8000, Loss: 0.006646409848399707
Epoch 9000, Loss: 0.005877345420022547
Predictions: [[0.]
[1.]
[1.]
[0.]]
Experiment 5

Objective: Write a program to implement the naïve Bayesian classifier and compute the accuracy
of the classifier

Solution

import pandas as pd
import numpy as np

# Calculate prior probabilities P(class)
def calculate_prior_probabilities(y):
    unique_classes, class_counts = np.unique(y, return_counts=True)
    prior_probabilities = {}
    total_samples = len(y)
    for cls, count in zip(unique_classes, class_counts):
        prior_probabilities[cls] = count / total_samples
    return prior_probabilities

# Calculate likelihoods P(feature value | class)
def calculate_likelihoods(X, y):
    feature_likelihoods = {}
    for feature in X.columns:
        feature_likelihoods[feature] = {}
        for cls in np.unique(y):
            feature_likelihoods[feature][cls] = {}
            for value in np.unique(X[feature]):
                feature_likelihoods[feature][cls][value] = (
                    len(X[(X[feature] == value) & (y == cls)]) / len(X[y == cls])
                )
    return feature_likelihoods

# Make predictions for each test sample
def predict(X, prior_probabilities, feature_likelihoods):
    predictions = []
    for _, sample in X.iterrows():
        probabilities = {}
        for cls in prior_probabilities:
            probabilities[cls] = prior_probabilities[cls]
            for feature in sample.index:
                # Unseen feature values get probability 0 (no smoothing is applied here)
                probabilities[cls] *= feature_likelihoods[feature][cls].get(sample[feature], 0)
        predicted_class = max(probabilities, key=probabilities.get)
        predictions.append(predicted_class)
    return predictions

# Calculate accuracy
def accuracy_score(y_true, y_pred):
    correct_predictions = sum(y_true == y_pred)
    total_samples = len(y_true)
    return correct_predictions / total_samples

# Create sample training dataset
train_data = pd.DataFrame({
    'Feature1': ['A', 'B', 'A', 'B', 'A', 'B'],
    'Feature2': ['X', 'Y', 'X', 'Y', 'X', 'Y'],
    'Target': ['Yes', 'No', 'Yes', 'No', 'Yes', 'Yes']
})

# Create sample test dataset
test_data = pd.DataFrame({
    'Feature1': ['A', 'B', 'A'],
    'Feature2': ['X', 'Y', 'X'],
    'Target': ['Yes', 'No', 'Yes']
})

# Split features and target variable
X_train = train_data.drop(columns=['Target'])
y_train = train_data['Target']
X_test = test_data.drop(columns=['Target'])
y_test = test_data['Target']

# Calculate prior probabilities
prior_probabilities = calculate_prior_probabilities(y_train)

# Calculate likelihoods
feature_likelihoods = calculate_likelihoods(X_train, y_train)

# Make predictions
predictions = predict(X_test, prior_probabilities, feature_likelihoods)

# Compute accuracy
accuracy = accuracy_score(y_test, predictions)
print(f"Accuracy: {accuracy:.2f}")

Output

Accuracy: 1.00
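
Note that the likelihoods above are raw relative frequencies, so a feature value never seen with a class drives that class's posterior to zero. A minimal sketch of add-one (Laplace) smoothing, assuming the same data structures as in the listing, would be:

# Hypothetical smoothed variant of calculate_likelihoods (add-one / Laplace smoothing)
def calculate_likelihoods_smoothed(X, y):
    feature_likelihoods = {}
    for feature in X.columns:
        feature_likelihoods[feature] = {}
        n_values = len(np.unique(X[feature]))  # number of distinct values of this feature
        for cls in np.unique(y):
            feature_likelihoods[feature][cls] = {}
            class_count = len(X[y == cls])
            for value in np.unique(X[feature]):
                count = len(X[(X[feature] == value) & (y == cls)])
                # Add-one smoothing: (count + 1) / (class_count + n_values)
                feature_likelihoods[feature][cls][value] = (count + 1) / (class_count + n_values)
    return feature_likelihoods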
Experiment 6

Objective: Assuming a set of documents that need to be classified, use the naïve Bayesian Classifier
model to perform this task.

Solution

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Load dataset
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
train_data = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
test_data = fetch_20newsgroups(subset='test', categories=categories, shuffle=True, random_state=42)

# Convert text data to numerical features (bag-of-words counts)
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_data.data)
X_test = vectorizer.transform(test_data.data)

# Train the Naive Bayes classifier
clf = MultinomialNB()
clf.fit(X_train, train_data.target)

# Make predictions
y_pred = clf.predict(X_test)

# Calculate accuracy, precision, and recall
accuracy = accuracy_score(test_data.target, y_pred)
precision = precision_score(test_data.target, y_pred, average='weighted')
recall = recall_score(test_data.target, y_pred, average='weighted')

# Print results
print("Accuracy:", accuracy)
print("Precision:", precision)
print("Recall:", recall)

Output

Accuracy: 0.9340878828229028
Precision: 0.9347763292524094
Recall: 0.9340878828229028
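
As a quick follow-up (not part of the original listing), the fitted vectorizer and classifier can be reused on new text; the document string below is only an illustration:

# Classify a new, unseen document with the trained model
new_doc = ["Doctors recommend regular exercise and a balanced diet."]
predicted = clf.predict(vectorizer.transform(new_doc))
print("Predicted category:", train_data.target_names[predicted[0]])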
Experiment 7

Objective: Write a program to construct a Bayesian network

Solution

import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import BayesianEstimator

# Define the dataset (you can replace this with your own dataset)
data = pd.DataFrame({
    'A': [0, 1, 1, 0, 0, 1, 1, 0, 1, 0],
    'B': [0, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    'C': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    'D': [0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
    'Class': ['No', 'Yes', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'Yes', 'No']
})

# Define the structure of the Bayesian network
model = BayesianNetwork([
    ('A', 'Class'),
    ('B', 'Class'),
    ('C', 'Class'),
    ('D', 'Class')
])

# Fit the model to the data using Bayesian estimation
model.fit(data, estimator=BayesianEstimator)

# Sample a test instance for classification
test_instance = pd.DataFrame({'A': [1], 'B': [0], 'C': [1], 'D': [0]})

# Predict the class using the Bayesian network
predicted_class = model.predict(test_instance)

# Print the predicted class
print("Predicted Class:", predicted_class['Class'].values[0])

Output

Predicted Class: Yes
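
Beyond the single hard prediction above, pgmpy can also return the full posterior over Class. A brief sketch, assuming the same fitted model as in the listing:

# Query P(Class | A=1, B=0, C=1, D=0) using variable elimination
from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
posterior = infer.query(variables=['Class'], evidence={'A': 1, 'B': 0, 'C': 1, 'D': 0})
print(posterior)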


Experiment 8

Objective: Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set
for clustering using k Means algorithm.

Solution

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt

# Load the dataset from CSV file
data = pd.read_csv('/content/dataset.csv')

# Preprocessing the data if needed (e.g., scaling)

# Convert DataFrame to a numpy array
X = data.values

# Number of clusters
k = 3

# Apply k-Means clustering
kmeans = KMeans(n_clusters=k)
kmeans.fit(X)
kmeans_labels = kmeans.labels_

# Apply EM algorithm (Gaussian Mixture Model)
em = GaussianMixture(n_components=k)
em.fit(X)
em_labels = em.predict(X)

# Visualize the results
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.scatter(X[:, 0], X[:, 1], c=kmeans_labels, cmap='viridis', edgecolor='k')
plt.title('k-Means Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.colorbar()

plt.subplot(1, 2, 2)
plt.scatter(X[:, 0], X[:, 1], c=em_labels, cmap='viridis', edgecolor='k')
plt.title('EM (GMM) Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.colorbar()

plt.tight_layout()
plt.show()
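
The preprocessing step above is left as a comment; a minimal sketch of feature scaling with scikit-learn's StandardScaler (an assumption, since the original dataset.csv is not shown) would be:

# Hypothetical preprocessing: standardize features before clustering
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X = scaler.fit_transform(data.values)  # zero mean, unit variance per column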

Output

(The output is a matplotlib figure with two scatter plots: k-Means cluster assignments on the left and EM/GMM cluster assignments on the right.)
Experiment 9

Objective: Write a program to implement k-Nearest Neighbour algorithm to classify the iris data
set. Print both correct and wrong predictions.

Solution

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load the iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the k-NN classifier
knn = KNeighborsClassifier(n_neighbors=3)

# Train the classifier
knn.fit(X_train, y_train)

# Make predictions on the test set
y_pred = knn.predict(X_test)

# Print correct and wrong predictions
correct_predictions = 0
wrong_predictions = 0
for i in range(len(y_test)):
    if y_pred[i] == y_test[i]:
        print(f"Correct Prediction: Predicted class - {y_pred[i]}, Actual class - {y_test[i]}")
        correct_predictions += 1
    else:
        print(f"Wrong Prediction: Predicted class - {y_pred[i]}, Actual class - {y_test[i]}")
        wrong_predictions += 1

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("\nAccuracy:", accuracy)
print("Number of correct predictions:", correct_predictions)
print("Number of wrong predictions:", wrong_predictions)
Output

Correct Prediction: Predicted class - 1, Actual class - 1


Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 1, Actual class - 1
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 2, Actual class - 2
Correct Prediction: Predicted class - 0, Actual class - 0
Correct Prediction: Predicted class - 0, Actual class - 0

Accuracy: 1.0
Number of correct predictions: 30
Number of wrong predictions: 0
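
The choice n_neighbors=3 above is fixed by hand; a short optional sketch (not part of the original listing) of selecting k by 5-fold cross-validation on the training split:

# Hypothetical model selection: choose k by cross-validation
from sklearn.model_selection import cross_val_score

for k in [1, 3, 5, 7, 9]:
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X_train, y_train, cv=5)
    print(f"k={k}: mean CV accuracy = {scores.mean():.3f}")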
Experiment 10

Objective: Implement the non-parametric Locally Weighted Regression algorithm in order to fit
data points. Select appropriate data set for your experiment and draw graphs
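
Locally Weighted Regression solves a separate weighted least-squares problem for every query point x_q, with Gaussian weights controlled by the bandwidth \tau:

w_i = \exp\left(-\frac{(x_i - x_q)^2}{2\tau^2}\right), \qquad \theta = (X^{T} W X)^{-1} X^{T} W y, \qquad \hat{y}(x_q) = [1,\ x_q]\,\theta

These are exactly the quantities computed inside locally_weighted_regression() below.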

Solution

import numpy as np
import matplotlib.pyplot as plt

def locally_weighted_regression(x, y, query_point, tau):
    x = np.array(x)
    y = np.array(y)
    query_point = np.array(query_point)

    m = x.shape[0]
    # Gaussian weights centred on the query point; tau controls the bandwidth
    weights = np.exp(-((x - query_point) ** 2) / (2 * tau * tau))
    W = np.diag(weights)

    # Design matrix with a bias column
    X = np.ones((m, 2))
    X[:, 1] = x

    # Weighted normal equations: theta = (X^T W X)^(-1) X^T W y
    XTWX = np.dot(X.T, np.dot(W, X))
    theta = np.dot(np.linalg.inv(XTWX), np.dot(X.T, np.dot(W, y)))

    query_point_vec = np.array([1, query_point])
    return np.dot(query_point_vec, theta)

# Generate noisy sine data
np.random.seed(0)
X = np.linspace(0, 10, 100)
y = np.sin(X) + np.random.normal(0, 0.1, 100)

# Fit the data using LWR at a grid of query points
query_points = np.linspace(0, 10, 100)
tau = 0.1  # Bandwidth parameter

predictions = [locally_weighted_regression(X, y, query_point, tau) for query_point in query_points]

# Plot the results
plt.figure(figsize=(10, 6))
plt.scatter(X, y, color='blue', label='Original Data')
plt.plot(query_points, predictions, color='red', label='LWR Fit')
plt.title('Locally Weighted Regression')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()
Output

(The output is a scatter plot of the noisy sine data with the Locally Weighted Regression fit overlaid as a red curve.)
