Machine Learning
The document contains a series of programming exercises that demonstrate various machine learning techniques using Python. Each program utilizes different datasets and algorithms, including binary classification with make_circles and make_moons, clustering with make_blobs, decision trees with ID3, Q-learning, neural networks, and locally weighted regression. The examples include code snippets and visualizations to illustrate the results of each implementation.


Program – 1

▪ Write a program using 2D binary classification data generated by
make_circles(), which produces points separated by a circular decision boundary.

# Import necessary libraries

from sklearn.datasets import make_circles

import matplotlib.pyplot as plt

# Generate 2d classification dataset

X, y = make_circles(n_samples=200, shuffle=True,

noise=0.1, random_state=42)

# Plot the generated datasets

plt.scatter(X[:, 0], X[:, 1], c=y)

plt.show()

Output: scatter plot of two concentric circles of points, colored by class.
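As an optional extension (not part of the original exercise), a kernel classifier can confirm that the two rings are separable by a circular boundary. A minimal sketch, continuing from the X and y generated above, using an RBF-kernel SVM:

from sklearn.svm import SVC

# The RBF kernel can model the circular decision boundary between the rings
clf = SVC(kernel='rbf')
clf.fit(X, y)
print('Training accuracy:', clf.score(X, y))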
Program – 2
▪ Write a program in which two interlocking half circles represent the 2D binary
classification data produced by the make_moons() function.

#import the necessary libraries

from sklearn.datasets import make_moons

import matplotlib.pyplot as plt

# generate 2d classification dataset

X, y = make_moons(n_samples=500, shuffle=True,

noise=0.15, random_state=42)

# Plot the generated datasets

plt.scatter(X[:, 0], X[:, 1], c=y)

plt.show()

Output: scatter plot of two interleaving half-moon clusters, colored by class.
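As an optional extension (not part of the original exercise), a sketch showing that a non-linear classifier handles the interleaving moons, continuing from the X and y generated above with k-nearest neighbors:

from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hold out 30% of the points for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Local neighborhoods follow the curved moon shapes
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print('Test accuracy:', knn.score(X_test, y_test))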
Program – 3

▪ Write a program to generate data using the make_blobs() function, which
produces Gaussian blobs that can be utilized for clustering.

#import the necessary libraries

from sklearn.datasets import make_blobs

import matplotlib.pyplot as plt

# Generate 2d classification dataset

X, y = make_blobs(n_samples=500, centers=3, n_features=2, random_state=23)

# Plot the generated datasets

plt.scatter(X[:, 0], X[:, 1], c=y)

plt.show()

Output: scatter plot of three well-separated Gaussian blobs, colored by cluster.
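Since these blobs are meant for clustering, an optional follow-up sketch (not part of the original exercise) applies k-means with three clusters to the X generated above:

from sklearn.cluster import KMeans

# Fit k-means and color each point by its predicted cluster
kmeans = KMeans(n_clusters=3, n_init=10, random_state=23)
kmeans.fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], marker='x', color='red')  # cluster centers
plt.show()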
Program – 4

▪ Write a program to generate data with the make_classification() function.
The n_informative, n_redundant and n_classes parameters must be balanced
(scikit-learn requires n_classes * n_clusters_per_class <= 2**n_informative);
the informative, redundant and repeated features occupy the columns
X[:, :n_informative + n_redundant + n_repeated].

#import the necessary libraries


from sklearn.datasets import make_classification
import matplotlib.pyplot as plt
# generate 2d classification dataset
X, y = make_classification(n_samples=100,
                           n_features=2,
                           n_redundant=0,
                           n_informative=2,
                           n_repeated=0,
                           n_classes=3,
                           n_clusters_per_class=1)
# Plot the generated datasets
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()

Output: scatter plot of 100 points in three classes, colored by label.
Program – 5
▪ Write a program for the implementation of Q-Learning (Reinforcement
Learning).

import numpy as np
import matplotlib.pyplot as pl  # pylab is deprecated; pyplot provides show()
import networkx as nx

# Edges of the environment graph: each node is a state, each edge a possible move
edges = [(0, 1), (1, 5), (5, 6), (5, 4), (1, 2),
         (1, 3), (9, 10), (2, 4), (0, 6), (6, 7),
         (8, 9), (7, 8), (1, 7), (3, 9)]

goal = 10  # the target state the agent must learn to reach

# Visualize the state graph that the agent will navigate
G = nx.Graph()
G.add_edges_from(edges)
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos)
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos)
pl.show()

Output: drawing of the 11-node environment graph with labeled nodes.
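The program above only builds and draws the environment graph; the Q-learning update itself is not shown. Below is a minimal sketch of one possible completion (an assumption, not the original author's code), continuing from the edges and goal defined above: reaching node 10 earns a reward of 100, and the table is updated with the standard rule Q(s, a) = R(s, a) + gamma * max(Q(s', :)).

# Reward matrix: -1 marks a missing edge, 0 an ordinary move, 100 a move into the goal
n_nodes = 11
R = -1 * np.ones((n_nodes, n_nodes))
for a, b in edges:
    R[a, b] = 100 if b == goal else 0
    R[b, a] = 100 if a == goal else 0
R[goal, goal] = 100  # staying at the goal keeps earning the reward

Q = np.zeros((n_nodes, n_nodes))
gamma = 0.8  # discount factor

# One random (state, action) update per iteration; converges like value iteration
for _ in range(1000):
    state = np.random.randint(n_nodes)
    actions = np.where(R[state] >= 0)[0]  # moves allowed from this state
    action = np.random.choice(actions)
    Q[state, action] = R[state, action] + gamma * Q[action].max()

# Follow the greedy policy from node 0 (step-capped in case training has not converged)
state, path = 0, [0]
for _ in range(20):
    if state == goal:
        break
    state = int(np.argmax(Q[state]))
    path.append(state)
print('Learned path:', path)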
Program – 6
▪ Write a program to demonstrate the working of the decision tree
based ID3 algorithm. Use an appropriate data set for building the
decision tree and apply this knowledge to classify a new sample.

# Import necessary libraries

from sklearn import datasets

from sklearn.model_selection import train_test_split

from sklearn.tree import DecisionTreeClassifier

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Load a dataset (for example, using the iris dataset)

iris = datasets.load_iris()

X = iris.data

y = iris.target

# Split the data into training and testing sets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Create a decision tree classifier that splits on information gain, as in ID3
# (scikit-learn implements CART; criterion="entropy" reproduces ID3-style splits)

clf = DecisionTreeClassifier(criterion="entropy")

# Train the classifier using the training data

clf.fit(X_train, y_train)

# Make predictions on the testing data

y_pred = clf.predict(X_test)

# Evaluate the performance of the classifier

print("Accuracy:", accuracy_score(y_test, y_pred))

print("\nConfusion Matrix:\n", confusion_matrix(y_test, y_pred))

print("\nClassification Report:\n", classification_report(y_test, y_pred))

# Now, let's classify a new sample

new_sample = [[5.1, 3.5, 1.4, 0.2]] # example values for the new sample

predicted_class = clf.predict(new_sample)

print("\nPredicted class for new sample:", predicted_class)

Output: accuracy score, confusion matrix, classification report, and the predicted class for the new sample.
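To inspect the splits the tree actually learned, an optional addition using scikit-learn's export_text helper:

from sklearn.tree import export_text

# Print the learned tree as indented if/else rules over the iris features
print(export_text(clf, feature_names=list(iris.feature_names)))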
Program – 7
▪ Build an Artificial Neural Network by implementing the Backpropagation
algorithm and test the same using appropriate data sets.

import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.learning_rate = 0.1
        # Randomly initialize the two weight matrices
        self.weights_input_hidden = np.random.rand(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.rand(self.hidden_size, self.output_size)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already a sigmoid activation, so the derivative is x * (1 - x)
        return x * (1 - x)

    def forward_propagation(self, input_data):
        self.hidden_input = np.dot(input_data, self.weights_input_hidden)
        self.hidden_output = self.sigmoid(self.hidden_input)
        # Apply the sigmoid at the output layer too, matching the
        # sigmoid_derivative used during backpropagation
        self.output = self.sigmoid(np.dot(self.hidden_output, self.weights_hidden_output))
        return self.output

    def backward_propagation(self, input_data, target):
        error = target - self.output
        d_output = error * self.sigmoid_derivative(self.output)
        error_hidden = d_output.dot(self.weights_hidden_output.T)
        d_hidden = error_hidden * self.sigmoid_derivative(self.hidden_output)
        # Gradient-descent updates for both weight matrices
        self.weights_hidden_output += self.hidden_output.T.dot(d_output) * self.learning_rate
        self.weights_input_hidden += input_data.T.dot(d_hidden) * self.learning_rate

    def train(self, input_data, target, epochs):
        for _ in range(epochs):
            self.forward_propagation(input_data)
            self.backward_propagation(input_data, target)

    def predict(self, input_data):
        return self.forward_propagation(input_data)

# Example usage: learn the XOR function
input_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
target = np.array([[0], [1], [1], [0]])

# Create a neural network with 2 input nodes, 2 hidden nodes, and 1 output node
nn = NeuralNetwork(2, 2, 1)

# Train the neural network
nn.train(input_data, target, epochs=10000)

# Test the neural network
print("Predictions:")
for i in range(len(input_data)):
    print(f"Input: {input_data[i]}, Predicted Output: {nn.predict(input_data[i])}")

Output: the predicted value for each XOR input after training.
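Note that the network has no bias terms and starts from random weights, so training on XOR can occasionally stall in a poor minimum. Two simple, optional mitigations are seeding the generator and widening the hidden layer:

np.random.seed(42)  # make the random weight initialization reproducible
nn = NeuralNetwork(2, 4, 1)  # a wider hidden layer learns XOR more reliably
nn.train(input_data, target, epochs=10000)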
Program – 8
▪ Implement the non-parametric Locally Weighted Regression algorithm
to fit data points. Select an appropriate data set for your experiment
and draw graphs.
import numpy as np

import matplotlib.pyplot as plt

# Generate a sample dataset

np.random.seed(0)

X = np.linspace(0, 10, 100)

y = 2 * X + 1 + np.random.normal(scale=2, size=100)

# Locally Weighted Regression: fit a separate weighted linear model at each query point
def lowess(x, y, tau=0.1):
    n = len(x)
    y_hat = np.zeros(n)
    for i in range(n):
        # Gaussian kernel weights centered on the query point x[i]
        weights = np.exp(-0.5 * ((x - x[i]) / tau) ** 2)
        # Weighted least-squares normal equations for a local line theta0 + theta1 * x
        b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
        A = np.array([[np.sum(weights), np.sum(weights * x)],
                      [np.sum(weights * x), np.sum(weights * x * x)]])
        theta = np.linalg.solve(A, b)
        y_hat[i] = theta[0] + theta[1] * x[i]
    return y_hat

# Fit the data using Locally Weighted Regression

tau = 0.3 # Bandwidth parameter

y_lowess = lowess(X, y, tau)

# Plot the original data and the fitted curve

plt.scatter(X, y, label='Original data', color='b')

plt.plot(X, y_lowess, label=f'LOWESS (tau={tau})', color='r')

plt.legend()

plt.show()

Output: scatter of the noisy data with the red LOWESS curve following the underlying linear trend.
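The bandwidth tau controls the smoothing: small values track the noise closely, while large values approach a single global line. An optional sketch (not part of the original exercise) comparing several bandwidths on the same data:

# Re-fit and plot the curve for three different bandwidths
for t in (0.1, 0.5, 2.0):
    plt.plot(X, lowess(X, y, t), label=f'tau={t}')
plt.scatter(X, y, s=10, color='gray', label='data')
plt.legend()
plt.show()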
