Machine Learning
Program – 1
▪ Write a program for two concentric circles representing the 2D binary
classification data produced by the make_circles() function.
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles

X, y = make_circles(n_samples=200, shuffle=True,
                    noise=0.1, random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
Output :
Program – 2
▪ Write a program for two interlocking half circles representing the 2D binary
classification data produced by the make_moons() function.
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=500, shuffle=True,
                  noise=0.15, random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
Output :
Program – 3
plt.show()
Output :
Program – 4
Output :
Program – 5
▪ Write a program for the implementation of Q-Learning (Reinforcement
Learning).
import numpy as np
import pylab as pl
import networkx as nx

# Edges of the state-transition graph; each node is a state
edges = [(0, 1), (1, 5), (5, 6), (5, 4), (1, 2),
         (1, 3), (9, 10), (2, 4), (0, 6), (6, 7),
         (8, 9), (7, 8), (1, 7), (3, 9)]
goal = 10  # the state the agent should learn to reach

# Build and draw the graph of states
G = nx.Graph()
G.add_edges_from(edges)
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos)
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos)
pl.show()
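The listing above only constructs and draws the state graph; the Q-table update
itself is not included in this extract. The following is a minimal sketch of one
common formulation (assumed here, not taken from the original listing): rewards
are 0 for ordinary edges, 100 for moves into the goal state, and the table is
updated with Q[s, a] = R[s, a] + gamma * max(Q[a]).

# Sketch of the Q-learning loop (assumed formulation, not from the original listing)
MATRIX_SIZE = 11  # states 0..10
gamma = 0.8       # discount factor

# Reward matrix: -1 marks "no edge", 0 an ordinary move, 100 a move into the goal
R = -1 * np.ones((MATRIX_SIZE, MATRIX_SIZE))
for a, b in edges:
    R[a, b] = 100 if b == goal else 0
    R[b, a] = 100 if a == goal else 0
R[goal, goal] = 100

Q = np.zeros((MATRIX_SIZE, MATRIX_SIZE))
for _ in range(1000):
    state = np.random.randint(0, MATRIX_SIZE)   # start from a random state
    actions = np.where(R[state] >= 0)[0]        # moves allowed from this state
    action = np.random.choice(actions)          # explore at random
    Q[state, action] = R[state, action] + gamma * Q[action].max()

print(Q / Q.max() * 100)  # normalized Q-table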
Output :
Program – 6
▪ Write a program to demonstrate the working of the decision tree
based ID3 algorithm. Use an appropriate data set for building the
decision tree and apply this knowledge to classify a new sample.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X = iris.data
y = iris.target

# Hold out a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Create the decision tree classifier using the ID3 criterion (information gain)
clf = DecisionTreeClassifier(criterion="entropy")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))

# Now, let's classify a new sample
new_sample = [[5.1, 3.5, 1.4, 0.2]]  # example values for the new sample
predicted_class = clf.predict(new_sample)
print("Predicted class:", iris.target_names[predicted_class][0])
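To inspect what the classifier learned, a small sketch (using
sklearn.tree.export_text on the clf fitted above) prints the tree's decision
rules in text form:

from sklearn.tree import export_text

# Print the learned decision rules of the fitted tree
print(export_text(clf, feature_names=list(iris.feature_names)))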
Output :
Program – 7
▪ Build an Artificial Neural Network by implementing the Backpropagation
algorithm and test the same using appropriate data sets.
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.learning_rate = 0.1
        # Random weight initialization
        self.weights_input_hidden = np.random.rand(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.rand(self.hidden_size, self.output_size)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return x * (1 - x)

    def forward_propagation(self, input_data):
        self.hidden_input = np.dot(input_data, self.weights_input_hidden)
        self.hidden_output = self.sigmoid(self.hidden_input)
        # Linear output layer (no activation on the output, as in the extract)
        self.output = np.dot(self.hidden_output, self.weights_hidden_output)
        return self.output

    def backward_propagation(self, input_data, target):
        # With a linear output layer, the output delta is just the error
        error = target - self.output
        d_output = error
        error_hidden = d_output.dot(self.weights_hidden_output.T)
        d_hidden = error_hidden * self.sigmoid_derivative(self.hidden_output)
        self.weights_hidden_output += self.hidden_output.T.dot(d_output) * self.learning_rate
        self.weights_input_hidden += input_data.T.dot(d_hidden) * self.learning_rate

    def train(self, input_data, target, epochs):
        for _ in range(epochs):
            output = self.forward_propagation(input_data)
            self.backward_propagation(input_data, target)

    def predict(self, input_data):
        return self.forward_propagation(input_data)

# Example usage
# Create a neural network with 2 input nodes, 2 hidden nodes, and 1 output node
nn = NeuralNetwork(2, 2, 1)
input_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # training data (assumed): XOR inputs
target = np.array([[0], [1], [1], [0]])                  # XOR targets (assumed)
nn.train(input_data, target, epochs=10000)
print("Predictions:")
for i in range(len(input_data)):
    print(f"Input: {input_data[i]}, Predicted: {nn.predict(input_data[i])}")
Output :
Program – 8
▪ Implement the non-parametric Locally Weighted Regression algorithm in
order to fit data points. Select an appropriate data set for your
experiment and draw graphs.
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
X = np.linspace(0, 10, 100)  # sample inputs (reconstructed)
y = 2 * X + 1 + np.random.normal(scale=2, size=100)

def locally_weighted_regression(x, y, tau):
    n = len(x)
    y_hat = np.zeros(n)
    X_design = np.column_stack((np.ones(n), x))
    for i in range(n):
        w = np.exp(-(x - x[i]) ** 2 / (2 * tau ** 2))  # Gaussian weights around x[i]
        A = X_design.T @ (w[:, None] * X_design)       # weighted normal equations
        b = X_design.T @ (w * y)
        theta = np.linalg.solve(A, b)
        y_hat[i] = theta[0] + theta[1] * x[i]
    return y_hat

y_hat = locally_weighted_regression(X, y, tau=0.5)
plt.scatter(X, y, s=10, label="data")
plt.plot(X, y_hat, color="red", label="LWR fit")
plt.legend()
plt.show()
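The bandwidth parameter tau controls how local each fit is: small values track
the noise, while large values approach an ordinary global line. A quick
comparison, reusing the locally_weighted_regression function above with a few
illustrative tau values:

plt.scatter(X, y, s=10, label="data")
for tau in (0.2, 1.0, 5.0):  # illustrative bandwidths
    plt.plot(X, locally_weighted_regression(X, y, tau), label=f"tau={tau}")
plt.legend()
plt.show()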
Output :