AIM:
To implement Breadth First Search (BFS) algorithm using python.
THEORY:
Breadth-First Search (BFS) is an algorithm used for traversing graphs or trees. Traversing means visiting each
node of the graph. BFS explores all the vertices of a graph or a tree level by level, using a queue to hold the
vertices that have been discovered but not yet explored. BFS in python can be implemented by using data
structures like a dictionary and lists. Breadth-First Search in a tree and in a graph is almost the same. The only
difference is that the graph may contain cycles, so we must mark visited nodes to avoid traversing the same
node again.
ALGORITHM:
1. Pick any node, visit the adjacent unvisited vertex, mark it as visited, display it, and insert it in a queue.
2. If there are no remaining adjacent vertices left, remove the first vertex from the queue.
3. Repeat steps 1 and 2 until the queue is empty.
EXAMPLE:
Let us use an undirected graph with 5 vertices.
Starting from the vertex P, the BFS algorithm puts it in the Visited list and places all its adjacent vertices in
the queue.
Next, we visit the element at the front of the queue, i.e. Q, and visit its adjacent nodes. Since P has already
been visited, we visit R instead.
Vertex R has an unvisited adjacent vertex T, so we add T to the rear of the queue and visit S, which is at the
front of the queue.
Now only T remains in the queue, since the only adjacent node of S, i.e. P, has already been visited. We visit T.
Since the queue is empty, we have completed the traversal of the graph.
PROGRAM:
graph = {
    'A' : ['B','C'],
    'B' : ['D', 'E'],
    'C' : ['F'],
    'D' : [],
    'E' : ['F'],
    'F' : []
}
visited = []   # List to keep track of visited nodes.
queue = []     # Initialize a queue
def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)   # dequeue the vertex at the front of the queue
        print(s, end=" ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
# Driver Code
bfs(visited, graph, 'A')
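Note: list.pop(0) shifts every remaining element and is O(n); for larger graphs the same traversal can use
collections.deque, whose popleft() is O(1). A minimal variant using the same graph and producing the same
output:
from collections import deque
def bfs_deque(graph, start):
    visited = [start]
    queue = deque([start])   # deque gives O(1) pops from the front
    while queue:
        s = queue.popleft()
        print(s, end=" ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
bfs_deque(graph, 'A')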
OUTPUT:
A B C D E F
RESULT:
Thus the program to implement BFS algorithm using python is executed and verified successfully.
AIM:
To implement Depth First Search (DFS) algorithm using python.
THEORY:
Depth-First Search (DFS) is an algorithm for traversing graphs or trees. DFS keeps exploring along the
current path until all the unvisited nodes are traversed, after which subsequent paths are going to be selected.
ALGORITHM:
1. Pick any node. If it is unvisited, mark it as visited and recur on all its adjacent nodes.
2. Repeat until all the nodes are visited.
EXAMPLE:
From the vertex P, the DFS algorithm starts by putting it in the Visited list and placing all its adjacent vertices
on the stack.
Next, we visit the element at the top of the stack, i.e. Q, and go to its adjacent nodes. Since P has already been
visited, we visit R instead.
Vertex R has an unvisited adjacent vertex T, so we add T to the top of the stack and visit it.
At last, we visit the last element S; it does not have any unvisited adjacent nodes, so we have completed the
Depth First Traversal of the graph.
PROGRAM:
graph = {
    'A' : ['B','C'],
    'B' : ['D', 'E'],
    'C' : ['F'],
    'D' : [],
    'E' : ['F'],
    'F' : []
}
visited = set()   # Set to keep track of visited nodes.
def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)
# Driver Code
dfs(visited, graph, 'A')
OUTPUT:
A
B
D
E
F
C
RESULT:
Thus the program to implement Depth First Search algorithm using python is executed and verified
successfully.
AIM:
To implement A* search algorithm using python.
ALGORITHM:
1. Initialize the open list with the starting node and initialize the closed list as empty.
2. For all the neighbouring nodes, find the least cost f node.
3. Switch it to the closed list.
▪ For the nodes adjacent to the current node:
▪ If the node is not reachable, ignore it. Else
▪ If the node is not on the open list, move it to the open list and calculate f, g, h.
▪ If the node is on the open list, check if the path it offers is less than the current path and change to it if it
does so.
4. Stop working when
▪ You find the destination
▪ You cannot find the destination going through all possible points.
PROGRAM:
def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])
    closed_set = set()
    g = {start_node: 0}                 # distance from the starting node
    parents = {start_node: start_node}  # parent map for path reconstruction
    while len(open_set) > 0:
        n = None
        # choose the open node with the lowest f() = g() + heuristic()
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == None:
            print('Path does not exist!')
            return None
        # if the goal is reached, reconstruct the path by following parents
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        for (m, weight) in get_neighbors(n):
            if m not in open_set and m not in closed_set:
                open_set.add(m)
                parents[m] = n
                g[m] = g[n] + weight
            elif g[m] > g[n] + weight:
                # a cheaper path to m has been found
                g[m] = g[n] + weight
                parents[m] = n
                if m in closed_set:
                    closed_set.remove(m)
                    open_set.add(m)
        # all neighbours inspected: move n from the open list to the closed list
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None
def get_neighbors(v):
    return Graph_nodes.get(v, [])
# Weighted graph (assumed; reconstructed to match the heuristic table below)
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)],
    'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)],
    'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)],
    'H': [('I', 2)],
    'I': [('E', 5), ('J', 3)],
}
def heuristic(n):
    H_dist = {
        'A': 10, 'B': 8, 'C': 5, 'D': 7, 'E': 3,
        'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0
    }
    return H_dist[n]
aStarAlgo('A', 'J')
OUTPUT:
RESULT:
Thus the program to implement A* algorithm using python is executed and verified successfully.
AIM:
To implement SMA* (memory-bounded A*) algorithm using python.
ALGORITHM:
1. Initialize the data structures: the g-score, f-score, and parent maps for the start state, an empty closed_set,
and an open_set priority queue seeded with (f_scores[start_state], start_state).
2. While open_set is not empty and the number of expansions is below max_expansions:
a. Pop the state with the lowest f-score from open_set. This will be the current state.
b. Check if the goal has been reached by calling goal_test(current_state). If so, return the final path by
calling construct_path(parents, current_state).
c. Add the current state to closed_set and increment the expansion count.
d. Expand the current state by generating its successor states using the successors function. For each
successor:
i. Calculate the tentative g-score by adding the successor cost to the current state's g-score: tentative_g_score
= g_scores[current_state] + successor_cost.
ii. Check if the successor state is already in closed_set. If so, skip this successor state and move on to the next
one.
iii. If the successor state is not in g_scores, or if the tentative g-score is less than the current g-score for the
successor state, update its g_scores, f_scores, and parents as follows:
- `g_scores[successor_state] = tentative_g_score`
- `parents[successor_state] = current_state`
If the successor state is not already in `open_set`, add it to `open_set` with `(f_scores[successor_state],
successor_state)`.
3. To construct the final path, start with the goal state and repeatedly follow its parent pointers until the start
state is reached. Return the path as a list of states in order from start to goal.
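Note: the f-score used below is inflated by the suboptimality factor, as in weighted A*:
f(n) = g(n) + suboptimality × h(n)
With suboptimality ≥ 1 and an admissible heuristic, any path the search returns costs at most suboptimality
times the optimal cost; this trades solution quality for fewer expansions, which suits the bounded expansion
budget used here.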
PROGRAM:
import heapq
def sma_star(start_state, goal_test, successors, heuristic, suboptimality, max_expansions):
    # Initialize data structures
    f_scores = {start_state: heuristic(start_state)}
    g_scores = {start_state: 0}
    parents = {start_state: None}
    closed_set = set()
    open_set = []
    heapq.heappush(open_set, (f_scores[start_state], start_state))
    expansions = 0
    while open_set and expansions < max_expansions:
        # Pop the state with the lowest f-score from the open set
        current_state = heapq.heappop(open_set)[1]
        # Check if the goal has been reached
        if goal_test(current_state):
            return construct_path(parents, current_state)
        closed_set.add(current_state)
        expansions += 1
        # Expand the current state
        for successor in successors(current_state):
            successor_state, successor_cost = successor
            # Calculate tentative g-score
            tentative_g_score = g_scores[current_state] + successor_cost
            # Check if the successor is already closed
            if successor_state in closed_set:
                continue
            # Update the successor if it is new or the tentative path is cheaper
            if successor_state not in g_scores or tentative_g_score < g_scores[successor_state]:
                parents[successor_state] = current_state
                g_scores[successor_state] = tentative_g_score
                f_scores[successor_state] = g_scores[successor_state] + suboptimality * heuristic(successor_state)
                heapq.heappush(open_set, (f_scores[successor_state], successor_state))
    return None  # expansion budget exhausted or open set empty
def construct_path(parents, state):
    # Follow parent pointers back from the goal to the start
    path = []
    while state is not None:
        path.append(state)
        state = parents[state]
    path.reverse()
    return path
def goal_test(state):
    # the goal is state 5 (reconstructed from the heuristic abs(state - 5))
    return state == 5
def successors(state):
    if state == 1:
        return [(2, 1), (3, 2)]
    elif state == 2:
        return [(4, 3), (5, 4)]
    elif state == 3:
        return [(4, 1), (5, 2)]
    else:
        return []
def heuristic(state):
    return abs(state - 5)
start_state = 1
suboptimality = 2
max_expansions = 10
path = sma_star(start_state, goal_test, successors, heuristic, suboptimality, max_expansions)
print(path)
OUTPUT:
[1, 3, 5]
RESULT:
Thus the program to implement SMA A* algorithm using python is executed and verified successfully.
AIM:
To implement naïve bayes models using python.
ALGORITHM:
1. Load the dataset: Load the dataset that you want to classify using Naive Bayes algorithm. The dataset
should have labeled data points with attributes and their corresponding classes.
2.Split the dataset: Split the dataset into training and testing sets. Use the training set to train the Naive Bayes
classifier and the testing set to evaluate it.
3.Preprocess the data: Preprocess the data by removing any irrelevant or noisy attributes, cleaning the text
data (if applicable), and converting the data into numerical form.
4.Compute the prior probabilities: Calculate the prior probabilities of each class by dividing the number of
training instances in each class by the total number of training instances.
5.Compute the likelihood probabilities: Compute the likelihood probabilities of each attribute given each class
by calculating the conditional probabilities of each attribute given each class.
6.Apply Bayes theorem: Apply Bayes theorem to compute the posterior probabilities of each class given the
attributes of a test instance. Choose the class with the highest posterior probability as the predicted class for
the test instance.
7.Evaluate the model: Evaluate the performance of the Naive Bayes classifier on the testing set using metrics
such as accuracy, precision, recall, and F1 score.
8.Tune the hyperparameters: Tune the hyperparameters of the Naive Bayes classifier to improve its
performance on the testing set.
9.Deploy the model: Deploy the Naive Bayes classifier in a real-world application to classify new instances.
PROGRAM:
import numpy as np
class NaiveBayes:
    def __init__(self):
        self.prior = None
        self.likelihood = None
        self.classes = None
    def fit(self, X, y):
        # learn classes, priors, and per-feature likelihoods from the training data
        self.classes = np.unique(y)
        n_classes = len(self.classes)
        n_features = X.shape[1]
        self.prior = np.zeros(n_classes)
        self.likelihood = np.zeros((n_classes, n_features))
        for i, c in enumerate(self.classes):
            X_c = X[y == c]
            self.prior[i] = X_c.shape[0] / X.shape[0]
            self.likelihood[i, :] = ((X_c.sum(axis=0)) / X_c.sum()).flatten()
    def predict(self, X):
        y_pred = []
        for x in X:
            posterior = []
            for i, c in enumerate(self.classes):
                likelihood = np.prod(self.likelihood[i, :] ** x)
                posterior.append(self.prior[i] * likelihood)
            y_pred.append(self.classes[np.argmax(posterior)])
        return y_pred
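The driver code that produced the accuracy below was not captured here. A minimal usage sketch (the
count-style feature matrix and labels are hypothetical, not the original data):
# hypothetical count features for three classes
X_train = np.array([[2, 1, 0], [1, 2, 0], [0, 1, 2], [0, 2, 1], [1, 0, 2], [2, 0, 1]])
y_train = np.array([0, 0, 1, 1, 2, 2])
nb = NaiveBayes()
nb.fit(X_train, y_train)
X_test = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
y_test = np.array([0, 1, 2])
y_pred = nb.predict(X_test)
print('Accuracy:', np.mean(np.array(y_pred) == y_test))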
OUTPUT:
('Accuracy:', 0.3333333333333333)
RESULT:
Thus the program to implement naïve bayes models using python is executed and verified successfully.
AIM:
To implement Bayesian networks using python.
ALGORITHM:
1. Import the necessary libraries and classes from pgmpy, which is a Python library for working with
probabilistic graphical models.
2. Define the structure of our Bayesian network by creating a new BayesianNetwork object and specifying the
edges between nodes using a list of tuples. In this case, we have two nodes B and E that each point to a third
node A.
3. Create conditional probability tables (CPDs) for each node using the TabularCPD class. The first argument
is the name of the node, the second argument is the number of states it can be in (in this case, 2 for binary
variables), and the third argument is the actual table of probabilities.
4. Add the CPDs to the model using the add_cpds() method.
5. Validate the network structure and CPDs using the check_model() method.
6. Create an inference object over the model.
7. Use the VariableElimination class from pgmpy.inference to infer the posterior probability distribution of A
given evidence of B=1 and E=0.
PROGRAM:
from pgmpy.inference import VariableElimination
inference = VariableElimination(model)
posterior_a = inference.query(['A'], evidence={'B': 1, 'E': 0})
print(posterior_a)
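Only the inference step survives in the listing above; a minimal sketch of the network construction it assumes
(the CPD probability values here are illustrative placeholders, not the original ones):
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
# Two nodes B and E each point to a third node A
model = BayesianNetwork([('B', 'A'), ('E', 'A')])
# Binary CPDs; the probability values are illustrative placeholders
cpd_b = TabularCPD('B', 2, [[0.99], [0.01]])
cpd_e = TabularCPD('E', 2, [[0.98], [0.02]])
cpd_a = TabularCPD('A', 2,
                   [[0.999, 0.71, 0.06, 0.05],   # P(A=0 | B, E)
                    [0.001, 0.29, 0.94, 0.95]],  # P(A=1 | B, E)
                   evidence=['B', 'E'], evidence_card=[2, 2])
model.add_cpds(cpd_b, cpd_e, cpd_a)
assert model.check_model()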
OUTPUT:
RESULT:
Thus the program to implement Bayesian networks using python is executed and verified successfully.
AIM:
To implement regression models using python.
ALGORITHM:
1. Import the required libraries (numpy, pandas, matplotlib, and scikit-learn).
2. The program generates sample data for three features (feature_1, feature_2, and feature_3) and a target
variable using numpy.random.rand() and adds some random noise to the target variable.
3. A pandas DataFrame is created using the generated sample data, and the DataFrame is saved to a CSV file
using the to_csv() function.
4. The saved CSV file is read back into a DataFrame using the read_csv() function.
5. The features and target variables are defined by extracting the appropriate columns from the DataFrame
and assigning them to X and y, respectively.
6. A simple linear regression model is created and fitted using feature_1 alone.
7. A multiple regression model is created in the same way, but this time including all three features.
8. A polynomial regression model is created by transforming the features with PolynomialFeatures and fitting
a linear regression on the transformed features.
9. The data and three regression models are plotted using matplotlib.pyplot. The scatter plot shows the
relationship between feature_1 and target, and the regression lines represent the predictions of the linear,
multiple, and polynomial regression models. The plot also includes a legend to differentiate between the
different regression models.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
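The simple (single-feature) linear regression listing referred to in step 6 was not captured here; a minimal
sketch with illustrative data values:
# Simple linear regression on one feature (data values are illustrative)
x = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
y = np.array([2, 4, 6, 8, 10])   # a perfect line y = 2x
model = LinearRegression()
model.fit(x, y)
print(model.predict(np.array([[6]])))   # expected output: [12.]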
MULTIPLE LINEAR REGRESSION:
import numpy as np
from sklearn.linear_model import LinearRegression
# Generate some random data
x = np.array([[1, 2, 3], [2, 4, 6], [3, 6, 9], [4, 8, 12]])
y = np.array([6, 12, 18, 24])
# Create a linear regression object
model = LinearRegression()
# Fit the model to the data
model.fit(x, y)
# Predict the output for new inputs
x_new = np.array([[5, 10, 15]])
y_new = model.predict(x_new)
print(y_new) # Output: [30.]
# Print the coefficients and intercept
print('Coefficients:', model.coef_)
print('Intercept:', model.intercept_)
OUTPUT:
[30.]
Coefficients: [0.42857143 0.85714286 1.28571429]
Intercept: 7.105427357601002e-15
POLYNOMIAL REGRESSION:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
# Generate some random data
x = np.array([1, 2, 3, 4, 5]).reshape((-1, 1))
y = np.array([1, 3, 8, 13, 20])
# Create a polynomial features object
poly = PolynomialFeatures(degree=2)
x_poly = poly.fit_transform(x)
# Create a linear regression object
REGISTER NO:420422104078
model = LinearRegression()
# Fit the model to the data
model.fit(x_poly, y)
# Predict the output for new inputs
x_new = np.array([[6]])
x_new_poly = poly.transform(x_new)
y_new = model.predict(x_new_poly)
print(y_new) # predicted value at x = 6 (approximately [28.4])
OUTPUT:
RESULT:
Thus the program to implement regression models using python is executed and verified successfully.
AIM:
To implement decision trees and random forests using python.
ALGORITHM:
1. Import the required libraries.
2. Load the iris dataset using load_iris from sklearn.datasets.
3. Convert the dataset into a pandas dataframe and split into features (X) and target (y).
4. Split the data into training and testing sets using train_test_split from sklearn.model_selection.
5. Create a decision tree classifier object with max_depth=3 and random_state=42 using
DecisionTreeClassifier from sklearn.tree.
6. Fit the decision tree model to the training data using the fit() method.
7. Evaluate the decision tree model on the test data by making predictions using the predict() method, and
then calculating the accuracy score using accuracy_score from sklearn.metrics.
8. Create a random forest classifier object with n_estimators=100, max_depth=3, and random_state=42 using
RandomForestClassifier from sklearn.ensemble.
9. Fit the random forest model to the training data using the fit() method.
10. Evaluate the random forest model on the test data by making predictions using the predict() method, and
then calculating the accuracy score using accuracy_score from sklearn.metrics.
11. Print the accuracy scores of both the decision tree and random forest models.
PROGRAM:
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
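Only the imports survive in the listing above; a minimal completion sketch following the algorithm steps
(classifier parameters are taken from the algorithm; the 80/20 split size is an assumption):
# Load the iris dataset and build the features/target split
iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Decision tree classifier (max_depth=3, random_state=42)
dt = DecisionTreeClassifier(max_depth=3, random_state=42)
dt.fit(X_train, y_train)
dt_accuracy = accuracy_score(y_test, dt.predict(X_test))
# Random forest classifier (n_estimators=100, max_depth=3, random_state=42)
rf = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=42)
rf.fit(X_train, y_train)
rf_accuracy = accuracy_score(y_test, rf.predict(X_test))
print('Decision Tree Accuracy:', dt_accuracy)
print('Random Forest Accuracy:', rf_accuracy)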
OUTPUT:
('Decision Tree Accuracy:', 1.0)
('Random Forest Accuracy:', 1.0)
RESULT:
Thus the program to implement decision trees and random forests models using python is executed and
verified successfully.
AIM:
To implement SVM models using python.
ALGORITHMS:
PROGRAMS:
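The listing below begins after the grid search has already been fitted; a minimal sketch of the assumed setup
(the dataset choice and parameter grid are assumptions, sized to match the 48 candidates and 3 folds in the
output):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)
# 4 x 3 x 4 = 48 candidate combinations, evaluated with 3-fold CV
param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [0.1, 1, 10],
              'kernel': ['linear', 'rbf', 'poly', 'sigmoid']}
grid = GridSearchCV(SVC(), param_grid, cv=3, verbose=2)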
grid.fit(X_train, y_train)
# best hyperparameters
print("Best hyperparameters: ", grid.best_params_)
# create SVM model with best hyperparameters
svm = SVC(C=grid.best_params_['C'], gamma=grid.best_params_['gamma'],
kernel=grid.best_params_['kernel'])
# fit SVM model to training data
svm.fit(X_train, y_train)
# predict classes of test set
y_pred = svm.predict(X_test)
# calculate accuracy score and confusion matrix of SVM model
accuracy = accuracy_score(y_test, y_pred)
confusion = confusion_matrix(y_test, y_pred)
print("Accuracy: ", accuracy)
print("Confusion matrix:\n", confusion)
OUTPUT:
Fitting 3 folds for each of 48 candidates, totalling 144 fits
[CV] kernel=linear, C=0.1, gamma=0.1 .................................
[CV] kernel=linear, C=0.1, gamma=0.1, score=0.975609756098, total= 0.0s
[CV] kernel=linear, C=0.1, gamma=0.1 .................................
(remaining cross-validation log truncated)
RESULT:
Thus the program to implement SVM models using python is executed and verified successfully.
AIM:
To implement ensembling techniques using python.
ALGORITHMS:
1. Import the required libraries.
2. Generate a random dataset.
This generates a random dataset with 1000 samples, 10 features, and 2 classes, with a random seed of 42.
3. Split the data into training and testing sets.
This splits the data into training and testing sets, with a testing set size of 20% and a random seed of 42.
4. Create a Random Forest model.
This creates a Random Forest model with 100 trees and a random seed of 42, and fits it to the training data.
5. Create a Gradient Boosting model.
This creates a Gradient Boosting model with 100 trees and a random seed of 42, and fits it to the training data.
6. Predict the classes of the test set using both models.
This predicts the classes of the test set using both the Random Forest and Gradient Boosting models.
7. Combine the predictions using a majority vote.
This combines the predictions of both models using a majority vote. For each sample in the test set, it creates a
list of predictions from both models, and takes the prediction that occurs most frequently.
8. Calculate the accuracy score of the ensemble model.
This calculates the accuracy score of the ensemble model by comparing the combined predictions to the true
labels of the test set, and prints the result.
PROGRAMS:
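The program listing was not captured here; a minimal sketch following the algorithm above (sample counts,
tree counts, and seeds are taken from the steps; the use of make_classification and the 80/20 split are
assumptions):
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
# Generate a random dataset: 1000 samples, 10 features, 2 classes
X, y = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Random Forest and Gradient Boosting models with 100 trees each
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
gb = GradientBoostingClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
rf_pred = rf.predict(X_test)
gb_pred = gb.predict(X_test)
# Combine the predictions with a majority vote per sample
ensemble_pred = np.array([Counter([rf_pred[i], gb_pred[i]]).most_common(1)[0][0]
                          for i in range(len(y_test))])
print('Ensemble Accuracy:', accuracy_score(y_test, ensemble_pred))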
OUTPUT:
RESULT:
Thus the program to implement ensembling techniques using python is executed and verified successfully.
AIM:
To implement clustering algorithms using python.
ALGORITHMS:
1. Import the required libraries
2. Generate random data using the make_blobs function from scikit-learn, with 500 samples, 4 centers, a
standard deviation of 1.0, and a random seed of 0
3. Standardize the data using the StandardScaler function from scikit-learn
4. Define a range of clusters to evaluate, from 2 to 9
5. Initialize empty lists to store the evaluation metrics
6. Loop through the range of clusters, and for each number of clusters
7. Initialize the K-Means model and fit the standardized data
8. Get the cluster labels and centroids from the fitted model
9. Calculate the sum of squared distances (SSE) and silhouette score for the current number of clusters
10. Visualize the clustering results using a scatter plot of the data points with cluster assignments indicated by
color, and the cluster centroids indicated by red crosses
11. After all iterations of the loop, plot the SSE and silhouette scores for each number of clusters using
subplots
12. Display the final plots to the user.
PROGRAMS:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
# Generate random data (500 samples, 4 centers, std 1.0, seed 0, per the algorithm)
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.0, random_state=0)
# Standardize the data
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Range of clusters to evaluate and lists for the evaluation metrics
range_clusters = range(2, 10)
sse = []
silhouette = []
# Evaluate different number of clusters
for k in range_clusters:
    # Initialize K-Means model and fit data
    kmeans = KMeans(n_clusters=k, random_state=0)
    kmeans.fit(X_std)
    # Get cluster labels and centroids
    labels = kmeans.labels_
    centroids = kmeans.cluster_centers_
    # Calculate SSE and Silhouette score
    sse.append(kmeans.inertia_)
    silhouette.append(silhouette_score(X_std, labels))
    # Visualize clustering results: points colored by cluster, centroids as red crosses
    plt.scatter(X_std[:, 0], X_std[:, 1], c=labels)
    plt.scatter(centroids[:, 0], centroids[:, 1], c='red', marker='x')
    plt.title('K-Means clustering with k = {}'.format(k))
    plt.show()
# Plot the SSE and silhouette scores for each number of clusters using subplots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
ax1.plot(range_clusters, sse, 'bo-')
ax1.set_xlabel('Number of clusters')
ax1.set_ylabel('SSE')
ax1.set_title('Elbow Method')
ax2.plot(range_clusters, silhouette, 'bo-')
ax2.set_xlabel('Number of clusters')
ax2.set_ylabel('Silhouette score')
ax2.set_title('Silhouette Method')
plt.show()
OUTPUT:
RESULT:
Thus the program to implement clustering using python is executed and verified successfully.
AIM:
To write a python code to implement expectation maximization (EM) algorithm.
ALGORITHM:
1. Import the required packages.
2. Generate and plot the cluster model.
3. Make an initial guess of parameter θ using random function.
4. Given the current estimates for θ, in the expectation step EM computes the cluster posterior probabilities
P(Ci |xj ) via the Bayes theorem.
5. In the maximization step, using the weights P(Ci |xj ), EM re-estimates θ, that is, it re-estimates the
parameters for each cluster.
6. Repeat the step 4 & 5 until it converges.
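In the two-cluster case implemented below, the E step computes for each point xj the responsibility of the
second cluster via Bayes theorem (f denotes the multivariate normal density and pi the mixing weight):
    wj = P(C2 | xj) = pi · f(xj; m2, cov2) / ( pi · f(xj; m2, cov2) + (1 − pi) · f(xj; m1, cov1) )
The M step then re-estimates the means, covariances, and pi as responsibility-weighted averages of the data.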
PROGRAM:
import random
import numpy as np # import numpy
from numpy.linalg import inv # for matrix inverse
import matplotlib.pyplot as plt # import matplotlib.pyplot for plotting framework
from scipy.stats import multivariate_normal # for generating pdf
m1 = [1,1] # consider a random mean and covariance value
m2 = [7,7]
cov1 = [[3, 2], [2, 3]]
cov2 = [[2, -1], [-1, 2]]
x = np.random.multivariate_normal(m1, cov1, size=(200,)) # Generating 200 samples for each mean and covariance
y = np.random.multivariate_normal(m2, cov2, size=(200,))
d = np.concatenate((x, y), axis=0)
plt.figure(figsize=(10,10))
plt.scatter(d[:,0], d[:,1], marker='o')
plt.axis('equal')
plt.xlabel('X-Axis', fontsize=16)
plt.ylabel('Y-Axis', fontsize=16)
plt.title('Ground Truth', fontsize=22)
plt.grid()
plt.show()
m1 = random.choice(d)
m2 = random.choice(d)
cov1 = np.cov(np.transpose(d))
cov2 = np.cov(np.transpose(d))
pi = 0.5
x1 = np.linspace(-4,11,200)
x2 = np.linspace(-4,11,200)
X, Y = np.meshgrid(x1,x2)
Z1 = multivariate_normal(m1, cov1)
Z2 = multivariate_normal(m2, cov2)
pos = np.empty(X.shape + (2,)) # a new array of given shape and type, without initializing entries
pos[:, :, 0] = X; pos[:, :, 1] = Y
plt.figure(figsize=(10,10)) # creating the figure and assigning the size
plt.scatter(d[:,0], d[:,1], marker='o')
plt.contour(X, Y, Z1.pdf(pos), colors="r" ,alpha = 0.5)
plt.contour(X, Y, Z2.pdf(pos), colors="b" ,alpha = 0.5)
plt.axis('equal') # making both the axis equal
plt.xlabel('X-Axis', fontsize=16) # X-Axis
plt.ylabel('Y-Axis', fontsize=16) # Y-Axis
plt.title('Initial State', fontsize=22) # Title of the plot
plt.grid() # displaying gridlines
plt.show()
##Expectation step
def Estep(lis1):
    m1=lis1[0]
    m2=lis1[1]
    cov1=lis1[2]
    cov2=lis1[3]
    pi=lis1[4]
    # responsibility of cluster 2 for every point, via Bayes theorem
    pt1 = multivariate_normal.pdf(d, mean=m1, cov=cov1)
    pt2 = multivariate_normal.pdf(d, mean=m2, cov=cov2)
    w1 = pi * pt2
    w2 = (1 - pi) * pt1
    eval1 = w1 / (w1 + w2)
    lis2 = [m1, m2, cov1, cov2, pi, eval1]
    return lis2
##Maximization step
def Mstep(lis1):
    eval1 = lis1[5]
    num_mu1, din_mu1, num_mu2, din_mu2 = 0, 0, 0, 0
    for i in range(0, len(d)):
        num_mu1 += (1 - eval1[i]) * d[i]
        din_mu1 += (1 - eval1[i])
        num_mu2 += eval1[i] * d[i]
        din_mu2 += eval1[i]
    mu1 = num_mu1 / din_mu1
    mu2 = num_mu2 / din_mu2
    num_s1, din_s1, num_s2, din_s2 = 0, 0, 0, 0
    for i in range(0, len(d)):
        q1 = np.matrix(d[i] - mu1)
        num_s1 += (1 - eval1[i]) * np.dot(q1.T, q1)
        din_s1 += (1 - eval1[i])
        q2 = np.matrix(d[i] - mu2)
        num_s2 += eval1[i] * np.dot(q2.T, q2)
        din_s2 += eval1[i]
    s1 = num_s1 / din_s1
    s2 = num_s2 / din_s2
    pi = sum(eval1) / len(d)
    lis2 = [mu1, mu2, s1, s2, pi]
    return lis2
def plot(lis1):
    mu1=lis1[0]
    mu2=lis1[1]
    s1=lis1[2]
    s2=lis1[3]
    Z1 = multivariate_normal(mu1, s1)
    Z2 = multivariate_normal(mu2, s2)
    pos = np.empty(X.shape + (2,)) # a new array of given shape and type, without initializing entries
    pos[:, :, 0] = X; pos[:, :, 1] = Y
    plt.figure(figsize=(10,10)) # creating the figure and assigning the size
    plt.scatter(d[:,0], d[:,1], marker='o')
    plt.contour(X, Y, Z1.pdf(pos), colors="r", alpha=0.5)
    plt.contour(X, Y, Z2.pdf(pos), colors="b", alpha=0.5)
    plt.axis('equal') # making both the axis equal
    plt.xlabel('X-Axis', fontsize=16)
    plt.ylabel('Y-Axis', fontsize=16)
    plt.grid() # displaying gridlines
    plt.show()
iterations = 20
lis1=[m1,m2,cov1,cov2,pi]
for i in range(0,iterations):
lis2 = Mstep(Estep(lis1))
lis1=lis2
if(i==0 or i == 4 or i == 9 or i == 14 or i == 19):
plot(lis1)
OUTPUT:
(Contour plots of the two estimated Gaussian clusters after iterations 1, 5, 10, 15, and 20.)
RESULT:
Thus the python program for expectation maximization on clustering is written and executed
successfully.
AIM:
To write a python program to build simple neural networks.
ALGORITHM:
1. Import the libraries. For example: import numpy as np
2. Define/create input data. For example, use numpy to create a dataset and an array of data
values.
3. Add weights and bias (if applicable) to input features. These are learnable parameters,
meaning that they can be adjusted during training.
∙ Weights = input parameters that influence the output
∙ Bias = an extra threshold value added to the output
4. Train the network against known, good data in order to find the correct values for the
weights and biases.
5. Test the Network against a set of test data to see how it performs.
PROGRAM:
import numpy as np
class NeuralNetwork():
    def __init__(self):
        # seeding for random number generation
        np.random.seed(1)
        # converting weights to a 3 by 1 matrix with values from -1 to 1 and mean of 0
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1
    def sigmoid(self, x):
        # applying the sigmoid function
        return 1 / (1 + np.exp(-x))
    def sigmoid_derivative(self, x):
        # computing derivative to the Sigmoid function
        return x * (1 - x)
    def train(self, training_inputs, training_outputs, training_iterations):
        # training the model to make accurate predictions while adjusting weights continually
        for iteration in range(training_iterations):
            # siphon the training data via the neuron
            output = self.think(training_inputs)
            # computing error rate for back-propagation
            error = training_outputs - output
            # performing weight adjustments
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments
    def think(self, inputs):
        # passing the inputs via the neuron to get output
        # converting values to floats
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output
# initializing the neuron class
neural_network = NeuralNetwork()
print("Beginning Randomly Generated Weights: ")
print(neural_network.synaptic_weights)
# training data consisting of 4 examples -- 3 input values and 1 output
# (the input rows are assumed; the output column is from the original listing)
training_inputs = np.array([[0,0,1],
                            [1,1,1],
                            [1,0,1],
                            [0,1,1]])
training_outputs = np.array([[0,1,1,0]]).T
#training taking place
neural_network.train(training_inputs, training_outputs, 15000)
print("Ending Weights After Training: ")
print(neural_network.synaptic_weights)
user_input_one = str(input("User Input One: "))
user_input_two = str(input("User Input Two: "))
user_input_three = str(input("User Input Three: "))
print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
print("New Output data: ")
print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
print("Wow, we did it!")
OUTPUT:
RESULT:
Thus the python program to build simple neural networks is executed and verified successfully.
AIM:
To write a python program to build deep learning neural network models.
ALGORITHM:
1. Import necessary libraries and load the data
∙ Use the NumPy library to load your dataset and two classes from the Keras library to define
the model
∙ Use the Pima Indians onset of diabetes dataset
2. Create a Sequential model and add layers one at a time until satisfied with the architecture.
∙ The model expects rows of data with 8 variables (the input_shape=(8,) argument).
∙ The first hidden layer has 12 nodes and uses the relu activation function.
∙ The second hidden layer has 8 nodes and uses the relu activation function.
∙ The output layer has one node and uses the sigmoid activation function.
3. Compile the model by specifying the loss function to use to evaluate a set of weights, the optimizer used to
search through different weights for the network, and metrics for reporting the training.
4. Train or fit the model on loaded data by calling the fit() function on the model.
5. Evaluate the model on the training dataset using the evaluate() function and pass it the same input and
output used to train the model.
PROGRAM:
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
from google.colab import files
uploaded = files.upload()
dataset = loadtxt('diabetes.csv', delimiter=',', dtype=float, skiprows=1)
dataset[:, 0] = dataset[:, 0].astype(float)
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)
# summarize the first 5 cases
for i in range(5):
    # loop body reconstructed; the print format is an assumption
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))
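Step 5 of the algorithm calls for evaluating the model with evaluate(); a one-line sketch of that step:
# evaluate the keras model on the same data used to train it
_, accuracy = model.evaluate(X, y, verbose=0)
print('Accuracy: %.2f' % (accuracy * 100))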
OUTPUT:
RESULT:
Thus the python program for building deep learning neural network models is executed and verified
successfully.