Lab Programs CS3491 – AI & ML

Ex.No.1a:
IMPLEMENTATION OF BASIC SEARCH STRATEGIES – BFS

AIM:
To implement a Python program for Breadth-First Search (BFS)

Breadth-First Search

 Breadth-first search (BFS) is a traversal algorithm that starts from a selected
node (the source or starting node) and explores all of the neighbour nodes at the
present depth before moving on to the nodes at the next level of depth.
 It must be ensured that each vertex of the graph is visited exactly once, to avoid
getting into an infinite loop with cyclic graphs and to prevent visiting a given node
multiple times when it can be reached through more than one path.
 Breadth-first search can be implemented using a queue data structure, which
follows the first-in-first-out (FIFO) method – i.e., the node that was inserted first
will be visited first, and so on.
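Because the queue is FIFO, Python's collections.deque (whose popleft() is O(1), unlike
list.pop(0), which is O(n)) is a natural fit; a minimal alternative sketch, not part of the
prescribed program below:

from collections import deque

def bfs_deque(graph, start):
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()      # dequeue from the front: first in, first out
        print(node, end="")
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)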

ALGORITHM:

Step 1: We start the process by considering any random node as the starting vertex.
Step 2: We enqueue (insert) it into the queue and mark it as visited.
Step 3: We then mark and enqueue all of its unvisited neighbours, continuing to the next
depth level once the current one is exhausted.
Step 4: Vertices are dequeued (removed from the front of the queue) as they are processed.
Step 5: The process ends when the queue becomes empty.

PROGRAM:

graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []   # list of visited nodes
queue = []     # FIFO queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)              # dequeue from the front (FIFO)
        print(m, end="")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("The Breadth-First Search")
bfs(visited, graph, '5')
OUTPUT:

The Breadth-First Search


537248

RESULT:
Thus, the program for breadth-first search was implemented and executed
successfully.

Ex.No.1b:
IMPLEMENTATION OF BASIC SEARCH STRATEGIES – DFS

AIM:
To implement a Python program for Depth-First Search (DFS)

ALGORITHM:

Step 1: Pick any node. If it is unvisited, mark it as visited and recur on all its adjacent nodes.

Step 2: Repeat until all the nodes are visited, or the node to be searched is found.

Step 3: Visited is a set that is used to keep track of visited nodes.

Step 4: The dfs function is called and is passed the Visited set, the graph in the form of a
dictionary, and A, which is the starting node.

Step 5: dfs follows the algorithm described above:

1. It first checks whether the current node is unvisited – if yes, it is added to the visited set.
2. Then, for each neighbour of the current node, the dfs function is invoked again.
3. The base case is reached when all reachable nodes have been visited; the function then returns.
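For comparison, the recursion can be replaced with an explicit stack; a minimal iterative
sketch (not part of the prescribed program) – DFS is LIFO where BFS is FIFO:

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()                    # LIFO: most recently added node first
        if node not in visited:
            print(node)
            visited.add(node)
            # push neighbours in reverse so they are popped in their original order
            for neighbour in reversed(graph[node]):
                stack.append(neighbour)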

PROGRAM:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()

def dfs(visited, graph, node):   # function for dfs
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("The Depth-First Search")
dfs(visited, graph, '5')

OUTPUT:

The Depth-First Search


5
3
2
4
8
7

RESULT:
Thus, the program for depth-first search was implemented and executed successfully.

Ex. No:2a
IMPLEMENTATION OF A* SEARCH ALGORITHM

AIM:
To implement the A* search algorithm for finding the shortest path in a graph.
A* SEARCH:
 A* search finds the shortest path through a search space to the goal state using a
heuristic function.
 It evaluates each node n with f(n) = g(n) + h(n), where g(n) is the cost of the path
from the start to n and h(n) is the heuristic estimate of the cost from n to the goal;
the node with the lowest f(n) is expanded next.
 A* therefore finds the lowest-cost path between the start and goal states, where
changing from one state to another incurs some cost.
STEPS FOR SOLVING A* SEARCH
 Given the graph, find the most cost-effective path from A to G; that is, A is the source
node and G is the goal node.

 Now from A, we can go to point B or E, so we compute f(x) for each of them,

A → B = g(B) + h(B) = 2 + 6 = 8

A → E = g(E) + h(E) = 3 + 7 = 10

 Since the cost for A → B is less, we move forward with this path and compute the
f(x) for the children nodes of B.
 Now from B, we can go to point C or G, so we compute f(x) for each of them,

A → B → C = (2 + 1) + 99 = 102

A → B → G = (2 + 9) + 0 = 11

 Here the path A → B → G has the lower cost of the two, but it is still more than the
cost of A → E (10), so we explore the A → E path further.
 Now from E, we can go to point D, so we compute f(x),

A → E → D = (3 + 6) + 1 = 10

 Comparing the cost of A → E → D with all the paths seen so far, this cost is the least
of all, so we move forward with this path.
 Now compute the f(x) for the children of D

A → E → D → G = (3 + 6 + 1) + 0 = 10

 Now comparing all the paths that lead us to the goal, we conclude that A → E → D
→ G is the most cost-effective path to get from A to G.
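These f-values can be re-checked with a few lines of plain arithmetic (the g and h values
are copied from the example above; this snippet is illustrative only):

# (g, h) pairs for each candidate path from the worked example
candidates = {
    'A -> B':           (2,         6),
    'A -> E':           (3,         7),
    'A -> B -> C':      (2 + 1,    99),
    'A -> B -> G':      (2 + 9,     0),
    'A -> E -> D':      (3 + 6,     1),
    'A -> E -> D -> G': (3 + 6 + 1, 0),
}
for path, (g_val, h_val) in candidates.items():
    print(path, ': f =', g_val + h_val)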

ALGORITHM:
Step 1: Place the starting node into OPEN and find its f(n) value.

Step 2: Remove from OPEN the node having the smallest f(n) value. If it is a goal node, then
stop and return success.

Step 3: Otherwise, find all the successors of the removed node.

Step 4: Find the f(n) value of all successors; place them into OPEN and place the removed
node into CLOSED.

Step 5: Go to Step 2.

Step 6: Exit.
PROGRAM:
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}          # store distance from starting node
    parents = {}    # parents contains an adjacency map of all nodes
    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, i.e. it has no parent nodes,
    # so start_node is set to its own parent node
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # node with lowest f() is found
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the open and closed sets are added to open,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # for each node m, compare its distance from start, i.e. g(m),
                # to the distance from start through node n
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        # if m is in the closed set, remove it and add it to open
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        # if the current node is the stop_node,
        # then we begin reconstructing the path from it to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # remove n from the open list, and add it to the closed list,
        # because all of its neighbours were inspected
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

# define function to return the neighbours of the passed node
# together with their distances
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# for simplicity we'll consider the heuristic distances given,
# and this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]

# Describe your graph here
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('A', 6), ('C', 3), ('D', 2)],
    'C': [('B', 3), ('D', 1), ('E', 5)],
    'D': [('B', 2), ('C', 1), ('E', 8)],
    'E': [('C', 5), ('D', 8), ('I', 5), ('J', 5)],
    'F': [('A', 3), ('G', 1), ('H', 7)],
    'G': [('F', 1), ('I', 3)],
    'H': [('F', 7), ('I', 2)],
    'I': [('E', 5), ('G', 3), ('H', 2), ('J', 3)],
}

aStarAlgo('A', 'J')
OUTPUT:
Path found: ['A', 'F', 'G', 'I', 'J']

RESULT:
Thus, the program for A* search algorithm was implemented and executed
successfully.

Ex.No:2b
IMPLEMENTATION OF MEMORY BOUNDED A* ALGORITHM

AIM:
To implement memory bounded A* search algorithm for path finding problem.
Memory bounded A* Search:
 Memory Bounded A* is a shortest-path algorithm based on the A* algorithm.
 Its main advantage is that it uses bounded memory, while the A* algorithm might
need exponential memory. All of its other characteristics are inherited from A*.
 This search is an optimal and complete algorithm for finding a least-cost path. Unlike
A*, it will not run out of memory, unless the length of the shortest path exceeds the
amount of available memory.
ALGORITHM:
Step 1: Works like A* until memory is full
Step 2: When memory is full, drop the leaf node with the highest f-value (the worst
leaf), keeping track of that worst value in the parent
Step 3: Complete if any solution is reachable
Step 4: Optimal if any optimal solution is reachable
Step 5: Otherwise, returns the best reachable solution
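A minimal sketch of the memory-bounding step itself (Step 2), with hypothetical names,
assuming the frontier is kept as a list of (f_value, node, parent) entries with a fixed
capacity; the program below approximates the overall idea differently, with a plain list:

def add_to_bounded_frontier(frontier, entry, capacity, forgotten_f):
    # frontier: list of (f_value, node, parent); forgotten_f: parent -> best dropped f
    frontier.append(entry)
    if len(frontier) > capacity:
        # drop the leaf with the highest f-value (the worst leaf) ...
        worst = max(frontier, key=lambda e: e[0])
        frontier.remove(worst)
        f_value, node, parent = worst
        # ... keeping track of that worst value in the parent, so the dropped
        # subtree can be regenerated later if everything else looks worse
        forgotten_f[parent] = min(forgotten_f.get(parent, float('inf')), f_value)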

PROGRAM:
# Graph as a dictionary in the format ['Node', Weight]
nodes = {
    'A': [['B', 6], ['F', 3]],
    'B': [['A', 6], ['C', 3], ['D', 2]],
    'C': [['B', 3], ['D', 1], ['E', 5]],
    'D': [['B', 2], ['C', 1], ['E', 8]],
    'E': [['C', 5], ['D', 8], ['I', 5], ['J', 5]],
    'F': [['A', 3], ['G', 1], ['H', 7]],
    'G': [['F', 1], ['I', 3]],
    'H': [['F', 7], ['I', 2]],
    'I': [['G', 3], ['H', 2], ['E', 5], ['J', 3]],
    'J': [['E', 5], ['I', 3]]
}

# Heuristics for each node
h = {
    'A': 10,
    'B': 8,
    'C': 5,
    'D': 7,
    'E': 3,
    'F': 6,
    'G': 5,
    'H': 3,
    'I': 1,
    'J': 0
}

def astar(start, goal):
    opened = []        # To store paths that are not complete, i.e. goal isn't reached
    closed = []        # To store paths that are complete, i.e. goal is reached
    visited = set()    # To store nodes that are already visited
    opened.append([start, h[start]])   # Initialize start node
    while opened:                      # To check paths that are not complete
        min = 1000     # Can be any number much greater than the weights in the graph
        val = ''       # To store the current traversing path
        for i in opened:
            if i[1] < min:             # To find the path with the lowest weight/length
                min = i[1]
                val = i[0]
        closed.append(val)
        visited.add(val)
        if goal not in closed:
            for i in nodes[val]:
                if i[0] not in visited:
                    # Adds previous weight and the current heuristic and weight of the node
                    opened.append([i[0], (min - h[val] + i[1] + h[i[0]])])
        else:
            break
        opened.remove([val, min])

    closed = closed[::-1]
    min = 1000
    for i in opened:   # To get the minimum weight of the paths
        if i[1] < min:
            min = i[1]

    # Prune nodes in closed that are not actually adjacent along the path
    lens = len(closed)
    i = 0
    while i < lens - 1:
        nei = []
        for j in nodes[closed[i]]:
            nei.append(j[0])
        if closed[i+1] not in nei:
            del closed[i+1]
            lens -= 1
        else:
            i += 1
    closed = closed[::-1]

    return closed, min   # Returns shortest path and the weight corresponding to it

# Start = 'A', Goal = 'J'
print(astar('A', 'J'))

OUTPUT:

(['A', 'F', 'G', 'I', 'J'], 10)
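Note that the returned weight of 10 agrees with the edge costs along the path:
A→F (3) + F→G (1) + G→I (3) + I→J (3) = 10.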

RESULT:
Thus, the program for memory bounded A* search algorithm was implemented and
executed successfully.

Ex.No:3
IMPLEMENTATION OF NAIVE BAYES MODEL

AIM:
To implement a program for the Naive Bayes model
NAIVE BAYES CLASSIFIER ALGORITHM
 Naive Bayes is one of the simplest and most powerful classification algorithms. It is
based on Bayes' theorem, with an assumption of independence among the predictors.
 The Naive Bayes classifier assumes that the presence of a feature in a class is
unrelated to any other feature.
 Naive Bayes is a classification algorithm for binary and multi-class classification
problems.
Bayes Theorem
 Based on prior knowledge of conditions that may be related to an event, Bayes'
theorem describes the probability of that event.
 Conditional probability can be found this way: assume we have a hypothesis (H)
and evidence (E).
 According to Bayes' theorem, the relationship between the probability of the
hypothesis before getting the evidence, P(H), and the probability of the
hypothesis after getting the evidence, P(H|E), is:

P(H|E) = P(E|H) * P(H) / P(E)
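As a quick numeric illustration (the numbers are hypothetical, not from the lab data):
suppose 60% of messages are positive, and the word "love" appears in 30% of positive
messages and in 5% of negative ones; then the posterior works out as:

p_pos = 0.6                    # P(H): prior probability that a message is positive
p_love_given_pos = 0.30        # P(E|H)
p_love_given_neg = 0.05        # P(E|not H)
# P(E) by the law of total probability
p_love = p_love_given_pos * p_pos + p_love_given_neg * (1 - p_pos)
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
print(p_love_given_pos * p_pos / p_love)   # 0.9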

ALGORITHM:
Step 1: Handling Data
Data is loaded from the .csv file and split into training and test sets.
Step 2: Summarizing the Data
Summarize the properties in the training data set so that probabilities can be calculated
and predictions made.
Step 3: Making a Prediction
A particular prediction is made using a summary of the data set.
Step 4: Making all the Predictions
Generate predictions given a test data set and a summarized data set.
Step 5: Evaluating Accuracy
The accuracy of the prediction model on the test data set is reported as the percentage of
correct predictions out of all the predictions made.
Step 6: Tying it all Together
Finally, we tie all the steps together and form our own Naive Bayes classifier model.
PROGRAM:
import pandas as pd
msg=pd.read_csv('C:/python/naivetext.csv',names=['message','label'])
print('The dimensions of the dataset',msg.shape)
msg['labelnum']=msg.label.map({'pos':1,'neg':0})
X=msg.message
y=msg.labelnum
print(X)
print(y)
OUTPUT:
The dimensions of the dataset (18, 2)
0 I love this sandwich
1 This is an amazing place
2 I feel very good about these beers
3 This is my best work
4 What an awesome view
5 I do not like this restaurant
6 I am tired of this stuff
7 I can't deal with this
8 He is my sworn enemy
9 My boss is horrible
10 This is an awesome place
11 I do not like the taste of this juice
12 I love to dance
13 I am sick and tired of this place
14 What a great holiday
15 That is a bad locality to stay
16 We will have good fun tomorrow
17 I went to my enemy's house today
Name: message, dtype: object
0 1
1 1
2 1
3 1
4 1
5 0
6 0
7 0
8 0
9 0
10 1
11 0
12 1
13 0
14 1

15 0
16 1
17 0
Name: labelnum, dtype: int64
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest=train_test_split(X,y)
print ('\n The total number of Training Data :',ytrain.shape)
print ('\n The total number of Test Data :',ytest.shape)

OUTPUT:
The total number of Training Data : (13,)

The total number of Test Data : (5,)

from sklearn.feature_extraction.text import CountVectorizer


count_vect = CountVectorizer()
xtrain_dtm = count_vect.fit_transform(xtrain)
xtest_dtm=count_vect.transform(xtest)
print('\n The words or Tokens in the text documents \n')
print(count_vect.get_feature_names())
df=pd.DataFrame(xtrain_dtm.toarray(),columns=count_vect.get_feature_names())
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(xtrain_dtm,ytrain)
predicted = clf.predict(xtest_dtm)
from sklearn import metrics
print('\n Accuracy of the classifier is', metrics.accuracy_score(ytest, predicted))
OUTPUT:
Accuracy of the classifier is 0.8
print('\n Confusion matrix')
print(metrics.confusion_matrix(ytest, predicted))
print('\n The value of Precision', metrics.precision_score(ytest, predicted))
print('\n The value of Recall', metrics.recall_score(ytest, predicted))
OUTPUT:
Confusion matrix
[[3 0]
[1 1]]

The value of Precision 1.0

The value of Recall 0.5

RESULT:
Thus, the program for Naive Bayes model was implemented and executed
successfully.

Ex.No:4 IMPLEMENTATION OF BAYESIAN NETWORKS

AIM:

To write a program to construct a Bayesian network.

ALGORITHM:

Step 1: Read the Cleveland Heart Disease data.

Step 2: Display the data.

Step 3: Display the attribute names and datatypes.

Step 4: Create the model – a Bayesian Network.

Step 5: Learn the CPDs using Maximum Likelihood Estimators.

Step 6: Compute the probability of HeartDisease given restecg.

Step 7: Compute the probability of HeartDisease given cp.

Data set: heart.csv [use Google Colab and upload heart.csv to the drive;
use "pip install pgmpy" to install pgmpy]
PROGRAM:

from google.colab import drive


drive.mount('/content/drive')

import numpy as np
import pandas as pd
import csv
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.models import BayesianModel
from pgmpy.inference import VariableElimination
heartDisease = pd.read_csv('/content/drive/MyDrive/heart.csv')
heartDisease = heartDisease.replace('?',np.nan)
print('Sample instances from the dataset are given below')
print(heartDisease.head())

Sample instances from the dataset are given below
   age  gender  cp  trestbps  chol  fbs  restecg  thalach  exang  oldpeak
0   63       1   1       145   233    1        2      150      0      2.3
1   67       1   4       160   286    0        2      108      1      1.5
2   67       1   4       120   229    0        2      129      1      2.6
3   37       1   3       130   250    0        0      187      0      3.5
4   41       0   2       130   204    0        2      172      0      1.4

   slope  ca  thal  heartdisease
0      3   0     6             0
1      2   3     3             2
2      2   2     7             1
3      3   0     3             0
4      1   0     3             0

print('\n Attributes and datatypes')


print(heartDisease.dtypes)
Attributes and datatypes
age int64
gender int64
cp int64
trestbps int64
chol int64
fbs int64
restecg int64
thalach int64
exang int64
oldpeak float64
slope int64
ca object
thal object
heartdisease int64
dtype: object

model = BayesianModel([('age', 'heartdisease'), ('gender', 'heartdisease'),
                       ('exang', 'heartdisease'), ('cp', 'heartdisease'),
                       ('heartdisease', 'restecg'), ('heartdisease', 'chol')])

print('\n Learning CPD using Maximum likelihood estimators')

model.fit(heartDisease,estimator=MaximumLikelihoodEstimator)

print('\n Inferencing with Bayesian Network:')

HeartDiseasetest_infer = VariableElimination(model)

print('\n 1. Probability of HeartDisease given evidence = restecg : 2')

q1=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'restecg':2})

print(q1)

print('\n 2.Probability of HeartDisease given evidence= cp:2 ')

q2=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'cp':2})

print(q2)

OUTPUT:

Learning CPD using Maximum likelihood estimators

Inferencing with Bayesian Network:

1. Probability of HeartDisease given evidence = restecg : 2

+-----------------+---------------------+
| heartdisease    |   phi(heartdisease) |
+=================+=====================+
| heartdisease(0) |              0.2697 |
+-----------------+---------------------+
| heartdisease(1) |              0.2195 |
+-----------------+---------------------+
| heartdisease(2) |              0.1522 |
+-----------------+---------------------+
| heartdisease(3) |              0.1763 |
+-----------------+---------------------+
| heartdisease(4) |              0.1822 |
+-----------------+---------------------+

2. Probability of HeartDisease given evidence = cp : 2

+-----------------+---------------------+
| heartdisease    |   phi(heartdisease) |
+=================+=====================+
| heartdisease(0) |              0.3610 |
+-----------------+---------------------+
| heartdisease(1) |              0.2159 |
+-----------------+---------------------+
| heartdisease(2) |              0.1373 |
+-----------------+---------------------+
| heartdisease(3) |              0.1537 |
+-----------------+---------------------+
| heartdisease(4) |              0.1321 |
+-----------------+---------------------+

RESULT:

Thus, the program to construct a Bayesian network was implemented and executed
successfully.

Ex.No:5a

IMPLEMENTATION OF REGRESSION MODELS (LINEAR REGRESSION)

AIM:

To write a program to implement linear regression for modelling the relationship
between a dependent variable and a given set of independent variables.

DEFINITION:

Let us consider a dataset where we have a value of the response y for every feature x.

Now, the task is to find a line that best fits the scatter plot of these points, so that we can
predict the response for any new feature value (i.e. a value of x not present in the dataset).
This line is called a regression line.
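For reference, the closed-form least-squares estimates that the program below computes
(SS_xy and SS_xx match the variable names in the code) are:

b_1 = \frac{SS_{xy}}{SS_{xx}} = \frac{\sum_i x_i y_i - n\,\bar{x}\bar{y}}{\sum_i x_i^2 - n\,\bar{x}^2},
\qquad b_0 = \bar{y} - b_1\,\bar{x}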

ALGORITHM:

Step 1. Initialize the parameters.

Step 2. Predict the value of the dependent variable for a given independent variable.
Step 3. Calculate the error in prediction for all data points.
Step 4. Calculate the partial derivatives w.r.t. a0 and a1.
Step 5. Calculate the cost for each number and add them.
Step 6. Update the values of a0 and a1.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)

    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)

    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x

    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x

    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as scatter plot
    plt.scatter(x, y, color="m", marker="o", s=30)

    # predicted response vector
    y_pred = b[0] + b[1]*x

    # plotting the regression line
    plt.plot(x, y_pred, color="g")

    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')

    # function to show plot
    plt.show()

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {} \
    \nb_1 = {}".format(b[0], b[1]))
    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()

OUTPUT:

Estimated coefficients:
b_0 = 1.2363636363636363
b_1 = 1.1696969696969697

(A scatter plot of the data points with the fitted regression line is also displayed.)
RESULT:
Thus, the program to implement linear regression model was implemented and
executed successfully.

Ex.No:5b

IMPLEMENTATION OF REGRESSION MODELS (LOGISTIC REGRESSION)

AIM:

To implement Logistic Regression using Python.


ALGORITHM:
1. Import all the library functions.
2. Import make_classification from sklearn.datasets.
3. Generate a dataset for logistic regression.
4. Import pyplot from matplotlib.
5. Classify the dataset based on the given features.
PROGRAM:
from sklearn.datasets import make_classification
from matplotlib import pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import pandas as pd
# Generate a dataset for logistic regression
x, y = make_classification(
n_samples=100,
n_features=1,
n_classes=2,
n_clusters_per_class=1,
flip_y=0.03,
n_informative=1,
n_redundant=0,
n_repeated=0
)
print(x,y)

OUTPUT:
[[ 0.68072366]
[-0.806672 ]
[-0.25986635]
[-0.96951576]
[-1.55870949]
[-0.71107565]
[ 0.05858082]
[-2.06472972]
[-0.61592043]
[ 1.25423915]
[ 0.81852686]
[-1.65141186]
[-0.5894455 ]
[ 1.02745431]
[-0.32508896]
[-0.53886171]
[ 1.14821234]
[ 0.87538478]
[ 0.95887802]
[ 1.30514551]
[-1.02478688]
[ 0.16563384]
[ 0.77626036]
[-1.00622251]
[-0.55976575]
[ 1.33550038]
[ 1.60327317]
[ 1.82115858]
[-0.68603388]
[ 1.8733355 ]
[-0.52494619]
[-2.03314002]
[ 0.47001797]
[ 1.55400671]
[-1.34062378]
[-0.38624537]
[-1.06339387]
[-1.41465045]
[ 0.58850401]
[ 0.80925135]
[-0.82066568]
[-0.01262654]
[-0.75104194]
[-1.09609801]
[-0.30652093]
[-0.6945338 ]
[-0.90156651]
[-0.96587756]
[ 0.53851931]
[ 0.16533166]
[-1.04609567]
[-1.15065139]
[-0.76739642]
[ 0.83776929]
[ 2.20562241]
[-0.80368921]

[-0.86160904]
[ 0.86032131]
[-0.65752318]
[ 1.81228279]
[-0.81507664]
[ 0.93532773]
[ 1.76874632]
[ 0.32893072]
[ 1.02960085]
[-1.84150254]
[ 0.16156709]
[-1.05944665]
[ 0.28788136]
[-1.05549933]
[ 1.37528673]
[ 1.66369265]
[ 1.71761177]
[ 1.96597594]
[-0.65315492]
[-0.29598263]
[-1.15345006]
[-1.03851861]
[ 1.69109822]
[ 1.92402678]
[-0.89593983]
[-0.58208549]
[-1.18750595]
[-1.06231671]
[-0.79230653]
[ 1.42147278]
[ 1.2887393 ]
[ 1.93706073]
[-1.03110736]
[-1.20543711]
[ 0.79446549]
[ 1.29599432]
[ 0.49396915]
[ 0.63241066]
[ 0.72416825]
[-1.76099355]
[-0.61639759]
[-0.43854548]
[ 1.43886371]
[-0.77167438]] [1 0 1 0 0 0 1 0 1 1 1 0 0 1 1 0 1 1 1 1 0 0 1 0 0 1 1
1 0 1 1 0 1 1 0 0 0
0 1 1 0 1 1 0 1 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 0 0 0 1 0 1 1
1 1
0 1 0 0 1 1 0 0 0 0 0 1 1 1 0 0 1 1 1 1 1 0 0 0 1 0]

# Create a scatter plot


plt.scatter(x, y, c=y, cmap='rainbow')
plt.title('Scatter Plot of Logistic Regression')
plt.show()

25 CS3491 AI & ML
# Split the dataset into training and test dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)
x_train.shape

OUTPUT:
(75, 1)

log_reg=LogisticRegression()
log_reg.fit(x_train, y_train)
y_pred=log_reg.predict(x_test)
confusion_matrix(y_test, y_pred)

OUTPUT:

array([[12, 0],
[ 2, 11]], dtype=int64)
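From this confusion matrix, accuracy = (12 + 11) / 25 = 0.92: 12 true negatives and 11 true
positives, with 2 false negatives and no false positives.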

RESULT:
Thus, the program to implement logistic regression model was implemented and executed
successfully.

Ex.No:6a IMPLEMENTATION OF DECISION TREE

AIM:

To write a program to implement a decision tree algorithm.

ALGORITHM:

 Import DecisionTreeClassifier from sklearn.tree
 Import train_test_split from sklearn.model_selection
 Import accuracy_score from sklearn.metrics
 Import classification_report from sklearn.metrics
 Read the dataset values from the provided URL
 Print the dataset shape
 Print the dataset observations
 Separate the target variable
 Split the dataset into train and test sets
 Perform training with the Gini index (see the impurity formulas after this list)
 Create the classifier object
 Perform training with entropy
 Create a function to make predictions on the given dataset
 Create a function to calculate accuracy on the given dataset
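For reference, the two impurity measures behind the criterion parameter, where p_i is the
fraction of samples of class i at a node:

\text{Gini} = 1 - \sum_i p_i^2, \qquad \text{Entropy} = -\sum_i p_i \log_2 p_i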

PROGRAM:
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report

def importdata():
    balance_data = pd.read_csv(
        'https://archive.ics.uci.edu/ml/machine-learning-' +
        'databases/balance-scale/balance-scale.data',
        sep=',', header=None)
    # Printing the dataset shape
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)

    # Printing the dataset observations
    print("Dataset: ", balance_data.head())
    return balance_data

def splitdataset(balance_data):
    # Separating the target variable
    X = balance_data.values[:, 1:5]
    Y = balance_data.values[:, 0]
    # Splitting the dataset into train and test
    X_train, X_test, y_train, y_test = train_test_split(
        X, Y, test_size=0.3, random_state=100)
    return X, Y, X_train, X_test, y_train, y_test

# Function to perform training with the Gini index.
def train_using_gini(X_train, X_test, y_train):
    # Creating the classifier object
    clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100,
                                      max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini

# Function to perform training with entropy.
def train_using_entropy(X_train, X_test, y_train):
    # Decision tree with entropy
    clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100,
                                         max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy

# Function to make predictions
def prediction(X_test, clf_object):
    # Prediction on test data with the trained classifier
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred

# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
    print("Confusion Matrix: ", confusion_matrix(y_test, y_pred))
    print("Accuracy : ", accuracy_score(y_test, y_pred)*100)
    print("Report : ", classification_report(y_test, y_pred))

# Driver code
def main():
    # Building Phase
    data = importdata()
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)
    clf_gini = train_using_gini(X_train, X_test, y_train)
    clf_entropy = train_using_entropy(X_train, X_test, y_train)
    # Operational Phase
    print("Results Using Gini Index:")
    # Prediction using gini
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)
    print("Results Using Entropy:")
    # Prediction using entropy
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)

# Calling main function
if __name__ == "__main__":
    main()
OUTPUT:
Dataset Length: 625
Dataset Shape: (625, 5)
Dataset: 0 1 2 3 4
0 B 1 1 1 1
1 R 1 1 1 2
2 R 1 1 1 3
3 R 1 1 1 4
4 R 1 1 1 5
Results Using Gini Index:
Predicted values:
['R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'R' 'L'
 'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L'
 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'L' 'R'
 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L'
 'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R'
 'L' 'R' 'R' 'L' 'L' 'R' 'R' 'R']
Confusion Matrix: [[ 0 6 7]
[ 0 67 18]
[ 0 19 71]]
Accuracy : 73.40425531914893
Report :               precision    recall  f1-score   support

           B                0.00      0.00      0.00        13
           L                0.73      0.79      0.76        85
           R                0.74      0.79      0.76        90

    accuracy                                    0.73       188
   macro avg               0.49      0.53      0.51       188
weighted avg               0.68      0.73      0.71       188

Results Using Entropy:
Predicted values:
['R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L'
 'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L'
 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L'
 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L'
 'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'L' 'R'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'R']
Confusion Matrix: [[ 0 6 7]
[ 0 63 22]
[ 0 20 70]]
Accuracy : 70.74468085106383
Report :               precision    recall  f1-score   support

           B                0.00      0.00      0.00        13
           L                0.71      0.74      0.72        85
           R                0.71      0.78      0.74        90

    accuracy                                    0.71       188
   macro avg               0.47      0.51      0.49       188
weighted avg               0.66      0.71      0.68       188

RESULT:
Thus, the program to construct decision tree was implemented and executed successfully.

Ex.No:6b IMPLEMENTATION OF RANDOM FOREST

AIM:

To write a program to implement random forest algorithm in Python.

ALGORITHM:

1. Import load_digits from sklearn.datasets.
2. Import RandomForestClassifier from sklearn.ensemble.
3. Train the given dataset using the random forest classifier (the forest predicts by
majority vote across its trees, as sketched below).
4. Obtain the score from the trained dataset.
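A minimal sketch (with hypothetical values, not from the lab program) of the majority vote
a random forest takes across its individual trees:

from collections import Counter

tree_predictions = [3, 3, 8, 3, 1]   # hypothetical votes from five trees for one digit
# the forest predicts the class most trees agree on
print(Counter(tree_predictions).most_common(1)[0][0])   # 3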
PROGRAM:
import pandas as pd
from sklearn.datasets import load_digits
digits = load_digits()
dir(digits)

%matplotlib inline
import matplotlib.pyplot as plt
plt.gray()
for i in range(4):
    plt.matshow(digits.images[i])

df = pd.DataFrame(digits.data)
df.head()

df['target'] = digits.target
df[0:12]

X = df.drop('target',axis='columns')
y = df.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=20)
model.fit(X_train, y_train)
model.score(X_test, y_test)

y_predicted = model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_predicted)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sn
plt.figure(figsize=(10,7))
sn.heatmap(cm, annot=True)
plt.xlabel('Predicted')
plt.ylabel('Truth')

RESULT:

Thus, the program to construct random forest was implemented and executed successfully.

Ex.No:7 IMPLEMENTATION OF SVM MODEL

AIM:

To write a program in Python to implement Support Vector Machine model.

ALGORITHM:

1. From sklearn.datasets import load_iris.
2. Display the feature names from load_iris.
3. Import pyplot from matplotlib.
4. Plot the sepal length against the sepal width from the given dataset.
5. Plot the petal length against the petal width from the trained dataset (the decision
rule the SVM fits is sketched after this list).
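For reference, the linear-kernel SVM used below (kernel='linear') learns a weight vector w
and a bias b that maximize the margin 2 / \lVert w \rVert between the classes, and then
classifies a new sample x as:

f(x) = \operatorname{sign}(w \cdot x + b)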

PROGRAM:

import pandas as pd

from sklearn.datasets import load_iris

iris = load_iris()

dir(iris)

iris.feature_names

df=pd.DataFrame(iris.data, columns=iris.feature_names)

df.head()

df['target']=iris.target

df.head()

iris.target_names

df[df.target==2].head

df['flower_name']=df.target.apply(lambda x:iris.target_names[x])

df.head()

from matplotlib import pyplot as plt

%matplotlib inline

df0=df[df.target==0]

df1=df[df.target==1]

df2=df[df.target==2]

df2.head()

plt.xlabel('sepal length (cm)')

plt.ylabel('sepal width (cm)')

plt.scatter(df0['sepal length (cm)'],df0['sepal width (cm)'],color='green',marker='+')

plt.scatter(df1['sepal length (cm)'],df1['sepal width (cm)'],color='blue',marker='*')

plt.xlabel('petal length (cm)')

plt.ylabel('petal width (cm)')

plt.scatter(df0['petal length (cm)'],df0['petal width (cm)'],color='green',marker='+')

plt.scatter(df1['petal length (cm)'],df1['petal width (cm)'],color='blue',marker='*')

from sklearn.model_selection import train_test_split

x = df.drop(['target','flower_name'], axis='columns')

y=df.target

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)

len(x_train)

len(x_test)

from sklearn.svm import SVC

model = SVC(kernel='linear')

model.fit(x_train, y_train)

model.score(x_test, y_test)

RESULT:

Thus, the program to construct SVM model was implemented and executed successfully.

Ex.No:8 IMPLEMENTATION OF ENSEMBLING TECHNIQUES(BAGGING)

AIM:

To write a program to implement the ensembling technique, bagging in Python.

ALGORITHM:

 Import the pandas library
 Read the dataset from the path "C:/python/pima.csv"
 Display the first five rows of the dataframe using the head() function
 Return the number of missing values in the dataset using the isnull().sum() function
 Preprocess the given dataset using the StandardScaler function
 Find the cross-validation score using the decision tree classifier

PROGRAM:

import pandas as pd

df = pd.read_csv("C:/python/pima.csv")

df.head()

df.isnull().sum()

df.describe()

df.diabetes.value_counts()

X = df.drop("diabetes",axis="columns")

y = df.diabetes

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()

X_scaled = scaler.fit_transform(X)

X_scaled[:3]

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, stratify=y, random_state=10)

X_train.shape

X_test.shape

y_train.value_counts()

201/375   # fraction of positive samples in the training split

y_test.value_counts()

67/125    # fraction of positive samples in the test split (same ratio, thanks to stratify=y)

from sklearn.model_selection import cross_val_score

from sklearn.tree import DecisionTreeClassifier

scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)

scores

scores.mean()

from sklearn.ensemble import BaggingClassifier

bag_model = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(),
    n_estimators=100,
    max_samples=0.8,
    oob_score=True,
    random_state=0
)

bag_model.fit(X_train, y_train)

bag_model.oob_score_   # accuracy on out-of-bag samples (those left out of each bootstrap)

bag_model.score(X_test, y_test)

bag_model = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(),
    n_estimators=100,
    max_samples=0.8,
    oob_score=True,
    random_state=0
)

scores = cross_val_score(bag_model, X, y, cv=5)

scores.mean()

RESULT:

Thus, the program to implement ensemble technique such as bagging was implemented and
executed successfully.

Ex.No:9 IMPLEMENTATION OF CLUSTERING ALGORITHMS (K - MEANS)

AIM:

To write a program to implement K – means clustering algorithm in Python.

ALGORITHM:

Step 1: Accept as input the number of clusters to group the data into and the dataset to
cluster.

Step 2: Initialize the first K clusters – take the first k instances, or take a random sample
of k elements.

Step 3: Calculate the arithmetic mean of each cluster formed in the dataset.

Step 4: K-means assigns each record in the dataset to only one of the initial clusters. Each
record is assigned to the nearest cluster using a measure of distance (e.g. Euclidean distance).

Step 5: K-means re-assigns each record in the dataset to the most similar cluster and re-
calculates the arithmetic mean of all the clusters in the dataset (the objective being
minimized is given below).
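The elbow plot at the end of the program uses the sum of squared errors (sklearn's
inertia_), the quantity k-means minimizes, where \mu_k is the centroid of cluster C_k:

SSE = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2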

PROGRAM:

from sklearn.cluster import KMeans

import pandas as pd

from sklearn.preprocessing import MinMaxScaler

from matplotlib import pyplot as plt

from sklearn.datasets import load_iris

%matplotlib inline

iris = load_iris()

df = pd.DataFrame(iris.data,columns=iris.feature_names)

df.head()

df['flower'] = iris.target

df.head()

df.drop(['sepal length (cm)', 'sepal width (cm)', 'flower'],axis='columns',inplace=True)

df.head(3)

km = KMeans(n_clusters=3)

yp = km.fit_predict(df)

df['cluster'] = yp

df.head(2)

df.cluster.unique()

df1 = df[df.cluster==0]

df2 = df[df.cluster==1]

df3 = df[df.cluster==2]

plt.scatter(df1['petal length (cm)'],df1['petal width (cm)'],color='blue')

plt.scatter(df2['petal length (cm)'],df2['petal width (cm)'],color='green')

plt.scatter(df3['petal length (cm)'],df3['petal width (cm)'],color='yellow')

sse = []
k_rng = range(1, 10)
for k in k_rng:
    km = KMeans(n_clusters=k)
    km.fit(df)
    sse.append(km.inertia_)

plt.xlabel('K')
plt.ylabel('Sum of squared error')
plt.plot(k_rng, sse)

RESULT:

Thus, the program to implement K means clustering algorithm was implemented and executed
successfully.

Ex. No: 10 EM ALGORITHM FOR BAYESIAN NETWORKS

AIM:

To write a program to implement the EM algorithm for Bayesian Network

ALGORITHM:

Step 1: Given a set of incomplete data, start with a set of initialized parameters.

Step 2: Expectation step (E-step): using the observed available data of the dataset, try to
estimate or guess the values of the missing data. After this step, complete data is obtained
with no missing values.

Step 3: Maximization step (M-step): use the complete data prepared in the expectation
step to update the parameters.

Step 4: Repeat Step 2 and Step 3 until the estimates converge to a solution (the formal
updates are given below).
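Formally, with observed data X, hidden variables Z and parameters \theta, iteration t
performs:

\text{E-step: } Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\left[ \log p(X, Z \mid \theta) \right]
\text{M-step: } \theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)})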

PROGRAM:

import numpy as np

np.random.seed(42)

# True probabilities for our network
true_prob_A = 0.6
true_prob_B_given_A = np.array([[0.8, 0.2], [0.3, 0.7]])

# Generate observed data
sample_size = 1000
data_A = np.random.choice([0, 1], size=sample_size, p=[1-true_prob_A, true_prob_A])
data_B = np.zeros(sample_size)
for i in range(sample_size):
    data_B[i] = np.random.choice([0, 1], p=true_prob_B_given_A[data_A[i]])

def expectation_step(data_A, data_B, prob_A, prob_B_given_A):
    # E-step: Expectation
    # Compute the expected values of hidden variables (none here, since the data
    # is fully observed in this example)
    return None

def maximization_step(data_A, data_B, hidden_variables):
    # Estimate the probability of A
    prob_A = np.mean(data_A)
    # Estimate the conditional probability of B given A
    prob_B_given_A = np.zeros((2, 2))
    for a in [0, 1]:
        data_B_given_A = data_B[data_A == a]
        prob_B_given_A[a] = [np.mean(data_B_given_A == 0),
                             np.mean(data_B_given_A == 1)]
    return prob_A, prob_B_given_A

# Initial parameter guesses
estimated_prob_A = 0.5
estimated_prob_B_given_A = np.array([[0.5, 0.5], [0.5, 0.5]])

# Perform EM iterations
num_iterations = 10
for i in range(num_iterations):
    hidden_vars = expectation_step(data_A, data_B, estimated_prob_A,
                                   estimated_prob_B_given_A)
    estimated_prob_A, estimated_prob_B_given_A = maximization_step(data_A, data_B,
                                                                   hidden_vars)

# Print the estimated parameters
print("Estimated probability of A:", estimated_prob_A)
print("Estimated conditional probability of B given A:")
print(estimated_prob_B_given_A)

OUTPUT:
Estimated probability of A: 0.579
Estimated conditional probability of B given A:
[[0. 0. ]
[0.2970639 0.7029361]]

RESULT:

Thus, the program to implement the EM algorithm for Bayesian Network was implemented
and executed successfully.

Ex.No:11 BUILDING SIMPLE NEURAL NETWORK

AIM:

To write a program to implement a simple Neural Network model.

ALGORITHM:

Step 1: Define the activation function.

Step 2: Train on the input values and obtain the output for the given dataset.

Step 3: Test the network on an input outside the training set.

Step 4: Each training iteration consists of a forward pass and a backward pass over the
trained dataset (the update rule is given below).
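The backward pass below is gradient descent on the squared error with the sigmoid
activation \sigma(x) = 1 / (1 + e^{-x}), whose derivative is \sigma(x)(1 - \sigma(x)); with
learning rate 0.5, the update implemented by temp2, adj and the weight assignment is:

w \leftarrow w - 0.5 \cdot X^{T}\left[ -(y - \hat{y})\,\hat{y}\,(1 - \hat{y}) \right]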

PROGRAM

# importing dependencies
import numpy as np

# The activation function
def activation(x):
    return 1 / (1 + np.exp(-x))

weights = np.random.uniform(-1, 1, size=(2, 1))

training_inputs = np.array([[0, 0, 1, 1, 0, 1]]).reshape(3, 2)
training_outputs = np.array([[0, 1, 1]]).reshape(3, 1)

for i in range(15000):
    # forward pass
    dot_product = np.dot(training_inputs, weights)
    output = activation(dot_product)

    # backward pass
    temp2 = -(training_outputs - output) * output * (1 - output)
    adj = np.dot(training_inputs.transpose(), temp2)

    # 0.5 is the learning rate
    weights = weights - 0.5 * adj

# The testing set
test_input = np.array([1, 0])
test_output = activation(np.dot(test_input, weights))

# OR of 1, 0 is 1
print(test_output)

OUTPUT:

(Prints a value close to 1 – the network's prediction for OR(1, 0); the exact figure varies
with the random initial weights.)

RESULT:

Thus, the program to build a simple Neural Network model was implemented and executed
successfully.

Ex.No:12 BUILDING DEEP LEARNING NEURAL NETWORK MODEL

AIM:

To write a program to implement a deep learning Neural Network model.

ALGORITHM:

Step 1: Load data from the text file using loadtxt from numpy.

Step 2: Import Sequential from tensorflow.keras.models.

Step 3: Import Dense from tensorflow.keras.layers.

Step 4: Load the dataset from the path 'C:/python/pima-indians-
diabetes.csv' with delimiter=','.

Step 5: Split the given dataset into input and output variables.

Step 6: Fit the Keras model on the given dataset (the loss it minimizes is given below).
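The model is compiled with loss='binary_crossentropy', which for labels y_i \in \{0, 1\}
and predicted probabilities \hat{y}_i is:

L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]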

PROGRAM:

from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

dataset = loadtxt('C:/python/pima-indians-diabetes.csv', delimiter=',')

# split into input (X) and output (y) variables
X = dataset[:, 0:8]
y = dataset[:, 8]

# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)

# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))

# fit again without progress output, then evaluate silently
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
_, accuracy = model.evaluate(X, y, verbose=0)

# make probability predictions with the model
predictions = model.predict(X)

# round predictions
rounded = [round(x[0]) for x in predictions]

# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)

# The complete program, tied together:
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# load the dataset
dataset = loadtxt('C:/python/pima-indians-diabetes.csv', delimiter=',')

# split into input (X) and output (y) variables
X = dataset[:, 0:8]
y = dataset[:, 8]

# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10, verbose=0)

# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)

# summarize the first 5 cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))

RESULT:

Thus, the program to build a deep Neural Network model was implemented and executed
successfully.
