
BHARATHIDASAN ENGINEERING COLLEGE

(Approved by AICTE, New Delhi and Affiliated to Anna University)


NATTRAMPALLI – 635 854.

NAME :

REGISTER NUMBER :

SUBJECT CODE / NAME : CS3491 – ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING LABORATORY
YEAR / SEMESTER : II / IV

MAY – JUNE - 2025

DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
BHARATHIDASAN ENGINEERING COLLEGE
(Approved by AICTE, New Delhi and Affiliated to Anna University)
NATTRAMPALLI – 635 854.

Department of Computer Science & Engineering

BONAFIDE CERTIFICATE

This is to certify that Mr. / Ms. ______________________________ is
studying in the Department of Computer Science and Engineering during the
academic year 2025 (May – June), Semester 4. This record is submitted by
the above student for the University Practical Examination in
CS3491 – Artificial Intelligence and Machine Learning Laboratory.

Staff In-Charge Head of the Department

Register Number:

Submitted for the practical examination held on …………………. at


Bharathidasan Engineering College, Nattrampalli.

Internal Examiner External Examiner


TABLE OF CONTENTS
Exp.No Date Name of the Experiment Page No Signature
Exp.No:01 IMPLEMENTING BREADTH-FIRST SEARCH (BFS) AND
Date: DEPTH-FIRST SEARCH (DFS)

Aim:

Algorithm:
Program: BFS

from collections import defaultdict


class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def BFS(self, s):
        visited = [False] * (max(self.graph) + 1)
        queue = []
        queue.append(s)
        visited[s] = True
        while queue:
            s = queue.pop(0)
            print(s, end=" ")
            for i in self.graph[s]:
                if visited[i] == False:
                    queue.append(i)
                    visited[i] = True


if __name__ == "__main__":
    g = Graph()
    g.addEdge(0, 1)
    g.addEdge(0, 2)
    g.addEdge(1, 2)
    g.addEdge(2, 0)
    g.addEdge(2, 3)
    g.addEdge(3, 3)
    print("Following is the BFS traversal (starting from vertex 2)")
    g.BFS(2)
Program DFS

from collections import defaultdict


class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def DFSUtil(self, v, visited):
        visited.add(v)
        print(v, end=' ')
        for neighbour in self.graph[v]:
            if neighbour not in visited:
                self.DFSUtil(neighbour, visited)

    def DFS(self, v):
        visited = set()
        self.DFSUtil(v, visited)


if __name__ == "__main__":
    g = Graph()
    g.addEdge(0, 1)
    g.addEdge(0, 2)
    g.addEdge(1, 2)
    g.addEdge(2, 0)
    g.addEdge(2, 3)
    g.addEdge(3, 3)
    print("Following is Depth First Traversal (starting from vertex 2)")
    # Function call
    g.DFS(2)
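
Note: the recursive DFSUtil above can hit Python's default recursion limit on very deep graphs. A minimal iterative sketch (an optional extra method on the same Graph class, not part of the prescribed program) is:

    def DFS_iterative(self, start):
        # Non-recursive DFS using an explicit stack; the visit order may
        # differ slightly from the recursive version.
        visited = set()
        stack = [start]
        while stack:
            v = stack.pop()
            if v not in visited:
                visited.add(v)
                print(v, end=' ')
                # Push neighbours in reverse so the first listed neighbour is expanded first
                stack.extend(reversed(self.graph[v]))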
Output:
BFS
Following is the BFS traversal (starting from vertex 2)
2 0 3 1
DFS
Following is Depth First Traversal (starting from vertex 2)
2 0 1 3

Result:
Exp.No:02 IMPLEMENTING INFORMED SEARCH ALGORITHMS LIKE
Date: A* AND MEMORY-BOUNDED A*

Aim:

Algorithm:
Program:
from queue import PriorityQueue

v = 14
graph = [[] for i in range(v)]


def best_first_search(actual_Src, target, n):
    visited = [False] * n
    pq = PriorityQueue()
    pq.put((0, actual_Src))
    visited[actual_Src] = True
    while pq.empty() == False:
        u = pq.get()[1]
        print(u, end=" ")
        if u == target:
            break
        for v, c in graph[u]:
            if visited[v] == False:
                visited[v] = True
                pq.put((c, v))
    print()


def addedge(x, y, cost):
    graph[x].append((y, cost))
    graph[y].append((x, cost))


if __name__ == "__main__":
    addedge(0, 1, 3)
    addedge(0, 2, 6)
    addedge(0, 3, 5)
    addedge(1, 4, 9)
    addedge(1, 5, 8)
    addedge(2, 6, 12)
    addedge(2, 7, 14)
    addedge(3, 8, 7)
    addedge(8, 9, 5)
    addedge(8, 10, 6)
    addedge(9, 11, 1)
    addedge(9, 12, 10)
    addedge(9, 13, 2)
    source = 0
    target = 9
    best_first_search(source, target, v)

Memory Bounded A*

import heapq
import math


class PriorityQueue:
    """Priority queue implementation using heapq"""
    def __init__(self):
        self.elements = []

    def is_empty(self):
        return len(self.elements) == 0

    def put(self, item, priority):
        heapq.heappush(self.elements, (priority, item))

    def get(self):
        return heapq.heappop(self.elements)[1]


class Node:
    """Node class for representing the search tree"""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

    def __lt__(self, other):
        return self.path_cost + heuristic(self.state) < other.path_cost + heuristic(other.state)

    def __eq__(self, other):
        return self.state == other.state


def heuristic(state):
    """Heuristic function for estimating the cost to reach the goal state"""
    # Example heuristic function - Euclidean distance to the goal
    goal_state = (0, 0)  # Replace with actual goal state
    return math.sqrt((state[0] - goal_state[0]) ** 2 + (state[1] - goal_state[1]) ** 2)


def memory_bounded_a_star_search(start_state, max_memory):
    """Memory-bounded A* search algorithm"""
    frontier = PriorityQueue()
    frontier.put(Node(start_state), 0)
    explored = set()
    memory = {start_state: 0}
    while not frontier.is_empty():
        node = frontier.get()
        if node.state not in explored:
            explored.add(node.state)
            if is_goal_state(node.state):
                return get_solution_path(node)
            for child_state, action, step_cost in get_successor_states(node.state):
                child_node = Node(child_state, node, action, node.path_cost + step_cost)
                child_node_f = child_node.path_cost + heuristic(child_state)
                if child_state not in memory or child_node_f < memory[child_state]:
                    frontier.put(child_node, child_node_f)
                    memory[child_state] = child_node_f
                # Evict remembered states until the memory estimate fits the budget
                while memory_usage(memory) > max_memory:
                    state_to_remove = min(memory, key=memory.get)
                    del memory[state_to_remove]
    return None


def get_successor_states(state):
    """Function for generating successor states"""
    # Replace with actual successor state generation logic
    return []


def is_goal_state(state):
    """Function for checking if a state is the goal state"""
    # Replace with actual goal state checking logic
    return False


def get_solution_path(node):
    """Function for retrieving the solution path"""
    path = []
    while node.parent is not None:
        path.append((node.action, node.state))
        node = node.parent
    path.reverse()
    return path


def memory_usage(memory):
    """Function for estimating the memory usage of a dictionary"""
    return sum(memory.values())
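
The skeleton above leaves get_successor_states and is_goal_state as placeholders, so it returns None as written. A minimal sketch of how they might be filled in for a 4-connected grid (the grid, the goal (0, 0), and the unit step costs below are illustrative assumptions, not part of the prescribed program):

def get_successor_states(state):
    # Return (child_state, action, step_cost) triples for a 4-connected grid
    x, y = state
    moves = {'up': (x, y + 1), 'down': (x, y - 1),
             'left': (x - 1, y), 'right': (x + 1, y)}
    return [(pos, action, 1) for action, pos in moves.items()]

def is_goal_state(state):
    # Goal fixed at (0, 0) to match the heuristic() example above
    return state == (0, 0)

# Example call: search from (3, 4) with an (arbitrary) memory budget of 1000
# path = memory_bounded_a_star_search((3, 4), max_memory=1000)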
Output:
A*
0 1 3 2 8 9
Memory Bounded A*
(The skeleton returns None as given, since get_successor_states and is_goal_state are
placeholders; no search trace is produced until they are implemented.)

Result:
Exp.No:03
IMPLEMENTING NAÏVE BAYES
Date:

Aim:

Algorithm:
Program:

import math
import random
import csv
def encode_class(mydata):
    classes = []
    for i in range(len(mydata)):
        if mydata[i][-1] not in classes:
            classes.append(mydata[i][-1])
    for i in range(len(classes)):
        for j in range(len(mydata)):
            if mydata[j][-1] == classes[i]:
                mydata[j][-1] = i
    return mydata

def splitting(mydata, ratio):
    train_num = int(len(mydata) * ratio)
    train = []
    test = list(mydata)
    while len(train) < train_num:
        index = random.randrange(len(test))
        train.append(test.pop(index))
    return train, test

def groupUnderClass(mydata):
    dict = {}
    for i in range(len(mydata)):
        if (mydata[i][-1] not in dict):
            dict[mydata[i][-1]] = []
        dict[mydata[i][-1]].append(mydata[i])
    return dict

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def std_dev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x - avg, 2) for x in numbers]) / float(len(numbers) - 1)
    return math.sqrt(variance)

def MeanAndStdDev(mydata):
    info = [(mean(attribute), std_dev(attribute)) for attribute in zip(*mydata)]
    del info[-1]
    return info

def MeanAndStdDevForClass(mydata):
    info = {}
    dict = groupUnderClass(mydata)
    for classValue, instances in dict.items():
        info[classValue] = MeanAndStdDev(instances)
    return info

def calculateGaussianProbability(x, mean, stdev):
    expo = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * expo

def calculateClassProbabilities(info, test):
    probabilities = {}
    for classValue, classSummaries in info.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, std_dev = classSummaries[i]
            x = test[i]
            probabilities[classValue] *= calculateGaussianProbability(x, mean, std_dev)
    return probabilities

def predict(info, test):
    probabilities = calculateClassProbabilities(info, test)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

def getPredictions(info, test):
    predictions = []
    for i in range(len(test)):
        result = predict(info, test[i])
        predictions.append(result)
    return predictions

def accuracy_rate(test, predictions):
    correct = 0
    for i in range(len(test)):
        if test[i][-1] == predictions[i]:
            correct += 1
    return (correct / float(len(test))) * 100.0

# driver code
filename = r'C:\Users\exam2\Desktop\thi.csv'
split_ratio = 0.7
with open(filename, 'r') as csvfile:
    lines = csv.reader(csvfile)
    mydata = list(lines)
for i in range(len(mydata)):
    mydata[i] = [float(x) for x in mydata[i]]
train_data, test_data = splitting(mydata, split_ratio)
print('Total number of examples are: ', len(mydata))
print('Out of these, training examples are: ', len(train_data))
print("Test examples are: ", len(test_data))
info = MeanAndStdDevForClass(train_data)
predictions = getPredictions(info, test_data)
accuracy = accuracy_rate(test_data, predictions)
print("Accuracy of your model is: ", accuracy)
Output:

Total number of examples are: 200


Out of these, training examples are: 140
Test examples are: 60
Accuracy of your model is: 71.2376788

Result:
Exp.No:04
IMPLEMENTING BAYESIAN NETWORKS
Date:

Aim:

Algorithm:
Program:
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
# Define a small dataset
data = pd.DataFrame(data={'age': [23, 50, 30, 40, 35, 52, 47],
'sex': [1, 0, 1, 1, 0, 1, 0],
'cp': [3, 2, 1, 1, 2, 3, 1],
'trestbps': [145, 130, 120, 140, 160, 150, 130],
'chol': [233, 250, 204, 236, 354, 192, 294],
'fbs': [1, 0, 0, 0, 0, 1, 0],
'restecg': [0, 1, 0, 1, 1, 1, 0],
'thalach': [150, 187, 172, 178, 163, 148, 153],
'exang': [0, 0, 0, 0, 1, 0, 0],
'oldpeak': [2.3, 3.5, 1.4, 0.8, 0.6, 0.4, 1.3],
'slp': [0, 0, 2, 2, 2, 1, 1],
'caa': [0, 0, 0, 0, 0, 0, 0],
'thall': [1, 2, 2, 2, 2, 1, 2],
'output': [1, 1, 1, 1, 1, 1, 1]})
# Define the structure of the model
model = BayesianNetwork([('age', 'trestbps'), ('sex', 'chol'), ('cp', 'fbs'), ('restecg', 'thalach'),
('exang', 'oldpeak'), ('slp', 'caa'), ('thall', 'output')])
# Fit the data to the model
model.fit(data, estimator=MaximumLikelihoodEstimator)
# Perform inference on the model
infer = VariableElimination(model)
# Query the 'output' variable given some evidence
q = infer.query(variables=['output'], evidence={'age': 50, 'sex': 1})
# Print the query result
print(q)
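
A couple of optional checks can be added after fitting (a sketch reusing the model and infer objects above; the evidence value is arbitrary):

# Verify that the fitted CPDs are consistent with the network structure
print(model.check_model())
# Inspect the CPD learned for the 'output' node
print(model.get_cpds('output'))
# Query with a different piece of evidence, e.g. chest-pain type
print(infer.query(variables=['output'], evidence={'cp': 3}))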
Output:

+-----------+---------------+
| output | phi(output) |
+===========+===============+
| output(1) | 1.0000 |
+-----------+---------------+

Result:
Exp.No:05
BUILDING THE REGRESSION MODEL
Date:

Aim:

Algorithm:
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Create a simple dataset (you can replace this with your own data)
x = np.array([5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6]) # Age of cars
y = np.array([99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86]) # Speed of cars
# Calculate mean of x and y
x_mean = np.mean(x)
y_mean = np.mean(y)
# Calculate slope (m) and intercept (c) for the regression line
numerator = np.sum((x - x_mean) * (y - y_mean))
denominator = np.sum((x - x_mean) ** 2)
slope = numerator / denominator
intercept = y_mean - slope * x_mean
# Create the regression line function
def predict_speed(age):
    return slope * age + intercept
# Make predictions for the entire dataset
predicted_speeds = predict_speed(x)
# Calculate R-squared value
total_variance = np.sum((y - y_mean) ** 2)
residual_variance = np.sum((y - predicted_speeds) ** 2)
r_squared = 1 - (residual_variance / total_variance)
print(f"Slope (m): {slope:.2f}")
print(f"Intercept (c): {intercept:.2f}")
print(f"R-squared value: {r_squared:.2f}")
# Visualize the regression line
plt.scatter(x, y, label="Actual data")
plt.plot(x, predicted_speeds, color="red", label="Regression line")
plt.xlabel("Age of Cars")
plt.ylabel("Speed (mph)")
plt.title("Simple Linear Regression from Scratch")
plt.legend()
plt.show()
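
The closed-form slope and intercept can be cross-checked against NumPy's least-squares fit (an optional sanity check, not part of the recorded run):

# np.polyfit returns the coefficients of a degree-1 fit, highest power first
fit_slope, fit_intercept = np.polyfit(x, y, 1)
print(f"Cross-check with np.polyfit -> slope: {fit_slope:.2f}, intercept: {fit_intercept:.2f}")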
Output:
Slope (m): -1.75
Intercept (c): 103.11
R-squared value: 0.58

Result:
Exp.No:06
BUILDING DECISION TREES
Date:

Aim:

Algorithm:
Program:
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Load your dataset
# data = pd.read_csv('path_to_your_data.csv')
# For demonstration, let's create a simple dataset
data = pd.DataFrame({
'Feature1': [1, 2, 3, 4, 5],
'Feature2': [5, 4, 3, 2, 1],
'Target': [1.2, 2.1, 3.5, 4.8, 5.6]
})
# Split the data into features and target
X = data.drop('Target', axis=1)
y = data['Target']
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2, random_state=42)
# Initialize the Decision Tree Regressor
decision_tree = DecisionTreeRegressor(random_state=42)
# Fit the model on the training data
decision_tree.fit(X_train, y_train)
# Predict on the test data
y_pred_tree = decision_tree.predict(X_test)
# Calculate the Mean Squared Error for the Decision Tree
mse_tree = mean_squared_error(y_test, y_pred_tree)
print(f'Decision Tree Mean Squared Error: {mse_tree:.4f}')
# Initialize the Random Forest Regressor
random_forest = RandomForestRegressor(random_state=42)
# Fit the model on the training data
random_forest.fit(X_train, y_train)
# Predict on the test data
y_pred_forest = random_forest.predict(X_test)
# Calculate the Mean Squared Error for the Random Forest
mse_forest = mean_squared_error(y_test, y_pred_forest)
print(f'Random Forest Mean Squared Error: {mse_forest:.4f}')
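
To see the splits the fitted decision tree actually learned, scikit-learn can print it as text (an optional inspection step using the decision_tree object above):

from sklearn.tree import export_text
print(export_text(decision_tree, feature_names=['Feature1', 'Feature2']))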
Output:
Decision Tree Mean Squared Error: 0.8100
Random Forest Mean Squared Error: 0.1475

Result:
Exp.No:07
BUILDING SVM MODEL
Date:

Aim:

Algorithm:
Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
# Load the Fish dataset
dataset_url = "https://raw.githubusercontent.com/harikabonthu/SupportVectorClassifier/main/datasets_229906_491820_Fish.csv"
fish = pd.read_csv(dataset_url)
# Split the data into training and testing sets
X = fish.drop(columns=['Species']) # Features
y = fish['Species'] # Target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train an SVM model with a linear kernel
svm_linear = SVC(kernel='linear')
svm_linear.fit(X_train, y_train)
# Predict the species labels for the test set
y_pred = svm_linear.predict(X_test)
# Evaluate the model's accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f'Linear SVM accuracy: {accuracy:.2f}')
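
SVMs are sensitive to feature scale, so standardising the features often changes the score. A sketch of the same experiment with a scaling pipeline (optional, reusing the train/test split above):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

svm_scaled = make_pipeline(StandardScaler(), SVC(kernel='linear'))
svm_scaled.fit(X_train, y_train)
print(f'Scaled linear SVM accuracy: {accuracy_score(y_test, svm_scaled.predict(X_test)):.2f}')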
Output:

Linear SVM accuracy: 0.96

Result:
Exp.No:08
IMPLEMENT ENSEMBLING TECHNIQUES
Date:

Aim:

Algorithm:
Program:
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
# 1. Load the dataset and split it into training and testing sets
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
n_redundant=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# 2. Choose the base models to be included in the ensemble
log_clf = LogisticRegression(random_state=42)
svm_clf = SVC(probability=True, random_state=42)
dt_clf = DecisionTreeClassifier(random_state=42)
# 3. Train each base model on the training set
for clf in (log_clf, svm_clf, dt_clf):
    clf.fit(X_train, y_train)
# 4. Combine the predictions of the base models using the chosen ensembling technique
voting_clf = VotingClassifier(estimators=[('lr', log_clf), ('svc', svm_clf), ('dt', dt_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
# 5. Evaluate the performance of the ensemble model on the testing set
y_pred = voting_clf.predict(X_test)
print(f'Ensemble model accuracy: {accuracy_score(y_test, y_pred)}')
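
For comparison, the accuracy of each base model on the same test set can be printed alongside the ensemble (a small optional addition reusing the fitted classifiers above):

for name, clf in (('Logistic Regression', log_clf), ('SVM', svm_clf), ('Decision Tree', dt_clf)):
    print(f'{name} accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}')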
Output:

Ensemble model accuracy: 0.895

Result:
Exp.No:09
IMPLEMENT CLUSTERING ALGORITHMS
Date:

Aim:

Algorithm:
Program:
import warnings
warnings.filterwarnings('ignore', category=UserWarning, module='sklearn')
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
import matplotlib.pyplot as plt
# Generate a random dataset with 100 samples and 4 clusters
X, y = make_blobs(n_samples=100, centers=4, random_state=42)
# Create a K-Means clustering object with 4 clusters
kmeans = KMeans(n_clusters=4, random_state=42)
# Fit the K-Means model to the dataset
kmeans.fit(X)
# Create a scatter plot of the data colored by K-Means cluster assignment
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
plt.title("K-Means Clustering")
plt.show()
# Create a Hierarchical clustering object with 4 clusters
hierarchical = AgglomerativeClustering(n_clusters=4)
# Fit the Hierarchical model to the dataset
hierarchical.fit(X)
# Create a scatter plot of the data colored by Hierarchical cluster assignment
plt.scatter(X[:, 0], X[:, 1], c=hierarchical.labels_)
plt.title("Hierarchical Clustering")
plt.show()
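
The two clusterings can also be compared numerically with the silhouette score (higher is better; an optional addition reusing X and the fitted models above):

from sklearn.metrics import silhouette_score
print('K-Means silhouette score:      ', silhouette_score(X, kmeans.labels_))
print('Hierarchical silhouette score: ', silhouette_score(X, hierarchical.labels_))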
Output:
Result:
Exp.No:10
IMPLEMENT THE EXPECTATION-MAXIMIZATION (EM)
Date:

Aim:

Algorithm:
Program:
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
from pgmpy.factors.discrete import TabularCPD
import numpy as np
import pandas as pd
# Define the structure of the Bayesian network
model = BayesianNetwork([('C', 'S'), ('D', 'S')])
# Define the conditional probability distributions (CPDs)
cpd_c = TabularCPD('C', 2, [[0.5], [0.5]])
cpd_d = TabularCPD('D', 2, [[0.5], [0.5]])
cpd_s = TabularCPD('S', 2, [[0.8, 0.6, 0.6, 0.2], [0.2, 0.4, 0.4,
0.8]],
evidence=['C', 'D'], evidence_card=[2, 2])
# Add the CPDs to the model
model.add_cpds(cpd_c, cpd_d, cpd_s)
# Generate some data
data = np.random.randint(low=0, high=2, size=(5000, 3))
# Convert the numpy ndarray to a pandas DataFrame
data = pd.DataFrame(data, columns=['C', 'D', 'S'])  # Add column names
# Create a Maximum Likelihood Estimator
mle = MaximumLikelihoodEstimator(model, data)
# Estimate the CPDs for all variables in the model
model_estimated = mle.get_parameters()
# Create a Variable Elimination object to perform inference
infer = VariableElimination(model)
# Perform inference on some observed evidence
query = infer.query(['S'], evidence={'C': 1})
print(query)
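
The program above estimates parameters with maximum likelihood. pgmpy also ships an EM-based estimator; assuming an installed pgmpy version that provides pgmpy.estimators.ExpectationMaximization, the same parameters could be learned with EM instead (useful when some variables are latent or partially observed), e.g.:

from pgmpy.estimators import ExpectationMaximization
# EM-based parameter estimation on the same structure and data
em = ExpectationMaximization(model, data)
cpds_em = em.get_parameters()
print(cpds_em[0])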
Output:

+------+----------+
| S | phi(S) |
+======+==========+
| S(0) | 0.4000 |
+------+----------+
| S(1) | 0.6000 |
+------+----------+

Result:
Exp.No:11
BUILD SIMPLE NN MODELS
Date:

Aim:

Algorithm:
Program:
import tensorflow as tf
from tensorflow import keras
# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Normalize the input data
x_train = x_train / 2.0
x_test = x_test / 2.0
# Define the model architecture
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(130, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test accuracy:', test_acc)
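
After training, a quick spot check on one test image can be added (optional sketch):

import numpy as np
# predict() returns one probability vector per image; argmax picks the most likely digit
probs = model.predict(x_test[:1])
print('Predicted digit:', np.argmax(probs[0]), '| true label:', y_test[0])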
Output:
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 755us/step - accuracy: 0.8363 - loss: 3.8895 - val_accuracy: 0.9264 - val_loss: 0.3152
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 713us/step - accuracy: 0.9335 - loss: 0.2749 - val_accuracy: 0.9382 - val_loss: 0.2474
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 710us/step - accuracy: 0.9450 - loss: 0.2148 - val_accuracy: 0.9473 - val_loss: 0.2126
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 698us/step - accuracy: 0.9523 - loss: 0.1770 - val_accuracy: 0.9513 - val_loss: 0.2109
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 715us/step - accuracy: 0.9597 - loss: 0.1538 - val_accuracy: 0.9445 - val_loss: 0.2121
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 711us/step - accuracy: 0.9630 - loss: 0.1382 - val_accuracy: 0.9457 - val_loss: 0.2214
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 706us/step - accuracy: 0.9676 - loss: 0.1239 - val_accuracy: 0.9584 - val_loss: 0.1854
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 710us/step - accuracy: 0.9657 - loss: 0.1300 - val_accuracy: 0.9588 - val_loss: 0.1870
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 699us/step - accuracy: 0.9700 - loss: 0.1132 - val_accuracy: 0.9524 - val_loss: 0.2495
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 705us/step - accuracy: 0.9701 - loss: 0.1165 - val_accuracy: 0.9530 - val_loss: 0.2668
313/313 - 0s - 458us/step - accuracy: 0.9530 - loss: 0.2668
Test accuracy: 0.953000009059906
Result:
Exp.No:12
BUILD DEEP LEARNING NN MODELS
Date:

Aim:

Algorithm:
Program:
import tensorflow as tf
from tensorflow import keras
# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Normalize the input data
x_train = x_train / 255.0
x_test = x_test / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
# Compile the model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test accuracy:', test_acc)
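
Because the final Dense layer outputs raw logits (the loss is built with from_logits=True), class probabilities for inference can be obtained by appending a Softmax layer (optional sketch):

import numpy as np
probability_model = keras.Sequential([model, keras.layers.Softmax()])
predictions = probability_model.predict(x_test[:5])
print('Predicted digits:', np.argmax(predictions, axis=1))
print('True labels:     ', y_test[:5])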
Output:
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 741us/step - accuracy: 0.8578 - loss: 0.4806 - val_accuracy: 0.9591 - val_loss: 0.1417
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 697us/step - accuracy: 0.9549 - loss: 0.1550 - val_accuracy: 0.9656 - val_loss: 0.1127
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 698us/step - accuracy: 0.9670 - loss: 0.1083 - val_accuracy: 0.9728 - val_loss: 0.0878
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 699us/step - accuracy: 0.9726 - loss: 0.0911 - val_accuracy: 0.9758 - val_loss: 0.0792
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 690us/step - accuracy: 0.9757 - loss: 0.0746 - val_accuracy: 0.9767 - val_loss: 0.0750
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 695us/step - accuracy: 0.9798 - loss: 0.0636 - val_accuracy: 0.9786 - val_loss: 0.0694
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 719us/step - accuracy: 0.9832 - loss: 0.0542 - val_accuracy: 0.9803 - val_loss: 0.0651
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 691us/step - accuracy: 0.9840 - loss: 0.0510 - val_accuracy: 0.9780 - val_loss: 0.0738
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 697us/step - accuracy: 0.9858 - loss: 0.0451 - val_accuracy: 0.9791 - val_loss: 0.0726
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 695us/step - accuracy: 0.9858 - loss: 0.0432 - val_accuracy: 0.9800 - val_loss: 0.0706
313/313 - 0s - 350us/step - accuracy: 0.9800 - loss: 0.0706
Test accuracy: 0.9800000190734863
Result:
