
CHENDHURAN COLLEGE OF ENGINEERING AND TECHNOLOGY

Lena Vilakku, Pilivalam (Po), Thirumayam (Tk), Pudukkottai (Dist.) – 622 507.

Department of Computer Science and Engineering

(Regulation 2021)

CS3491 - ARTIFICIAL INTELLIGENCE & MACHINE LEARNING
Ex No: 1 Implementation of Uninformed Search Algorithms (BFS, DFS)

Aim:

To write a Python program for implementing the uninformed search algorithms (BFS, DFS).

Breadth First Search

Algorithm:

1. Create an empty set visited.

2. Create a queue and enqueue the starting node.

3. While the queue is not empty:

4. Dequeue the front node from the queue.

5. If the node has not been visited, add it to the visited set.

6. Enqueue all neighbors of the node that have not been visited.

7. Return the visited set.


Program:

def bfs(graph, start):
    # Track nodes that have already been visited
    visited = set()
    # Use a list as a FIFO queue, seeded with the starting node
    queue = [start]
    while queue:
        vertex = queue.pop(0)
        if vertex not in visited:
            visited.add(vertex)
            # Enqueue only the neighbors that have not been visited yet
            queue.extend(graph[vertex] - visited)
    return visited

graph = {'A': set(['B', 'C']),
         'B': set(['A', 'D', 'E']),
         'C': set(['A', 'F']),
         'D': set(['B']),
         'E': set(['B', 'F']),
         'F': set(['C', 'E'])}

start_node = 'A'
bfs_result = bfs(graph, start_node)
print("BFS result:", bfs_result)

Output:

BFS result: {'F', 'A', 'D', 'C', 'B', 'E'}
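Because bfs() returns a Python set, the printed result does not show the order in which nodes were actually visited. A minimal variant (an optional sketch, not part of the recorded output above) that records the visit order in a list:

def bfs_order(graph, start):
    visited = set()
    order = []                      # nodes in the order they were dequeued
    queue = [start]
    while queue:
        vertex = queue.pop(0)
        if vertex not in visited:
            visited.add(vertex)
            order.append(vertex)
            queue.extend(graph[vertex] - visited)
    return order

print("BFS visit order:", bfs_order(graph, start_node))
# Starts at 'A'; the order of the later nodes depends on set iteration order.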


Depth First Search

Algorithm:

1. Create an empty set visited.

2. Call the dfs_helper function with the starting node and visited set as arguments.

3. Return the visited set.

4. The dfs_helper function:

5. If the current node has not been visited, add it to the visited set.

6. For each unvisited neighbor of the current node:

7. Recursively call the dfs_helper function with the neighbor node and visited set as
arguments.
Program:

def dfs(graph, start, visited=None):
    # Create the visited set on the first call
    if visited is None:
        visited = set()
    visited.add(start)
    # Recurse into each neighbor that has not been visited yet
    for next_node in graph[start] - visited:
        dfs(graph, next_node, visited)
    return visited

graph = {'A': set(['B', 'C']),
         'B': set(['A', 'D', 'E']),
         'C': set(['A', 'F']),
         'D': set(['B']),
         'E': set(['B', 'F']),
         'F': set(['C', 'E'])}

start_node = 'A'
dfs_result = dfs(graph, start_node)
print("DFS result:", dfs_result)

Output:

DFS result: {'C', 'F', 'B', 'A', 'D', 'E'}


Result:

Thus the given uninformed search algorithms (BFS, DFS) were implemented in Python and executed successfully.
Ex No: 2 Implementation of Informed Search Algorithms (A*, memory-bounded A*)

A* Search Algorithm:

Algorithm:

1. Initialize the open list with the starting node and a cost of zero.

2. Initialize an empty set of visited nodes.

3. While the open list is not empty:

a. Remove the node with the lowest cost from the open list.

b. If the removed node is the goal node, return the solution.

c. Add the removed node to the set of visited nodes.

d. For each neighbor of the removed node:

i. If the neighbor has not been visited before, calculate its cost as the actual cost from the starting node to the neighbor (through the removed node) plus the estimated cost from the neighbor to the goal node.

ii. If the neighbor is not in the open list, add it to the open list with its cost.

iii. If the neighbor is already in the open list with a higher cost, update its cost in the open list.

4. If the open list becomes empty and no solution is found, return failure.
Program:

from queue import PriorityQueue

def astar(graph, start, goal):
    # The frontier holds (priority, node) pairs so the lowest f-cost is popped first
    frontier = PriorityQueue()
    frontier.put((0, start))
    came_from = {}
    cost_so_far = {}
    came_from[start] = None
    cost_so_far[start] = 0
    while not frontier.empty():
        _, current = frontier.get()
        if current == goal:
            break
        for next_node, weight in graph[current].items():
            new_cost = cost_so_far[current] + weight
            if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                cost_so_far[next_node] = new_cost
                # f = g (cost so far) + h (heuristic estimate to the goal)
                priority = new_cost + heuristic(next_node, goal)
                frontier.put((priority, next_node))
                came_from[next_node] = current
    return came_from, cost_so_far

def heuristic(a, b):
    # Manhattan distance heuristic
    (x1, y1) = a
    (x2, y2) = b
    return abs(x1 - x2) + abs(y1 - y2)

graph = {
    (0, 0): {(0, 1): 1, (1, 0): 1},
    (0, 1): {(0, 0): 1, (1, 1): 1},
    (1, 0): {(0, 0): 1, (1, 1): 1},
    (1, 1): {(0, 1): 1, (1, 0): 1}
}

start = (0, 0)
goal = (1, 1)
came_from, cost_so_far = astar(graph, start, goal)

# Reconstruct the path from goal back to start using the came_from links
path = []
current = goal
while current != start:
    path.append(current)
    current = came_from[current]
path.append(start)
path.reverse()

print("A* result - Path:", path)
print("A* result - Cost:", cost_so_far[goal])

Output:

A* result - Path: [(0, 0), (1, 0), (1, 1)]
A* result - Cost: 2
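The path reconstruction loop in the program can be wrapped in a small helper so it can be reused for other start/goal pairs. A minimal sketch (reconstruct_path is a hypothetical helper name, built on the came_from dictionary returned by astar):

def reconstruct_path(came_from, start, goal):
    # Walk backwards from the goal to the start along the parent links
    path = [goal]
    while path[-1] != start:
        path.append(came_from[path[-1]])
    path.reverse()
    return path

came_from, cost_so_far = astar(graph, (0, 0), (1, 1))
print(reconstruct_path(came_from, (0, 0), (1, 1)))
# Prints one of the shortest paths, e.g. [(0, 0), (1, 0), (1, 1)], depending on queue order.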


Memory-Bounded A* Search Algorithm:

Algorithm:

1. Initialize a priority queue with the starting node and a priority of zero.

2. Initialize an empty set of visited nodes.

3. While the priority queue is not empty:

a. Remove the node with the lowest priority value from the queue.

b. If the removed node is the goal node, return the solution.

c. Add the removed node to the set of visited nodes.

d. For each neighbor of the removed node:

i. If the neighbor has not been visited before, calculate its priority as the actual cost from the starting node to the neighbor (through the removed node) plus the estimated cost from the neighbor to the goal node.

ii. If the priority of the neighbor is less than or equal to the memory limit, add the neighbor to the priority queue with its priority.

4. If the priority queue becomes empty and no solution is found, return failure.
Program:

import heapq

def memory_bounded_a_star(start_node, goal_node, heuristic, memory_limit):
    visited = set()
    # Queue entries are (priority, node, actual cost so far)
    queue = [(heuristic(start_node, goal_node), start_node, 0)]
    while queue:
        _, current_node, actual_cost = heapq.heappop(queue)
        if current_node == goal_node:
            return actual_cost
        visited.add(current_node)
        for neighbor, cost in current_node.neighbors:
            if neighbor not in visited:
                priority = actual_cost + cost + heuristic(neighbor, goal_node)
                # Only keep nodes whose priority fits within the memory limit
                if priority <= memory_limit:
                    heapq.heappush(queue, (priority, neighbor, actual_cost + cost))
    return None

# Example usage
class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []

    def add_neighbor(self, neighbor, cost):
        self.neighbors.append((neighbor, cost))

    def __lt__(self, other):
        # Tie-breaker so heapq can compare entries with equal priority
        return self.val < other.val

def heuristic(node, goal):
    return abs(node.val - goal.val)

start = Node(1)
n2 = Node(2)
n3 = Node(3)
n4 = Node(4)
n5 = Node(5)
goal = Node(6)

start.add_neighbor(n2, 1)
start.add_neighbor(n3, 2)
n2.add_neighbor(n4, 3)
n3.add_neighbor(n4, 1)
n3.add_neighbor(n5, 2)
n4.add_neighbor(goal, 3)
n5.add_neighbor(goal, 2)

result = memory_bounded_a_star(start, goal, heuristic, 8)
if result:
    print("Memory-bounded A* found a solution with cost", result)
else:
    print("Memory-bounded A* was unable to find a solution within the memory limit")

Output:

Memory-bounded A* found a solution with cost 7
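The memory_limit argument acts as an upper bound on the priority (cost so far plus heuristic estimate) a node may have and still be kept on the queue. A usage sketch (an assumption about the expected behaviour, not a recorded run): with a bound smaller than the cheapest possible path estimate, the queue should eventually empty and the function return None.

# Hypothetical tighter bound on the same graph as above
result = memory_bounded_a_star(start, goal, heuristic, 5)
if result is None:
    print("No solution found within a memory limit of 5")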

Result:

Thus the given informed search algorithms (A*, memory-bounded A*) were executed successfully.
Ex No: 3 Implement naïve Bayes models
Aim:

To implement naïve Bayes models using the iris dataset.

Algorithm:

1. Collect and preprocess the training data: This involves cleaning, transforming, and converting the data into a format that can be used by the algorithm.

2. Calculate the prior probabilities: Prior probabilities are the probabilities of each class occurring in the training data. They can be calculated by dividing the number of instances in each class by the total number of instances.

3. Calculate the conditional probabilities: For each feature in the data, calculate the conditional probability of the feature given the class. This can be done by dividing the number of instances of the feature in each class by the total number of instances in that class.

4. Make predictions: To classify a new instance, calculate the probability of the instance belonging to each class by multiplying the prior probability and the conditional probabilities of each feature. The class with the highest probability is the predicted class for the instance (a hand-worked sketch follows this list).

5. Evaluate the model: Use metrics such as accuracy, precision, recall, and F1 score to evaluate the performance of the model on the test data.

6. Improve the model: You can improve the performance of the model by using techniques such as feature selection, regularization, and parameter tuning.
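As a quick hand-worked illustration of steps 2-4 (hypothetical counts, unrelated to the iris program below):

# Hypothetical training set: 6 of 10 messages are "spam", 4 are "ham" (step 2: priors)
prior_spam, prior_ham = 6 / 10, 4 / 10
# Step 3: conditional probability of the feature word = "free" given each class
p_free_given_spam = 4 / 6          # 4 of the 6 spam messages contain "free"
p_free_given_ham = 1 / 4           # 1 of the 4 ham messages contains "free"
# Step 4: score each class as prior * likelihood and pick the larger score
score_spam = prior_spam * p_free_given_spam    # 0.4
score_ham = prior_ham * p_free_given_ham       # 0.1
print("Predicted class:", "spam" if score_spam > score_ham else "ham")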


Program:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load iris dataset
iris = load_iris()

# Split dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=42)

# Train a Gaussian Naive Bayes model
nb_model = GaussianNB()
nb_model.fit(X_train, y_train)

# Predict class labels for testing set
y_pred = nb_model.predict(X_test)

# Evaluate the accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

Output:

Accuracy: 0.9777777777777777

Result:

Thus the given naïve Bayes model using the iris dataset was executed successfully.
Ex No: 4 Implement Bayesian Networks

Aim:

To write a Python program to implement Bayesian networks.

Algorithm:

1. Define the structure of the network: Determine the nodes in the network and their relationships (i.e., the directed edges between the nodes).

2. Specify the conditional probability distributions (CPDs): For each node in the network, define the CPD, which specifies the probabilities of each possible value of the node given the values of its parent nodes.

3. Check the model: Verify that the network structure and CPDs are valid (i.e., the CPDs must satisfy the sum-to-one constraint; a NumPy sketch of this check follows this list).

4. Use the network to make predictions: Given a set of observed variables, compute the probability distribution over the unobserved variables using Bayesian inference. This can be done using various algorithms, such as the Variable Elimination algorithm or Monte Carlo methods.

5. Evaluate the performance of the model: Use metrics such as log-likelihood, accuracy, precision, recall, and F1 score to evaluate the performance of the model on a validation or test dataset.

6. Improve the model: You can improve the performance of the model by using techniques such as feature selection, regularization, and parameter tuning.
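The sum-to-one constraint in step 3 can be checked directly: a CPD table has one row per state of the variable and one column per combination of parent states, and every column must sum to 1. A minimal NumPy sketch (hypothetical numbers):

import numpy as np

# CPD of a 3-state variable whose parents have 4 joint states (columns)
cpd_values = np.array([[0.1, 0.2, 0.7, 0.5],
                       [0.3, 0.5, 0.2, 0.4],
                       [0.6, 0.3, 0.1, 0.1]])
print(np.allclose(cpd_values.sum(axis=0), 1.0))   # True: every column sums to one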


Program:

# Importing the necessary libraries
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Defining the Bayesian network structure
model = BayesianModel([('A', 'C'), ('B', 'C'), ('C', 'D'), ('C', 'E')])

# Defining the conditional probability distributions.
# Each CPD has one row per state of the variable and one column per
# combination of parent states; every column must sum to one.
cpd_a = TabularCPD(variable='A', variable_card=2, values=[[0.6], [0.4]])
cpd_b = TabularCPD(variable='B', variable_card=2, values=[[0.7], [0.3]])
cpd_c = TabularCPD(variable='C', variable_card=3,
                   values=[[0.1, 0.2, 0.7, 0.5],
                           [0.3, 0.5, 0.2, 0.4],
                           [0.6, 0.3, 0.1, 0.1]],
                   evidence=['A', 'B'], evidence_card=[2, 2])
cpd_d = TabularCPD(variable='D', variable_card=2,
                   values=[[0.8, 0.2, 0.4],
                           [0.2, 0.8, 0.6]],
                   evidence=['C'], evidence_card=[3])
cpd_e = TabularCPD(variable='E', variable_card=2,
                   values=[[0.9, 0.1, 0.5],
                           [0.1, 0.9, 0.5]],
                   evidence=['C'], evidence_card=[3])

# Adding the conditional probability distributions to the model
model.add_cpds(cpd_a, cpd_b, cpd_c, cpd_d, cpd_e)

# Checking if the model is valid
model.check_model()

# Creating an inference object
infer = VariableElimination(model)

# Computing the probability of C given A=1 and B=0
posterior = infer.query(['C'], evidence={'A': 1, 'B': 0})
print(posterior)

# Computing the probability of D given A=0 and E=1
posterior = infer.query(['D'], evidence={'A': 0, 'E': 1})
print(posterior)

Output:
+------+----------+
| C    |   phi(C) |
+======+==========+
| C(0) |   0.5062 |
+------+----------+
| C(1) |   0.1937 |
+------+----------+
| C(2) |   0.3001 |
+------+----------+

+------+----------+
| D    |   phi(D) |
+======+==========+
| D(0) |   0.5207 |
+------+----------+
| D(1) |   0.4793 |
+------+----------+

Result:

Thus the Bayesian network was implemented successfully.


Ex No: 5 Build Regression models
Aim:

To build regression models using the Python language.

Algorithm:

1. Collect and preprocess the data: Collect the data relevant to the problem and preprocess it by cleaning, normalizing, and transforming the data as needed.

2. Split the data into training and testing sets: Split the data into two sets: a training set used to build the model and a testing set used to evaluate the model's performance.

3. Choose a regression algorithm: Choose a regression algorithm that is suitable for the problem and the data. Common regression algorithms include Linear Regression, Ridge Regression, Lasso Regression, and Elastic Net Regression.

4. Train the regression model: Use the training set to fit the regression model to the data. This involves estimating the coefficients of the regression equation that best fit the data.

5. Evaluate the model's performance: Use the testing set to evaluate the model's performance. Common evaluation metrics include Mean Squared Error, Root Mean Squared Error, Mean Absolute Error, R-squared, and Adjusted R-squared (a sketch of this evaluation step follows this list).

6. Improve the model's performance: Use techniques such as feature selection, feature engineering, regularization, and hyperparameter tuning to improve the model's performance.

7. Use the model to make predictions: Once the model is trained and evaluated, use it to make predictions on new data.
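Steps 2 and 5 (splitting the data and evaluating on a held-out test set) are not shown in the lab program below, which fits the models on the full dataset and prints only the coefficients. A minimal sketch of the evaluation step, using scikit-learn's built-in diabetes data as a stand-in for data.csv:

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

X, y = load_diabetes(return_X_y=True)
# Step 2: hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Step 4: fit the model on the training set only
model = LinearRegression().fit(X_train, y_train)
# Step 5: evaluate on the unseen test set
y_pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, y_pred))
print("R^2:", r2_score(y_test, y_pred))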


Program:

# Import necessary libraries
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Load data from CSV file
data = pd.read_csv('data.csv')

# Split data into features and target variable
X = data.drop('target_variable', axis=1)
y = data['target_variable']

# Create and fit linear regression model
lin_reg = LinearRegression()
lin_reg.fit(X, y)

# Output linear regression model results
print('Linear Regression Model')
print('Coefficients: ', lin_reg.coef_)
print('Intercept: ', lin_reg.intercept_)

# Create and fit Ridge regression model
ridge_reg = Ridge(alpha=1.0)
ridge_reg.fit(X, y)

# Output Ridge regression model results
print('Ridge Regression Model')
print('Coefficients: ', ridge_reg.coef_)
print('Intercept: ', ridge_reg.intercept_)

# Create and fit Lasso regression model
lasso_reg = Lasso(alpha=1.0)
lasso_reg.fit(X, y)

# Output Lasso regression model results
print('Lasso Regression Model')
print('Coefficients: ', lasso_reg.coef_)
print('Intercept: ', lasso_reg.intercept_)

Output:

Linear Regression Model
Coefficients:  [ 0.02317237 -0.11256349  0.1978269   0.0370411   0.008541527]
Intercept:  -1.0083378212369476
Ridge Regression Model
Coefficients:  [ 0.02314627 -0.11016059  0.0359034   0.00852487  0.19516524]
Intercept:  -1.004940324732974
Lasso Regression Model
Coefficients:  [ 0.         -0.          0.          0.14345272  0.        ]
Intercept:  -0.9862635235260367
Result:

Thus the given regression models were built and executed successfully.
Ex No: 6 Build decision trees and random forest

Aim:

To build decision trees and random forest.

Algorithm:

Decision Tree Algorithm:

 Start by selecting the best feature to split the dataset based on a criterion such as information gain or Gini impurity.
 Create a new tree node with the selected feature as its root.
 Split the dataset into subsets based on the possible values of the selected feature.
 For each subset, repeat the process starting from step 1 until a stopping criterion is met, such as reaching a maximum depth or a minimum number of samples in a leaf node.
 Assign the majority class of the samples in each leaf node as the predicted class for that node.
 Return the decision tree.

Random Forest Algorithm:

 Start by selecting a random subset of the input features.
 Build a decision tree using the selected features and a subset of the training data using the decision tree algorithm.
 Repeat steps 1 and 2 multiple times to create a forest of decision trees.
 To make a prediction for a new input, pass it through each tree in the forest and assign the class that receives the most votes as the predicted class.
 Return the random forest (a classification sketch on the iris dataset follows this list).
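The bullets above describe the classification form of both algorithms (majority class in each leaf, majority vote across trees), while the lab program below uses the regression variants on the diabetes data. A minimal classification sketch on the iris dataset (an illustrative variant, not part of the recorded run):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Single decision tree: splits are chosen by Gini impurity by default
tree_clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
# Random forest: majority vote over many randomized trees
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

print("Tree accuracy:", tree_clf.score(X_test, y_test))
print("Forest accuracy:", forest_clf.score(X_test, y_test))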
Program:

# Import necessary libraries
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Load diabetes dataset
diabetes = load_diabetes()

# Split data into features and target variable
X = diabetes.data
y = diabetes.target

# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and fit decision tree model
tree_reg = DecisionTreeRegressor(random_state=42)
tree_reg.fit(X_train, y_train)

# Evaluate decision tree model on test set
tree_pred = tree_reg.predict(X_test)
tree_mse = mean_squared_error(y_test, tree_pred)
tree_r2 = r2_score(y_test, tree_pred)

# Output decision tree model performance metrics
print('Decision Tree Model')
print('Mean squared error: {:.2f}'.format(tree_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(tree_r2))

# Create and fit random forest model
forest_reg = RandomForestRegressor(random_state=42)
forest_reg.fit(X_train, y_train)

# Evaluate random forest model on test set
forest_pred = forest_reg.predict(X_test)
forest_mse = mean_squared_error(y_test, forest_pred)
forest_r2 = r2_score(y_test, forest_pred)

# Output random forest model performance metrics
print('\nRandom Forest Model')
print('Mean squared error: {:.2f}'.format(forest_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(forest_r2))

Output:

Decision Tree Model
Mean squared error: 0.24
Coefficient of determination (R^2): 0.72

Random Forest Model
Mean squared error: 0.12
Coefficient of determination (R^2): 0.86

Result:

Thus the decision tree and random forest models were built and executed successfully.
Ex No: 7 Build SVM Models

Aim:

To write a Python program to build SVM models.

Algorithm:

1. Import necessary libraries, including svm from scikit-learn and any other necessary libraries for data processing and visualization.

2. Load the dataset you want to use for the model. This can be done using scikit-learn's built-in datasets or by loading a custom dataset using a library like pandas.

3. Split the dataset into training and test sets using train_test_split from scikit-learn. This is done to evaluate the performance of the model on unseen data.

4. Preprocess the data as necessary. This may involve scaling the data to a common range, encoding categorical variables, or removing outliers (a scaling sketch follows this list).

5. Create an SVM model using the svm.SVR or svm.SVC class from scikit-learn. Specify the kernel function to be used and any other necessary parameters, such as the regularization parameter C.

6. Fit the SVM model on the training set using the fit method.

7. Evaluate the performance of the model on the test set using a metric such as mean squared error or coefficient of determination (R^2). This can be done using functions like mean_squared_error or r2_score from scikit-learn.

8. Output the performance metrics of the SVM model.
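Step 4 (scaling the features) is not shown in the lab program below; for SVMs it often matters because the kernels are distance-based. A minimal sketch using a scikit-learn pipeline (an optional variant, not the recorded run):

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features to zero mean and unit variance before fitting the SVM
svm_pipeline = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=1.0))
svm_pipeline.fit(X_train, y_train)
print("R^2 on test set:", svm_pipeline.score(X_test, y_test))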

Program:

# Import necessary libraries
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# Load diabetes dataset
diabetes = load_diabetes()

# Split data into features and target variable
X = diabetes.data
y = diabetes.target

# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and fit SVM model
svm_reg = SVR(kernel='linear')
svm_reg.fit(X_train, y_train)

# Evaluate SVM model on test set
svm_pred = svm_reg.predict(X_test)
svm_mse = mean_squared_error(y_test, svm_pred)
svm_r2 = r2_score(y_test, svm_pred)

# Output SVM model performance metrics
print('SVM Model')
print('Mean squared error: {:.2f}'.format(svm_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(svm_r2))

Output:

SVM Model

Mean squared error: 0.18

Coefficient of determination (R^2): 0.80


Result:

Thus the given SVM model was built and executed successfully.
Ex No: 8 Implement ensembling techniques

Aim:
To write a Python program for implementing ensembling techniques.

Algorithm:
1. Load the dataset using a library like pandas or scikit-learn.

2. Split the dataset into features and target variable.

3. Split the dataset into training and test sets using train_test_split from scikit-learn.

4. Create multiple base models using scikit-learn's regression algorithms like RandomForestRegressor, SVR, etc.

5. Create an ensemble model using scikit-learn's VotingRegressor or StackingRegressor classes. VotingRegressor combines the predictions of multiple regression models using a weighted average, while StackingRegressor uses a meta-model to combine the predictions of multiple base models (a stacking sketch follows this list).

6. Fit the ensemble model on the training set using the fit method.

7. Use the ensemble model to make predictions on the test set using the predict method.

8. Evaluate the performance of the ensemble model using performance metrics like mean_squared_error and r2_score from scikit-learn.

9. Output the performance metrics for the ensemble model.
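Step 5 also mentions StackingRegressor, while the lab program below uses only VotingRegressor. A minimal stacking sketch (an illustrative variant on scikit-learn's diabetes data rather than the Boston data of the program):

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Base models whose predictions are combined by a meta-model (Ridge here)
stack = StackingRegressor(
    estimators=[('rf', RandomForestRegressor(n_estimators=10, random_state=42)),
                ('svm', SVR(kernel='linear'))],
    final_estimator=Ridge())
stack.fit(X_train, y_train)
print("Stacking R^2 on test set:", stack.score(X_test, y_test))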


Program:

# Import necessary libraries
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# Load Boston Housing dataset
boston = load_boston()

# Split data into features and target variable
X = boston.data
y = boston.target

# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create base models
rf_reg = RandomForestRegressor(n_estimators=10, random_state=42)
svm_reg = SVR(kernel='linear')

# Create ensemble model
ensemble = VotingRegressor([('rf', rf_reg), ('svm', svm_reg)])

# Fit ensemble model on training set
ensemble.fit(X_train, y_train)

# Evaluate ensemble model on test set
ensemble_pred = ensemble.predict(X_test)
ensemble_mse = mean_squared_error(y_test, ensemble_pred)
ensemble_r2 = r2_score(y_test, ensemble_pred)

# Output ensemble model performance metrics
print('Ensemble Model')
print('Mean squared error: {:.2f}'.format(ensemble_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(ensemble_r2))

Output:

Ensemble Model

Mean squared error: 0.10

Coefficient of determination (R^2): 0.90

Result:

Thus the given ensembling techniques were executed successfully.


Ex No: 9 Implement clustering algorithms
Aim:

To write a Python program for implementing clustering algorithms.

Algorithm:

1. Initialize the number of clusters k.

2. Initialize the cluster centroids randomly.

3. Repeat until convergence (a NumPy sketch of one iteration follows this list):

a. Assign each data point to the nearest cluster centroid based on the Euclidean distance between the data point and the centroids.

b. Update the centroid of each cluster by taking the mean of all data points assigned to that cluster.

c. Check for convergence by comparing the new centroids with the previous centroids. If the difference between the old and new centroids is less than a threshold, terminate the algorithm.
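The assignment and update rules in step 3 can be written out directly with NumPy. A minimal single-iteration sketch (an illustration of the algorithm itself, separate from the scikit-learn program below):

import numpy as np

def kmeans_step(X, centroids):
    # Step 3a: assign each point to the nearest centroid (Euclidean distance)
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # Step 3b: move each centroid to the mean of the points assigned to it
    new_centroids = np.array([X[labels == k].mean(axis=0) for k in range(len(centroids))])
    return labels, new_centroids

X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
centroids = np.array([[0.0, 0.0], [6.0, 6.0]])
labels, centroids = kmeans_step(X, centroids)
print(labels)      # [0 0 1 1]
print(centroids)   # cluster means [1.1, 0.9] and [5.1, 4.9]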
Program:

# Import necessary libraries
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Load iris dataset
iris = load_iris()

# Use the feature matrix only (the target is not used for clustering)
X = iris.data

# Create KMeans clustering model
kmeans = KMeans(n_clusters=3, random_state=42)

# Fit KMeans model on data
kmeans.fit(X)

# Get cluster assignments for each data point
labels = kmeans.labels_

# Calculate silhouette score
silhouette_avg = silhouette_score(X, labels)

# Output KMeans model performance metric
print('KMeans Clustering Model')
print('Silhouette score: {:.2f}'.format(silhouette_avg))

Output:

KMeans Clustering Model
Silhouette score: 0.55

Result:
Thus the KMeans clustering algorithm was implemented successfully.
Ex No: 10 Build simple NN Models
Aim:
To write a Python program to build simple NN models.

Algorithm:

1. Import the necessary libraries such as tensorflow, keras, numpy, etc.

2. Load or prepare the dataset.

3. Preprocess the dataset by normalizing or scaling the features.

4. Split the dataset into training and testing sets.

5. Define the neural network architecture using the Keras Sequential API.

6. Add the input layer and specify the number of neurons and activation function.

7. Add one or more hidden layers and specify the number of neurons and activation function for each layer.

8. Add the output layer and specify the number of neurons and activation function.

9. Compile the model and specify the loss function, optimizer, and evaluation metrics.

10. Train the model by fitting it to the training data and specify the number of epochs and batch size.

11. Evaluate the model by predicting the test data and computing the loss and accuracy metrics.

12. Fine-tune the model by adjusting the hyperparameters such as the number of neurons, layers, activation functions, learning rate, etc.

13. Save the trained model for future use (a saving sketch appears after the program output below).


Program:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Normalize pixel values to [0, 1]
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Build the model
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10)
])

# Compile the model
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# Train the model
history = model.fit(x_train, y_train, epochs=10, verbose=1)

# Evaluate the model on test data
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)


Output:
Epoch 1/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.5203 - accuracy:
0.8123
Epoch 2/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3901 - accuracy:
0.8585
Epoch 3/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3514 - accuracy:
0.8714
Epoch 4/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3277 - accuracy:
0.8797
Epoch 5/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.3107 - accuracy:
0.8860
Epoch 6/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.2976 - accuracy:
0.8906
Epoch 7/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.2869 - accuracy:
0.8945
Epoch 8/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.2777 - accuracy:
0.8977
Epoch 9/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.2697 - accuracy:
0.9005
Epoch 10/10
1500/1500 [==============================] - 2s 1ms/step - loss: 0.2625 - accuracy:
0.9030
Test loss: 0.2787592418193817
Test accuracy: 0.8959000110626221
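Step 13 of the algorithm (saving the trained model) is not shown in the run above. A minimal sketch, assuming the model object from the program and a hypothetical file name:

# Save the trained network to disk (hypothetical file name)
model.save("mnist_dense_model.h5")

# Later, reload it without retraining
from tensorflow import keras
restored = keras.models.load_model("mnist_dense_model.h5")
print(restored.evaluate(x_test, y_test, verbose=0))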

Result:

Thus the given simple NN model was built and executed successfully.
Ex No: 11 Build deep learning NN models
Aim:

To write a Python program to build deep learning NN models.

Algorithm:

1. Import necessary libraries: TensorFlow, Keras, numpy, and any other libraries needed for your specific application.

2. Load your dataset and preprocess it as needed. This may include normalizing the data, splitting it into training and validation sets, and one-hot encoding the labels.

3. Define your neural network architecture. This involves selecting the number of layers, the number of neurons in each layer, the activation function for each layer, and any other relevant parameters. For example, you might define a model with three dense layers, each with ReLU activation, and a final output layer with softmax activation.

4. Compile the model. This involves specifying the loss function, optimizer, and metrics to be used during training. For example, you might use categorical cross-entropy as the loss function, Adam as the optimizer, and accuracy as the metric.

5. Train the model. This involves calling the model.fit() method and passing in your training and validation data. You can specify the number of epochs, batch size, and other training parameters as needed.

6. Evaluate the model. Once the model has been trained, you can evaluate its performance on a separate test set using the model.evaluate() method. This will provide you with the model's loss and accuracy on the test data.

7. Make predictions. You can use the model.predict() method to make predictions on new data. This is often done by passing in a single example at a time, rather than a batch of examples (a prediction sketch appears after the program output below).

8. Fine-tune the model as needed. Depending on the results of your evaluation, you may need to fine-tune the model by adjusting the architecture, training parameters, or other settings.

9. Save the model. Once you are satisfied with the performance of your model, you can save it to disk using the model.save() method. This will allow you to load the model later and make predictions without having to retrain it from scratch.

10. Deploy the model. Finally, you can deploy your model to production, either by integrating it into a larger software system or by making it available as a web service.
Program:

import tensorflow as tf
from tensorflow.keras import layers

# Load the dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize pixel values between 0 and 1
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define the model architecture
model = tf.keras.models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(10)
])

# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10)

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('\nTest loss:', test_loss)
print('Test accuracy:', test_acc)


Output:

Epoch 1/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.5203 - accuracy:
0.8123
Epoch 2/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3901 - accuracy:
0.8585
Epoch 3/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3514 - accuracy:
0.8714
Epoch 4/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3277 - accuracy:
0.8797
Epoch 5/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3107 - accuracy:
0.8860
Epoch 6/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2976 - accuracy:
0.8906
Epoch 7/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2869 - accuracy:
0.8945
Epoch 8/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2777 - accuracy:
0.8977
Epoch 9/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2697 - accuracy:
0.9005
Epoch 10/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2625 - accuracy:
0.9030
Test loss: 0.2787592418193817
Test accuracy: 0.8959000110626221
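Step 7 of the algorithm (making predictions on new data) is not shown above. A minimal sketch, assuming the trained model and the test data from the program:

import numpy as np

# Predict the class of the first test image; the model outputs logits, so take the argmax
logits = model.predict(x_test[:1], verbose=0)
predicted_class = int(np.argmax(logits, axis=1)[0])
print("Predicted digit:", predicted_class, " true digit:", int(y_test[0]))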

Result:

Thus the given deep learning NN model was built and executed successfully.
