Aiml - Rec Final
EX.NO:1
IMPLEMENTATION OF UNINFORMED SEARCH ALGORITHMS
DATE:
AIM:
The aim of implementing uninformed search algorithms such as Breadth-First Search
(BFS) and Depth-First Search (DFS) is to find a path or a solution from a given starting state
to a goal state in a search space.
A. BREADTH-FIRST SEARCH (BFS)
ALGORITHM:
1. Initialize the queue with the starting node.
2. Initialize an empty set to keep track of visited nodes.
3. While the queue is not empty:
a. Dequeue a node from the queue.
b. If the node is the goal state, return the solution.
c. Otherwise, add the node to the set of visited nodes.
d. Expand the node and add its children to the queue if they have not been visited.
4. If the queue becomes empty and the goal state has not been found, return failure.
PROGRAM:
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = []
queue = []

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Following is the Breadth-First Search")
bfs(visited, graph, '5')
OUTPUT:
B. DEPTH FIRST SEARCH (DFS)
ALGORITHM:
1. Start at the given node and mark it as visited.
2. Print the node.
3. For each neighbour of the node that has not been visited, repeat steps 1-3 recursively.
4. When every reachable node has been visited, stop.
PROGRAM:
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = set()

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("Following is the Depth-First Search")
dfs(visited, graph, '5')
OUTPUT:
RESULT:
Thus the python programs to implement the uninformed search algorithms BFS and DFS have
been executed successfully.
4
REG NO:
EX.NO:2
IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS
DATE:
AIM:
The aim of implementing informed search algorithms such as A* and Memory-
bounded A* is to find a path or a solution from a given starting state to a goal state in a search
space, while also taking into account the estimated cost to the goal state.
A. A* (A STAR)
ALGORITHM:
1. Start the process.
2. Initialize the priority queue with the start node and an empty set to keep track of
visited nodes.
3. While the priority queue is not empty:
a. Dequeue the node with the lowest priority from the priority queue.
b. If the node is the goal state, return the solution.
c. Otherwise, add the node to the set of visited nodes.
d. Expand the node and compute the priority of its children as the sum of the actual
cost and the heuristic estimate.
e. Add the children to the priority queue if they have not been visited or if their
priority is lower than their current priority in the priority queue.
4. If the priority queue becomes empty and the goal state has not been found, return
failure.
PROGRAM:
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}                    # distance from the start node
    parents = {}              # parent map used to rebuild the path
    g[start_node] = 0
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # choose the node with the lowest f(v) = g(v) + h(v)
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]

# Adjacency list of (neighbour, edge cost) pairs; example values chosen to
# match the heuristic table above.
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)],
    'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)],
    'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)],
    'H': [('I', 2)],
    'I': [('E', 5), ('J', 3)]
}

aStarAlgo('A', 'J')
OUTPUT:
B. MEMORY-BOUNDED A*
ALGORITHM:
1. Start the process.
2. Initialize the priority queue with the start node and an empty set to keep track of
visited nodes.
3. While the priority queue is not empty:
a. Dequeue the node with the lowest priority from the priority queue.
b. If the node is the goal state, return the solution.
c. Otherwise, add the node to the set of visited nodes.
d. Expand the node and compute the priority of its children as the sum of the actual
cost and the heuristic estimate.
e. Add the children to the priority queue if they have not been visited or if their
priority is lower than their current priority in the priority queue.
4. If the number of nodes generated exceeds the memory limit, return failure.
5. If the priority queue becomes empty and the goal state has not been found, return
failure.
PROGRAM:
import heapq
import math

def memory_bounded_A_star(start, goal, graph, heuristic, memory_limit):
    # each queue entry is (f, g, node, path so far)
    start_node = (0, 0, start, [start])
    priority_queue = [start_node]
    node_count = 1
    while priority_queue:
        f, g, current, path = heapq.heappop(priority_queue)
        if current == goal:
            return path
        for next_node, cost in graph[current]:
            g_new = g + cost
            f_new = g_new + heuristic(next_node, goal)
            new_node = (f_new, g_new, next_node, path + [next_node])
            heapq.heappush(priority_queue, new_node)
            node_count += 1
            if node_count > memory_limit:
                # memory bound exceeded: give up and report failure
                return None
    return None

# Example data (illustrative values): nodes are (x, y) points and the
# heuristic is the straight-line (Euclidean) distance to the goal.
def euclidean_distance(node, goal):
    return math.hypot(goal[0] - node[0], goal[1] - node[1])

graph = {
    (0, 0): [((1, 0), 1), ((0, 1), 1)],
    (1, 0): [((1, 1), 1)],
    (0, 1): [((1, 1), 2)],
    (1, 1): []
}

shortest_path = memory_bounded_A_star((0, 0), (1, 1), graph,
                                      euclidean_distance, memory_limit=100)
print(shortest_path)
OUTPUT:
RESULT:
Thus the python programs to implement the informed search algorithms A* and memory-
bounded A* have been executed successfully.
EX.NO:3
IMPLEMENT NAÏVE BAYES MODELS
DATE:
AIM:
The aim of implementing Naïve Bayes models is to classify data into different
categories based on the probability of each category given the input data. Naïve Bayes
models are based on Bayes' theorem and assume that the features are conditionally
independent given the category.
ALGORITHM:
1. Start the program
2. Collect a dataset of labeled examples, where each example consists of a set of
features and a corresponding category label.
3. Compute the prior probability of each category by counting the number of examples
in each category and dividing by the total number of examples.
4. For each feature, compute the conditional probability of the feature given the category
by counting the number of times the feature appears in examples of that category and
dividing by the total number of examples in that category.
5. Store the prior probabilities and conditional probabilities in the model.
The algorithm for using a Naïve Bayes model to classify new data is as follows:
6. Given a new example with a set of features, compute the probability of each category
given the features using Bayes' theorem and the stored probabilities:
P(category | features) = P(category) * P(features | category) / P(features)
Where, P(category) is the prior probability of the category,
P(features | category) is the product of the conditional probabilities of the
features given the category, and
P(features) is a normalization constant that ensures that the probabilities sum to
one.
7. Classify the example as the category with the highest probability (a worked sketch
follows this list).
8. There are different variants of Naïve Bayes models, such as Gaussian Naïve Bayes,
which assumes that the features are normally distributed, and Multinomial Naïve
Bayes, which is used for discrete count data such as text data. The basic Naïve Bayes
algorithm described above is a good starting point for building and understanding
these more advanced models.
9. Stop the program
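As a check on steps 6 and 7, the posterior can be computed by hand for a toy two-class
problem; a minimal sketch, where the 'spam'/'ham' labels and all probability values are
made-up for illustration:
prior = {'spam': 0.4, 'ham': 0.6}                    # P(category), made-up values
likelihood = {'spam': {'offer': 0.8, 'hello': 0.2},  # P(word | category), made-up
              'ham': {'offer': 0.1, 'hello': 0.9}}
features = ['offer', 'hello']                        # observed words
unnormalized = {}
for c in prior:
    p = prior[c]
    for f in features:
        p *= likelihood[c][f]                        # P(category) * P(features | category)
    unnormalized[c] = p
total = sum(unnormalized.values())                   # P(features), the normalizer
posterior = {c: p / total for c, p in unnormalized.items()}
print(posterior)                                     # classify as the argmax category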
PROGRAM:
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
docs_train = ['hello world', 'goodbye world', 'hello goodbye']
labels_train = ['welcome', 'hai', 'hai-welcome']
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(docs_train)
model = MultinomialNB()
model.fit(X_train, labels_train)
docs_test = ['hello', 'world']
X_test = vectorizer.transform(docs_test)
labels_pred = model.predict(X_test)
print(labels_pred)
OUTPUT:
RESULT:
Thus the python program to implement the Naïve Bayes model has been executed successfully.
EX.NO.4
IMPLEMENT BAYESIAN NETWORKS
DATE:
AIM:
The aim of implementing Bayesian Networks is to represent and reason about uncertain
knowledge in a probabilistic graphical model. Bayesian Networks are directed acyclic graphs
(DAGs) where each node represents a random variable, and the edges represent probabilistic
dependencies between the variables.
ALGORITHM:
1. Start the program
2. Define the set of variables to be modeled and their corresponding states or values.
3. Define the conditional probability tables (CPTs) for each variable given its parents in
the network.
4. Construct the graph by specifying the directed edges between the variables based on
their dependencies.
5. Verify that the network is acyclic.
6. The algorithm for using a Bayesian Network to make inferences or predictions is as
follows: given evidence about some of the variables in the network, compute the
posterior probability distribution over the remaining variables using Bayes'
theorem and the CPTs (a worked sketch follows this list):
P(X | E) = alpha * Σ over hidden variables of ∏ P(X_i | parents(X_i))
where X is the set of variables of interest,
E is the set of observed evidence variables,
alpha is a normalization constant,
the product ∏ runs over every variable X_i in the network, and
P(X_i | parents(X_i)) is the conditional probability of variable X_i given its
parents in the network.
7. Use the posterior distribution to make predictions or decisions based on the task at
hand.
8. Bayesian Networks are useful for many applications, including decision-making, risk
assessment, and diagnosis. They provide a flexible and intuitive framework for
modeling complex systems with uncertain knowledge and can be used to quantify and
reason about the uncertainty in the system.
9. Stop the program
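Step 6 can be verified by direct enumeration; a minimal sketch that reuses the CPT values
of the program below to compute P(C | D = 'D1'):
p_c = {'C0': 0.1, 'C1': 0.9}                         # prior P(C)
p_s = {'S0': 0.6, 'S1': 0.4}                         # prior P(S)
p_d1 = {('C0', 'S0'): 0.1, ('C0', 'S1'): 0.3,        # P(D = 'D1' | C, S)
        ('C1', 'S0'): 0.2, ('C1', 'S1'): 0.9}
# joint probability of each (C, S) assignment together with the evidence D = 'D1'
joint = {(c, s): p_c[c] * p_s[s] * p_d1[(c, s)] for c in p_c for s in p_s}
alpha = 1.0 / sum(joint.values())                    # normalization constant
posterior_c = {c: alpha * sum(v for (ci, s), v in joint.items() if ci == c)
               for c in p_c}
print(posterior_c)                                   # P(C | D = 'D1')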
PROGRAM:
from pomegranate import (BayesianNetwork, DiscreteDistribution,
                         ConditionalProbabilityTable, State)
c_distribution = DiscreteDistribution({'C0': 0.1, 'C1': 0.9})
s_distribution = DiscreteDistribution({'S0': 0.6, 'S1': 0.4})
d_distribution = ConditionalProbabilityTable(
    [['C0', 'S0', 'D0', 0.9],
     ['C0', 'S0', 'D1', 0.1],
     ['C0', 'S1', 'D0', 0.7],
     ['C0', 'S1', 'D1', 0.3],
     ['C1', 'S0', 'D0', 0.8],
     ['C1', 'S0', 'D1', 0.2],
     ['C1', 'S1', 'D0', 0.1],
     ['C1', 'S1', 'D1', 0.9]], [c_distribution, s_distribution])
# wrap each distribution in a named state and connect the parents to the child
c_state = State(c_distribution, name='C')
s_state = State(s_distribution, name='S')
d_state = State(d_distribution, name='D')
model = BayesianNetwork('Example')
model.add_states(c_state, s_state, d_state)
model.add_edge(c_state, d_state)
model.add_edge(s_state, d_state)
model.bake()
posterior = model.predict_proba({'D': 'D1'})
print(posterior[0].parameters[0])
OUTPUT:
RESULT:
Thus the python program to implement Bayesian networks has been executed successfully.
EX.NO:5
BUILD REGRESSION MODELS
DATE:
AIM:
The aim of building regression models is to predict a continuous output variable based on
one or more input variables. Regression models aim to find the relationship between the input
variables and the output variable through a mathematical function.
ALGORITHM:
1. Start the program
2. Collect a dataset consisting of input variables and corresponding output variables.
3. Split the dataset into training and testing sets to evaluate the model's performance.
4. Choose a regression algorithm to use. Examples include linear regression, polynomial
regression, ridge regression, and lasso regression.
5. Train the model on the training dataset using the chosen algorithm. This involves
fitting a function to the data that minimizes the error between the predicted output and
the actual output.
6. Evaluate the model's performance on the testing dataset by comparing the predicted
output to the actual output using performance metrics such as mean squared error,
mean absolute error, and R-squared.
7. Use the trained model to make predictions on new data.
8. Linear regression is one of the most commonly used regression algorithms.
9. The algorithm for linear regression involves fitting a linear function to the data that
minimizes the sum of squared errors between the predicted output and the actual
output. The linear function has the form:
y = b0 + b1*x1 + b2*x2 + ... + bn*xn
where y is the predicted output, b0 is the intercept, and b1 to bn are the
coefficients for each input variable x1 to xn, respectively.
10. Stop the program.
PROGRAM:
import matplotlib.pyplot as plt
from scipy import stats
x = [5,7,8,7,2,17,2,9,4,11,12,9,6]
y = [99,86,87,88,111,86,103,87,94,78,77,85,86]
slope, intercept, r, p, std_err = stats.linregress(x, y)
def myfunc(x):
    return slope * x + intercept
mymodel = list(map(myfunc, x))
plt.scatter(x, y)
plt.plot(x, mymodel)
plt.show()
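The output below also records a polynomial fit; a minimal sketch using numpy.polyfit on
illustrative sample data:
import numpy
import matplotlib.pyplot as plt
x = [1, 2, 3, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 18, 19, 21, 22]
y = [100, 90, 80, 60, 60, 55, 60, 65, 70, 70, 75, 76, 78, 79, 90, 99, 99, 100]
mymodel = numpy.poly1d(numpy.polyfit(x, y, 3))  # fit a degree-3 polynomial
myline = numpy.linspace(1, 22, 100)             # smooth x-range for the fitted curve
plt.scatter(x, y)
plt.plot(myline, mymodel(myline))
plt.show()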
import pandas
from sklearn import linear_model
df = pandas.read_csv("data.csv")
X = df[['Weight', 'Volume']]
y = df['CO2']
regr = linear_model.LinearRegression()
regr.fit(X, y)
print(regr.coef_)
predictedCO2 = regr.predict([[3300, 1300]])
print(predictedCO2)
OUTPUT:
Linear regression:
Polynomial regression:
Multiple regression:
RESULT:
Thus the python program for implementing regression models has been
executed successfully.
EX.NO:6
BUILD DECISION TREES AND RANDOM FORESTS
DATE:
AIM:
The aim of building decision trees and random forests is to create predictive
models that can classify or predict outcomes based on input features.
ALGORITHM:
1. Start the program
2. Data preparation: The data is split into training and testing sets, and any missing
values are imputed or removed.
3. Tree building: The decision tree algorithm is applied to the training data to create a
single decision tree or a set of decision trees (random forest).
4. Model evaluation: The model is evaluated on the testing data using performance
metrics such as accuracy, precision, recall, and F1 score.
5. Model tuning: The hyperparameters of the model are tuned to optimize its
performance.
6. Model deployment: The final model is deployed to make predictions on new data.
7. Stop the program
PROGRAM:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf_dt = DecisionTreeClassifier(random_state=42)
clf_dt.fit(X_train, y_train)
clf_rf = RandomForestClassifier(n_estimators=100, random_state=42)
clf_rf.fit(X_train, y_train)
y_pred_dt = clf_dt.predict(X_test)
y_pred_rf = clf_rf.predict(X_test)
accuracy_dt = accuracy_score(y_test, y_pred_dt)
accuracy_rf = accuracy_score(y_test, y_pred_rf)
print("Decision Tree Accuracy:", accuracy_dt)
print("Random Forest Accuracy:", accuracy_rf)
OUTPUT:
RESULT:
Thus the python program to build decision trees and random forests has been executed
successfully.
EX.NO:7
BUILD SVM MODELS
DATE:
AIM:
The aim of building Support Vector Machine (SVM) models is to create a predictive
model that can classify data into different classes or predict a continuous value.
ALGORITHM:
1. Start the program
2. Data preparation: The data is split into training and testing sets, and any missing
values are imputed or removed.
3. Feature scaling: The input features are scaled to have zero mean and unit variance to
avoid bias towards features with larger values (a sketch follows this list).
4. Kernel selection: A suitable kernel function is chosen based on the characteristics of
the data.
5. Hyperparameter tuning: The hyperparameters of the SVM model, such as the
regularization parameter and the kernel coefficient, are tuned using cross-validation to
find the best combination of parameters that optimize the model's performance.
6. Model training: The SVM model is trained on the training data using the chosen
kernel function and hyperparameters.
7. Model evaluation: The performance of the model is evaluated on the testing data
using performance metrics such as accuracy, precision, recall, and F1 score.
8. Model deployment: The final model is deployed to make predictions on new data
9. Stop the program
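The scaling in step 3 is commonly done with scikit-learn's StandardScaler; a minimal
sketch on a toy feature matrix (the values are illustrative):
import numpy as np
from sklearn.preprocessing import StandardScaler
X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # toy data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # each column now has mean 0 and unit variance
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
In a train/test split, the scaler is fitted on the training data only and reused (via
transform) on the test data to avoid leakage.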
PROGRAM:
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print("SVM Accuracy:", accuracy)
OUTPUT:
RESULT:
Thus the python program to build SVM models has been executed successfully.
EX.NO:8
IMPLEMENT ENSEMBLING TECHNIQUES
DATE:
AIM:
The aim of implementing ensembling techniques is to improve the accuracy and
robustness of predictive models by combining the predictions of multiple models.
ALGORITHM:
1. Start the program
2. Data preparation: The data is split into training and testing sets, and any missing
values are imputed or removed.
3. Model selection: A set of base models are selected, which can be different types of
models with different strengths and weaknesses.
4. Ensembling technique selection: A suitable ensembling technique is chosen based
on the characteristics of the data and the performance of the base models.
5. Base model training: The base models are trained on the training data.
6. Ensemble model training: The ensemble model is trained using the outputs of the base
models as input features (a stacking sketch follows this list).
7. Model evaluation: The performance of the ensemble model is evaluated on the testing
data using performance metrics such as accuracy, precision, recall, and F1 score.
8. Model tuning: The hyperparameters of the models are tuned to optimize the
performance of the ensemble model.
9. Model deployment: The final ensemble model is deployed to make predictions on
new data.
10. Stop the program
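Step 6 describes stacking; a minimal sketch using scikit-learn's StackingClassifier (the
base and final estimators chosen here are illustrative):
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
X, y = load_iris(return_X_y=True)
stack = StackingClassifier(
    estimators=[('dt', DecisionTreeClassifier()), ('rf', RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))  # learns from the base outputs
stack.fit(X, y)
print(stack.score(X, y))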
PROGRAM:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf_dt = DecisionTreeClassifier(random_state=42)
clf_rf = RandomForestClassifier(n_estimators=100, random_state=42)
clf_voting = VotingClassifier(estimators=[('dt', clf_dt), ('rf', clf_rf)], voting='hard')
clf_bagging = BaggingClassifier(base_estimator=clf_dt, n_estimators=10, random_state=42)
clf_voting.fit(X_train, y_train)
clf_bagging.fit(X_train, y_train)
y_pred_voting = clf_voting.predict(X_test)
y_pred_bagging = clf_bagging.predict(X_test)
accuracy_voting = accuracy_score(y_test, y_pred_voting)
accuracy_bagging = accuracy_score(y_test, y_pred_bagging)
print("Voting Classifier Accuracy:", accuracy_voting)
print("Bagging Classifier Accuracy:", accuracy_bagging)
OUTPUT:
RESULT:
Thus the python program to implement ensemble techniques has been executed successfully.
EX.NO:9
IMPLEMENT CLUSTERING ALGORITHMS
DATE:
AIM:
The aim of implementing clustering algorithms is to group similar data points
together based on their features or attributes, without any prior knowledge of the class labels.
ALGORITHM:
1. Start the program
2. Data preparation: The data is preprocessed and any missing values are imputed or
removed.
3. Feature scaling: The input features are scaled to have zero mean and unit variance to
avoid bias towards features with larger values.
4. Cluster selection: A suitable clustering algorithm is chosen based on the
characteristics of the data.
5. Hyperparameter tuning: The hyperparameters of the clustering algorithm, such as the
number of clusters for k-means or the distance threshold for density-based clustering,
are tuned to optimize the clustering performance.
6. Model training: The clustering algorithm is applied to the data to create clusters.
7. Cluster evaluation: The quality of the clustering is evaluated using performance
metrics such as silhouette score or Davies-Bouldin index.
8. Visualization: The clusters are visualized using dimensionality reduction techniques
such as PCA or t-SNE (a sketch follows this list).
9. Stop the program
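For step 8, a two-component PCA projection is a simple way to plot the clusters; a
minimal sketch reusing the iris data and k-means settings of the program below:
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
X = load_iris().data
labels = KMeans(n_clusters=3, random_state=42).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)  # project onto two principal components
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()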
PROGRAM:
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.metrics import silhouette_score
iris = load_iris()
X, y = iris.data, iris.target
kmeans = KMeans(n_clusters=3, random_state=42)
kmeans.fit(X)
agg = AgglomerativeClustering(n_clusters=3)
agg.fit(X)
dbscan = DBSCAN(eps=0.5, min_samples=5)
dbscan.fit(X)
silhouette_kmeans = silhouette_score(X, kmeans.labels_)
silhouette_agg = silhouette_score(X, agg.labels_)
print("KMeans silhouette score:", silhouette_kmeans)
print("Agglomerative silhouette score:", silhouette_agg)
OUTPUT:
RESULT:
Thus the python program to implement clustering algorithm has been executed successfully.
EX.NO:10
IMPLEMENT EM FOR BAYESIAN NETWORKS
DATE:
AIM:
The aim of implementing EM (Expectation-Maximization) algorithm for Bayesian
networks is to estimate the parameters of a Bayesian network model from incomplete or
missing data.
ALGORITHM:
1. Start the program
2. Define the Bayesian network structure: The structure of the Bayesian network is
defined based on prior knowledge of the relationships between the variables.
3. Initialize the parameters: The parameters of the Bayesian network, such as the
conditional probabilities, are initialized randomly or using prior knowledge.
4. Expectation step: In the E-step, the missing values are estimated using the current
parameters of the model. This is done using the posterior distribution of the missing
data given the observed data and the current parameters of the model. This
distribution is computed using the Bayes' rule.
5. Maximization step: In the M-step, the parameters of the Bayesian network are
updated based on the estimated missing data from the E-step. This is done using the
maximum likelihood estimation (MLE) or maximum a posteriori (MAP) estimation.
6. Repeat steps 3 and 4 until convergence: The E and M steps are repeated until the log-
likelihood of the data converges or a maximum number of iterations is reached.
7. Evaluate the model: The performance of the Bayesian network model is evaluated on
a validation set or using cross-validation.
8. Stop the program.
PROGRAM:
import numpy as np
network = {
'A': {'parents': [], 'prob': 0.4},
'B': {'parents': ['A'], 'prob': np.array([[0.7, 0.3], [0.4, 0.6]])}
}
data = np.array([[0, 1], [1, 1], [1, 0], [0, 0]])
prob_A = 0.5
prob_B_given_A = np.array([[0.6, 0.4], [0.3, 0.7]])
# Three EM iterations. The data above is fully observed, so the E-step's
# expected counts reduce to the observed counts and the M-step is plain
# frequency estimation (a minimal sketch of the update).
for i in range(3):
    count_A = np.sum(data[:, 0])
    prob_A = count_A / data.shape[0]                 # M-step for P(A)
    for a in (0, 1):
        rows = data[data[:, 0] == a]                 # cases with A = a
        for b in (0, 1):
            prob_B_given_A[a, b] = np.mean(rows[:, 1] == b)
print('Estimated P(A):', prob_A)
print('Estimated P(B | A):', prob_B_given_A)
OUTPUT:
RESULT:
Thus the python program to implement EM for Bayesian networks has been executed
successfully.
EX.NO:11
BUILD SIMPLE NN MODELS
DATE:
AIM:
The aim of building simple neural network (NN) models is to classify or predict
target variables based on input data. Neural networks are a type of machine learning model
that consists of layers of interconnected nodes, or neurons, that can learn to recognize
patterns in the data.
ALGORITHM:
1. Start the program
2. Data preparation: The input data is preprocessed and any missing values are imputed
or removed. The target variable is encoded as a numerical value or one-hot encoding.
3. Split data into training and validation sets: The data is split into a training set and a
validation set. The training set is used to train the NN model, while the validation set
is used to evaluate the performance of the model.
4. Define the NN architecture: The architecture of the NN is defined by specifying the
number of input neurons, hidden layers, and output neurons. The activation function
for each layer is also specified.
5. Initialize the weights and biases: The weights and biases of the NN are initialized
randomly or using a pre-defined method such as Xavier or He initialization.
6. Forward propagation: The input data is fed into the NN, and the outputs of each layer
are computed using the weights and biases.
7. Compute loss: The loss function, which measures the difference between the
predicted and actual target values, is computed.
8. Back propagation: The gradients of the loss function with respect to the weights and
biases are computed using the chain rule.
9. Update the weights and biases: The weights and biases are updated using an
optimization algorithm such as gradient descent or Adam.
10. Repeat steps 6-9 until convergence: The forward propagation, loss computation, back
propagation, and weight update steps are repeated until the NN converges or a
maximum number of iterations is reached (a one-neuron sketch of these steps follows
this list).
11. Evaluate the model: The performance of the NN model is evaluated on the validation
set or using cross-validation. Common performance metrics include accuracy,
precision, recall, and F1 score.
12. Stop the program
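Steps 6-9 can be traced by hand for a single sigmoid neuron; a minimal numpy sketch with
made-up input, target, and initial weights:
import numpy as np
x = np.array([0.5, -1.0])      # one input sample (made-up)
t = 1.0                        # its target output
w = np.array([0.1, 0.2])       # initial weights
b = 0.0                        # initial bias
lr = 0.1                       # learning rate
for _ in range(10):
    z = w @ x + b              # step 6: forward propagation
    y = 1 / (1 + np.exp(-z))   # sigmoid activation
    loss = 0.5 * (y - t) ** 2  # step 7: squared-error loss
    dz = (y - t) * y * (1 - y) # step 8: chain rule, dL/dz
    w = w - lr * dz * x        # step 9: gradient descent update
    b = b - lr * dz
print(w, b, loss)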
PROGRAM:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_train = np.array([0, 1, 1, 0])
model = Sequential()
model.add(Dense(units=2, input_dim=2, activation='relu'))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1000, verbose=0)
X_test = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_pred = model.predict(X_test)
print(y_pred)
OUTPUT:
RESULT:
Thus the python program to build simple NN models has been executed successfully.
EX.NO:12
BUILD DEEP LEARNING NN MODELS
DATE:
AIM:
The aim of building deep learning neural network (NN) models is to solve
complex tasks such as image recognition, natural language processing, and speech
recognition. Deep learning NN models are composed of multiple layers of interconnected
neurons that can learn hierarchical representations of the input data.
ALGORITHM:
1. Start the program
2. Data preparation: The input data is preprocessed and any missing values are imputed
or removed. The target variable is encoded as a numerical value or one-hot encoding.
3. Split data into training and validation sets: The data is split into a training set and a
validation set. The training set is used to train the deep learning model, while the
validation set is used to evaluate the performance of the model.
4. Define the deep learning NN architecture: The architecture of the deep learning NN is
defined by specifying the number of layers, the type of layers, the activation function
for each layer, and the number of neurons in each layer.
5. Initialize the weights and biases: The weights and biases of the deep learning NN are
initialized randomly or using a pre-defined method such as Xavier or He initialization.
6. Forward propagation: The input data is fed into the deep learning NN, and the outputs
of each layer are computed using the weights and biases.
7. Compute loss: The loss function, which measures the difference between the
predicted and actual target values, is computed.
8. Back propagation: The gradients of the loss function with respect to the weights and
biases are computed using the chain rule.
9. Update the weights and biases: The weights and biases are updated using an
optimization algorithm such as gradient descent or Adam.
10. Repeat steps 6-9 until convergence: The forward propagation, loss computation, back
propagation, and weight update steps are repeated until the deep learning NN
converges or a maximum number of iterations is reached.
11. Evaluate the model: The performance of the deep learning NN model is evaluated on
the validation set or using cross-validation. Common performance metrics include
accuracy, precision, recall, and F1 score.
12. Fine-tune the model: The deep learning NN model can be fine-tuned by adjusting the
hyperparameters such as the learning rate, number of epochs, and batch size.
13. Predict new data: Once the deep learning NN model is trained and fine-tuned, it can
be used to predict new data.
14. Stop the program
PROGRAM:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
X_train = np.random.rand(100, 10)
y_train = np.random.randint(0, 2, size=100)
X_test = np.random.rand(20, 10)
y_test = np.random.randint(0, 2, size=20)
model = Sequential()
model.add(Dense(units=32, activation='relu', input_dim=10))
model.add(Dense(units=16, activation='relu'))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=10)
score = model.evaluate(X_test, y_test, batch_size=10)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
OUTPUT:
RESULT:
Thus the python program to build deep learning NN models has been executed successfully.
EX.NO:13
DEEP REINFORCEMENT LEARNING
DATE:
AIM:
The aim of implementing deep reinforcement learning with Python code is to train
an agent to take actions in an environment to maximize a reward signal over time. The agent
learns by trial-and-error, exploring the environment and adjusting its behavior based on the
rewards it receives. Deep reinforcement learning is a type of reinforcement learning that uses
deep neural networks to approximate the value function or policy of the agent.
ALGORITHM:
Step 1: Define the environment: The environment consists of the state of the world and
the set of actions the agent can take in that state. The state and action space can be
continuous or discrete.
Step 2: Define the reward function: The reward function provides a signal to the agent
about the desirability of the actions it takes in a given state.
Step 3: Initialize the neural network: The neural network approximates the value function
or policy of the agent.
Step 4: Set the hyperparameters: The hyperparameters control the training process, such
as the learning rate, discount factor, and exploration rate.
Step 5: Loop through episodes: Each episode consists of the agent taking actions in the
environment until a terminal state is reached. For example, in a game, the terminal state
could be winning or losing.
Step 6: Initialize the state: The agent starts in an initial state of the environment.
Step 7: Loop through time steps: At each time step, the agent selects an action based on
its policy, which is derived from the neural network's output.
Step 8: Take the action: The agent takes the selected action in the environment and
observes the new state and the reward.
Step 9: Store the experience: The agent stores the experience, consisting of the current
state, action, reward, and next state.
Step 10: Update the neural network: The neural network is updated using the experience
stored in memory.
35
REG NO:
PROGRAM:
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

env = gym.make('CartPole-v1')
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
hidden_size = 24

# Q-network: maps a state to one estimated value per action. (Here the output
# head is linear and trained with MSE, as in one-step deep Q-learning; a
# softmax head would instead be trained with a policy-gradient method.)
model = Sequential([
    Dense(hidden_size, activation='relu', input_shape=(state_size,)),
    Dense(action_size, activation='linear')
])
model.compile(loss='mse', optimizer=Adam(lr=0.001))

epsilon = 0.1  # exploration rate

def choose_action(state):
    # epsilon-greedy: explore occasionally, otherwise act greedily
    if np.random.rand() < epsilon:
        return env.action_space.sample()
    q_values = model.predict(state.reshape(1, -1), verbose=0)[0]
    return np.argmax(q_values)

num_episodes = 1000
max_steps_per_episode = 500
discount_factor = 0.99
replay_buffer = []

for episode in range(num_episodes):
    state = env.reset()
    episode_reward = 0
    for step in range(max_steps_per_episode):
        action = choose_action(state)
        next_state, reward, done, info = env.step(action)
        replay_buffer.append((state, action, reward, next_state, done))
        episode_reward += reward
        # one-step Q-learning target for the transition just observed
        target = reward
        if not done:
            target = reward + discount_factor * np.max(
                model.predict(next_state.reshape(1, -1), verbose=0)[0])
        target_f = model.predict(state.reshape(1, -1), verbose=0)
        target_f[0][action] = target
        model.fit(state.reshape(1, -1), target_f, epochs=1, verbose=0)
        state = next_state
        if done:
            break
    print('Episode', episode + 1, 'reward:', episode_reward)
OUTPUT:
RESULT:
Thus the python program to implement deep reinforcement learning has been executed
successfully.
EX.NO:14
EXPLAINABLE AI (XAI) TECHNIQUES
DATE:
AIM:
The aim of implementing Explainable AI (XAI) techniques is to create machine
learning models that can provide understandable and interpretable results, making it possible
for humans to understand how and why the model makes certain decisions. The goal is to
increase transparency and trust in machine learning systems, especially in high-stakes
applications such as healthcare, finance, and criminal justice.
ALGORITHM:
Step 1: Select the machine learning model: Choose a machine learning model that is suitable
for the problem you want to solve
Step 2: Identify the input features: Identify the input features used by the model to make its
predictions.
Step 3: Identify the output: Identify the output of the model, such as a binary classification,
multi-class classification, or regression.
Step 4: Choose an XAI technique: Choose an XAI technique that is appropriate for the type
of model and the desired level of explanation.
Step 5: Generate explanations: Use the chosen XAI technique to generate explanations for the
model's predictions. This can include feature importance, decision rules, visualizations, or
natural language explanations.
Step 6: Evaluate the explanations: Evaluate the quality and usefulness of the explanations
using metrics such as accuracy, completeness, consistency, and human interpretability.
Step 7: Improve the model and explanations: Use the insights gained from the explanations to
improve the machine learning model and the XAI technique.
Step 8: Deploy the model and explanations: Deploy the machine learning model and the XAI
technique in the intended application, ensuring that the explanations are easily accessible to
users.
PROGRAM:
import lime
import lime.lime_tabular
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
explainer = lime.lime_tabular.LimeTabularExplainer(X_train.values,
feature_names=X_train.columns,
class_names=['benign', 'malignant'],
discretize_continuous=True)
for i in range(2):
    x = X_test.iloc[[i]]
    explanation = explainer.explain_instance(x.values[0], rf.predict_proba)
    print('Instance', i + 1)
    print('Predicted class:', rf.predict(x))
    print(explanation.as_list())  # top feature contributions for this prediction
OUTPUT:
RESULT:
Thus the python program to implement Explainable AI (XAI) techniques has been executed
successfully.