
KURINJI COLLEGE OF ENGINEERING AND TECHNOLOGY – MANAPPARAI – 621 307

CS3491 – ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

LAB MANUAL

ACADEMIC YEAR 2022-23 (EVEN SEMESTER)

Regulation: 2021

Branch: B.E. – CSE

Year & Semester: II Year / IV Semester


LIST OF EXPERIMENTS:

1. Implementation of Uninformed search algorithms (BFS, DFS)

2. Implementation of Informed search algorithms (A*, memory-bounded A*)

3. Implement naïve Bayes models

4. Implement Bayesian Networks

5. Build Regression models

6. Build decision trees and random forests

7. Build SVM models

8. Implement ensembling techniques

9. Implement clustering algorithms

10. Implement EM for Bayesian networks

11. Build simple NN models

12. Build deep learning NN models


CS3491 – AIML

Course Outcomes

CO1: Explain AI applications and problem-solving agents.

CO2: Construct search algorithms for problem solving.

CO3: Apply different techniques for probabilistic reasoning.

CO4: Experiment with supervised learning models.

CO5: Examine ensemble techniques and unsupervised learning.

CO6: Describe deep learning neural network models.


INDEX

EX.NO. DATE EXPERIMENT NAME MARK SIGN.

1a  Implementation of Uninformed search algorithms (Breadth First Search)
1b  Uninformed Search Strategies (Depth First Search)
2a  Informed Search Strategies (A* Search)
2b  Informed search algorithms (memory-bounded A*)
3a  Implement naive Bayes models (Gaussian Naive Bayes)
3b  Implement naive Bayes models (Multinomial Naive Bayes)
4   Implement Bayesian Networks
5   Build Regression Models
6a  Build decision trees
6b  Build random forests
7   Build SVM models
8   Implement ensembling techniques
9a  Implement clustering algorithms (Hierarchical clustering)
9b  Implement clustering algorithms (Density-based clustering)
10  Implement EM for Bayesian networks
11  Build simple NN models
12  Build deep learning NN models


Ex. No: 1A Implementation of Uninformed search algorithms
(BREADTH FIRST SEARCH)

AIM

To write a Python program to implement Breadth First Search.

Algorithm:

1. Create an empty queue (for BFS) and add the initial state to it.

2. Create an empty set to store visited states.

3. While the queue is not empty:

 Remove the first state from the queue.
 If the state is the goal state, return the path from the initial state to the current state.
 Otherwise, generate all possible actions from the current state.
 For each action, generate the resulting state and check if it has been visited before.
 If it has not been visited, add it to the queue and mark it as visited.

4. If the queue is empty and no goal state has been found, return failure.
GRAPH

(The example graph is defined as an adjacency list inside the program below.)

PROGRAM:
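A minimal sketch of the algorithm above, assuming an example graph given as an adjacency-list dictionary (the graph, the function name bfs, and the start/goal nodes are illustrative choices, not fixed by the manual):

from collections import deque

# Example graph as an adjacency list (assumed for illustration)
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def bfs(start, goal):
    queue = deque([[start]])          # queue of paths, seeded with the initial state
    visited = {start}                 # set of visited states
    while queue:
        path = queue.popleft()        # remove the first path from the queue
        state = path[-1]
        if state == goal:             # goal test
            return path
        for neighbour in graph[state]:        # generate successor states
            if neighbour not in visited:
                visited.add(neighbour)        # mark as visited and enqueue
                queue.append(path + [neighbour])
    return None                       # queue exhausted: failure

print("Path:", bfs('A', 'F'))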
OUTPUT
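With the example graph assumed above, the program prints:

Path: ['A', 'C', 'F']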

RESULT

Thus the python program to implement Breadth First Search was done successfully.
EX.NO : 1B UNINFORMED SEARCH STRATEGIES

(DEPTH FIRST SEARCH)

AIM

To write a Python program to implement Depth First Search.

Algorithm:

1. Create an empty stack (for DFS) and add the initial state to it.

2. Create an empty set to store visited states.

3. While the stack is not empty:

 Remove the last state from the stack.
 If the state is the goal state, return the path from the initial state to the current state.
 Otherwise, generate all possible actions from the current state.
 For each action, generate the resulting state and check if it has been visited before.
 If it has not been visited, add it to the stack and mark it as visited.

4. If the stack is empty and no goal state has been found, return failure.
GRAPH

(The same example graph as in Ex. 1A is used, defined inside the program below.)

PROGRAM
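A minimal sketch, reusing the adjacency-list graph from Ex. 1A; the only change from BFS is that a stack (last in, first out) replaces the queue (the function name dfs is an illustrative choice):

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def dfs(start, goal):
    stack = [[start]]                 # stack of paths, seeded with the initial state
    visited = {start}
    while stack:
        path = stack.pop()            # remove the most recently added path
        state = path[-1]
        if state == goal:
            return path
        for neighbour in graph[state]:
            if neighbour not in visited:
                visited.add(neighbour)
                stack.append(path + [neighbour])
    return None                       # stack exhausted: failure

print("Path:", dfs('A', 'F'))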
OUTPUT
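With the example graph assumed above, the program prints:

Path: ['A', 'C', 'F']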

RESULT

Thus the python program to implement Depth First Search was done successfully.
EX.NO : 2A INFORMED SEARCH STRATEGIES

(A* SEARCH)

AIM

To write a Python program to implement A* Search.

Algorithm:

1. Create an open set and a closed set, both initially empty.

2. Add the initial state to the open set with a cost of 0 and an estimated total cost (f-score) equal to the heuristic value of the initial state.

3. While the open set is not empty:

 Choose the state with the lowest f-score from the open set.
 If this state is the goal state, return the path from the initial state to this state.
 Generate all possible actions from the current state.
 For each action, generate the resulting state and compute the cost to reach it by adding the cost of the current state and the cost of the action.
 If the resulting state is not in the closed set, or the new cost to reach it is less than the old cost, update its cost and estimated total cost and add it to the open set.
 Add the current state to the closed set.

4. If the open set is empty and no goal state has been found, return failure.
GRAPH

(An example weighted graph and a heuristic table are defined inside the program below.)

PROGRAM
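A minimal sketch, assuming a small weighted graph and an admissible heuristic table (both, together with the function name a_star, are illustrative choices):

import heapq

# Example weighted graph: state -> list of (neighbour, step cost)
graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 5)],
    'C': [('D', 1)],
    'D': []
}
# Admissible heuristic: estimated remaining cost to reach the goal D
heuristic = {'A': 3, 'B': 2, 'C': 1, 'D': 0}

def a_star(start, goal):
    # open set entries: (f-score, cost so far, state, path)
    open_set = [(heuristic[start], 0, start, [start])]
    closed = {}                        # best known cost per expanded state
    while open_set:
        f, g, state, path = heapq.heappop(open_set)   # lowest f-score first
        if state == goal:
            return path, g
        if state in closed and closed[state] <= g:
            continue
        closed[state] = g
        for neighbour, cost in graph[state]:
            new_g = g + cost
            if neighbour not in closed or new_g < closed[neighbour]:
                heapq.heappush(open_set,
                               (new_g + heuristic[neighbour], new_g,
                                neighbour, path + [neighbour]))
    return None, float('inf')          # open set exhausted: failure

path, cost = a_star('A', 'D')
print("Path:", path, "Cost:", cost)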
OUTPUT
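With the example graph and heuristic assumed above, the program prints:

Path: ['A', 'B', 'C', 'D'] Cost: 4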

RESULT

Thus the python program to implement A* Search was done successfully.


Ex. No:2b Informed search algorithms memory-bounded A*

Aim:

To write a Python program to solve problems using a memory-bounded A* informed search algorithm.

Algorithm:

1. Create an open set and a closed set, both initially empty.

2. Add the initial state to the open set with a cost of 0 and an estimated total cost

(f-score) of the heuristic value of the initial state.

3. While the open set is not empty:

a. Choose the state with the lowest f-score from the open set.

b. If this state is the goal state, return the path from the initial state to this state.

c. Generate all possible actions from the current state.

d. For each action, generate the resulting state and compute the cost to get to that state by
adding the cost of the current state plus the cost of the action.

e. If the resulting state is not in the closed set or the new cost to get there is less than the old
cost, update its cost and estimated total cost in the open set and add it to the open set.

f. Add the current state to the closed set.

g. If the size of the closed set plus the open set exceeds the maximum memory usage, remove
the state with the highest estimated total cost from the closed set and add it back to the open
set.

4. If the open set is empty and no goal state has been found, return failure.
Program :

from queue import PriorityQueue
import sys

def memory_bounded_a_star(start_node, goal_node, max_memory):
    frontier = PriorityQueue()
    frontier.put((0, start_node))
    explored = set()
    total_cost = {start_node: 0}
    while not frontier.empty():
        # Check if the memory limit has been reached
        # (approximate, via the size of the explored set)
        if sys.getsizeof(explored) > max_memory:
            return None
        _, current_node = frontier.get()
        if current_node == goal_node:
            # Reconstruct the path by following parent links
            path = []
            while current_node != start_node:
                path.append(current_node)
                current_node = current_node.parent
            path.append(start_node)
            path.reverse()
            return path
        explored.add(current_node)
        for child_node, cost in current_node.children():
            if child_node in explored:
                continue
            new_cost = total_cost[current_node] + cost
            if child_node not in total_cost or new_cost < total_cost[child_node]:
                total_cost[child_node] = new_cost
                priority = new_cost + child_node.heuristic(goal_node)
                frontier.put((priority, child_node))
    return None

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.cost = 1
    def __eq__(self, other):
        return self.state == other.state
    def __hash__(self):
        return hash(self.state)
    def heuristic(self, goal):
        # Simple heuristic for demonstration purposes
        return abs(self.state - goal.state)
    def children(self):
        # Generates all possible children of a given node
        children = []
        for action in [-1, 1]:
            child_state = self.state + action
            child_node = Node(child_state, self)
            children.append((child_node, child_node.cost))
        return children

# Example usage
start_node = Node(1)
goal_node = Node(10)
path = memory_bounded_a_star(start_node, goal_node, max_memory=1000000)
if path is None:
    print("Memory limit exceeded.")
else:
    print("Path:", [node.state for node in path])

class Node:

def __init__(self, state, parent=None):

self.state = state

self.parent = parent

self.cost = 1

def __eq__(self, other):

return self.state == other.state

def __hash__(self):

return hash(self.state)

def heuristic(self, goal):

# Simple heuristic for demonstration purposes

return abs(self.state - goal.state)

def children(self):

# Generates all possible children of a given node

children = []

for action in [-1, 1]:

child_state = self.state + action


child_node = Node(child_state, self)

children.append((child_node, child_node.cost))

return children

# Example usage

start_node = Node(1)

goal_node = Node(10)

path = memory_bounded_a_star(start_node, goal_node, max_memory=1000000)

if path is None:

print("Memory limit exceeded.")

else:

print([node.state for node in path])

OUTPUT :

Path: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

RESULT :-
Thus the python program has been written and executed successfully.
Ex. No:3A Implement naive Bayes models
(Gaussian Naive Bayes)
Aim:

To write a Python program to implement a naive Bayes model (Gaussian Naive Bayes).

Algorithm:

Input:

• Training dataset with features X and corresponding labels Y.

• Test dataset with features X_test.

Output:

• Predicted labels for the test dataset, Y_pred.

Steps:

1. Calculate the prior probability of each class in the training dataset, i.e., P(Y = c), where c is the class label.

2. Calculate the mean mu_c and variance sigma_c^2 of each feature for each class in the training dataset.

3. For each test instance in X_test, compute the likelihood of each feature value x under class c using the Gaussian probability density function (a standalone sketch of this step follows the list):

f(x | c) = (1 / (sqrt(2*pi) * sigma_c)) * exp(-((x - mu_c)^2) / (2 * sigma_c^2))

The posterior probability of class c is then proportional to the prior times the product of the feature likelihoods: P(Y = c | X = x_test) is proportional to P(Y = c) * product of f(x | c) over the features.

4. For each test instance in X_test, assign the class label with the highest posterior probability as the predicted label Y_pred.

5. Return Y_pred as the output.
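As a standalone illustration of step 3 (the function name gaussian_pdf and the example values are assumptions, separate from the lab program below):

import numpy as np

def gaussian_pdf(x, mu_c, sigma_c):
    # Gaussian likelihood of one feature value x under a class with
    # mean mu_c and standard deviation sigma_c
    return (1.0 / (np.sqrt(2 * np.pi) * sigma_c)) * np.exp(-((x - mu_c) ** 2) / (2 * sigma_c ** 2))

print(gaussian_pdf(5.2, 5.0, 0.5))   # about 0.737 for a value near the class mean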


Program :

from sklearn.naive_bayes import GaussianNB

from sklearn.datasets import load_iris

from sklearn.model_selection import train_test_split

from sklearn.metrics import accuracy_score

# Load iris dataset

data = load_iris()

X, y = data.data, data.target

# Split dataset into training and test sets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train Gaussian Naive Bayes model

gnb = GaussianNB()

gnb.fit(X_train, y_train)

# Predict labels for test set

y_pred = gnb.predict(X_test)

# Calculate accuracy of predictions

accuracy = accuracy_score(y_test, y_pred)

# Print results

print("Accuracy:", accuracy)
Output :

Accuracy: 1.0

RESULT :-

Thus the python program has been written and executed successfully.
Ex. No:3b Implement naive Bayes models
(Multinomial Naive Bayes)

Aim:

To write a Python program to implement a naive Bayes model (Multinomial Naive Bayes).

Algorithm:

1. Convert the training dataset into a frequency table where each row represents a document and each column represents a word in the vocabulary. The values in the table represent the frequency of each word in each document.

2. Calculate the prior probabilities of each class label by dividing the number of documents in each class by the total number of documents.

3. Calculate the conditional probabilities of each word given each class label, by computing the frequency of each word in each class and dividing it by the total number of words in that class.

4. For each document in the test dataset, calculate the posterior probability of each class label using the Naive Bayes formula (a toy illustration follows these steps):

P(class_label | document) is proportional to P(class_label) * P(word1 | class_label) * P(word2 | class_label) * ... * P(wordn | class_label)

where word1, word2, ..., wordn are the words in the document and P(word | class_label) is the conditional probability of that word given the class label.

5. Predict the class label with the highest posterior probability for each document in the test dataset.

6. Return the predicted class labels for the test dataset.
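A toy illustration of steps 3-5 with Laplace smoothing (the vocabulary, the counts, and the helper name word_prob are assumed for the example; the lab program below uses scikit-learn instead):

# Assumed toy counts: vocabulary ["good", "bad"];
# class "pos" word counts {good: 3, bad: 1}, class "neg" {good: 1, bad: 3};
# equal priors P(pos) = P(neg) = 0.5
def word_prob(count, total, vocab_size):
    # Laplace-smoothed conditional probability P(word | class)
    return (count + 1) / (total + vocab_size)

# Classify the document "good good"
p_pos = 0.5 * word_prob(3, 4, 2) * word_prob(3, 4, 2)
p_neg = 0.5 * word_prob(1, 4, 2) * word_prob(1, 4, 2)
print("pos" if p_pos > p_neg else "neg")   # prints: pos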


Program :

from sklearn.naive_bayes import MultinomialNB

from sklearn.feature_extraction.text import CountVectorizer

# Sample training data

train_data = ["this is a positive example", "this is a negative example",
              "another negative example", "yet another negative example"]

train_labels = ["positive", "negative", "negative", "negative"]

# Sample test data

test_data = ["this is a test", "this example is negative"]

# Create a CountVectorizer to convert the text data into numerical features

vectorizer = CountVectorizer()

# Fit the vectorizer to the training data and transform the data

train_features = vectorizer.fit_transform(train_data)

# Create a Multinomial Naive Bayes classifier and train it on the training data

clf = MultinomialNB()

clf.fit(train_features, train_labels)

# Transform the test data using the same vectorizer

test_features = vectorizer.transform(test_data)

# Use the trained classifier to predict the class labels for the test data

predicted_labels = clf.predict(test_features)

# Print the predicted class labels for the test data

print(predicted_labels)
Output :

['negative' 'negative']

RESULT :-

Thus the python program has been written and executed successfully.
Ex. No: 4 Implement Bayesian Networks

AIM:

To write a Python program to implement Bayesian networks.

ALGORITHM:

1. Import the numpy package

2. Set node1, node2, and node3 with respective details

3. Define a function to compute the joint probability of a set of nodes given their parents

4. Define a function to compute the probability of a node given its parents

5. Define a function to sample from a node given its parents

6. Sample from the network

7. Compute the joint probability

8. Print the joint probability


Program:

import numpy as np

# Each node is (name, parents, CPT giving P(node = 1 | parents))
node1 = ("Node1", [], {(): 0.5})
node2 = ("Node2", ["Node1"], {(True,): 0.7, (False,): 0.2})
node3 = ("Node3", ["Node1"], {(True,): 0.1, (False,): 0.8})

# Compute the joint probability of a sampled assignment node_val
def compute_joint_prob(nodes, node_val):
    joint_prob = 1.0
    for node in nodes:
        name, parents, cpt = node
        parents_val_tuple = tuple(node_val[parent] for parent in parents)
        p_true = cpt[parents_val_tuple]
        # probability of the value actually sampled (p if 1, 1 - p if 0)
        prob = p_true if node_val[name] else 1 - p_true
        joint_prob *= prob
    return joint_prob

# Compute the probability that a node is 1 given its parents
def compute_cond_prob(node, node_val):
    name, parents, cpt = node
    parents_val_tuple = tuple(node_val[parent] for parent in parents)
    return cpt[parents_val_tuple]

# Sample from a node given its parents
def sample_node(node, node_val):
    prob = compute_cond_prob(node, node_val)
    return np.random.binomial(1, prob)

# Sample from the network in topological order
# (integer samples 0/1 index the False/True CPT keys, since 1 == True in Python)
node_val = {}
for node in [node1, node2, node3]:
    node_val[node[0]] = sample_node(node, node_val)

# Compute the joint probability of the sampled assignment
joint_prob = compute_joint_prob([node1, node2, node3], node_val)
print("The joint probability is", joint_prob)
Output

The joint probability is 0.034999999999999996

(The value varies from run to run, since the assignment is sampled at random.)

RESULT

Thus the python program to implement Bayesian networks was done successfully.
Ex. No: 5 Build Regression Models

AIM:

To write a Python program to build regression models.

ALGORITHM:

1. Load the data from a CSV file using pandas.

2. Split the data into features and target variables.

3. Split the data into training and testing sets using the train_test_split function from scikit-learn.

4. Train a linear regression model using the training data by creating an instance of the

LinearRegression class and calling its fit method with the training data.

5. Evaluate the performance of the model using mean squared error on both the training and

testing data. Print the results to the console.


Program :

import numpy as np

import pandas as pd

from sklearn.linear_model import LinearRegression

from sklearn.model_selection import train_test_split

from sklearn.metrics import mean_squared_error

# Load data

data = pd.read_csv('data.csv')

# Split data into features and target

X = data.drop('target', axis=1)

y = data['target']

# Split data into training and testing sets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train linear regression model

reg = LinearRegression()

reg.fit(X_train, y_train)

# Evaluate model

train_pred = reg.predict(X_train)

test_pred = reg.predict(X_test)

print('Train MSE:', mean_squared_error(y_train, train_pred))

print('Test MSE:', mean_squared_error(y_test, test_pred))


Output :

Train MSE: 0.019218

Test MSE: 0.022715

RESULT :-

Thus the python program has been written and executed successfully.
Ex. No: 6A Build decision trees

AIM:

To write a Python program to build decision trees.

ALGORITHM:

Step 1: Collect and preprocess data

 Collect the dataset and prepare it for analysis
 Handle missing data and outliers
 Convert categorical variables to numerical values (if needed)

Step 2: Determine the root node

 Choose the feature that provides the most information gain (reduces uncertainty)
 Split the dataset based on the selected feature

Step 3: Build the tree recursively

 For each subset of the data, repeat steps 1 and 2
 Continue until each subset is either pure (only one class label) or too small to split further

Step 4: Prune the tree (optional)

 Remove branches that do not improve the model's accuracy
 Prevent overfitting by reducing the complexity of the tree

Step 5: Evaluate the performance of the tree

 Use a separate validation set to estimate the accuracy of the model
 Adjust hyperparameters to optimize the performance
Program :

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print('Decision Tree Accuracy:', accuracy)


Output :

Decision Tree Accuracy: 1.0

RESULT :-

Thus the python program has been written and executed successfully.
Ex. No: 6B Build random forests

AIM:
To write a Python program to build random forests.

ALGORITHM:

Step 1: Collect and preprocess data

 Collect the dataset and prepare it for analysis
 Handle missing data and outliers
 Convert categorical variables to numerical values (if needed)

Step 2: Randomly select features

 Choose a number of features to use at each split
 Randomly select that many features from the dataset

Step 3: Build decision trees on subsets of the data

 For each subset of the data, repeat steps 1 and 2
 Build a decision tree using the selected features and split criteria

Step 4: Aggregate the predictions of the trees

 For a new data point, pass it through each tree in the forest
 Aggregate the predictions of all trees (e.g., by majority vote)

Step 5: Evaluate the performance of the forest

 Use a separate validation set to estimate the accuracy of the model
 Adjust hyperparameters to optimize the performance
Program :

import pandas as pd

from sklearn.datasets import load_iris

from sklearn.model_selection import train_test_split

from sklearn.ensemble import RandomForestClassifier

from sklearn.metrics import accuracy_score

# Load the dataset

iris = load_iris()

X = iris.data

y = iris.target

# Split the dataset into training and testing sets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model

model = RandomForestClassifier(n_estimators=100, random_state=42)

model.fit(X_train, y_train)

# Make predictions on the test set

y_pred = model.predict(X_test)

# Evaluate the model

accuracy = accuracy_score(y_test, y_pred)

print('Random Forest Accuracy:', accuracy)


Output :

Random Forest Accuracy: 1.0

RESULT :-

Thus the python program has been written and executed successfully.
Ex. No: 7 Build SVM models

AIM:

To write a Python program to build SVM models.

ALGORITHM:

1. Import the required libraries numpy and sklearn's svm.
2. Create an array of data points X and an array of labels y.
3. Build a support vector machine (SVM) model using the linear kernel.
4. Train the model with the given data points X and labels y.
5. Predict the label of an unseen data point [2, 2].
6. Print the predicted label.
Program:
# Importing required libraries
import numpy as np
from sklearn import svm
# Data points
X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0]])
# Labels
y = np.array([1, 1, 2, 2])
# Building a Support Vector Machine (SVM) using a linear kernel
model = svm.SVC(kernel='linear')
# Training the model
model.fit(X, y)
# Prediction on the unseen data
print(model.predict([[2, 2]]))

Output

[1]

RESULT :-

Thus the python program has been written and executed successfully.
Ex. No: 8 Implement ensembling techniques

AIM:
To write a Python program to implement ensembling techniques.

ALGORITHM:
1. Load the breast cancer dataset and split the data into training and testing sets using

train_test_split() function.

2. Train 10 random forest models using bagging by randomly selecting 50% of the training

data for each model, and fit a random forest classifier with 100 trees to the selected data.

3. Test each model on the testing set and calculate the accuracy of each model using

accuracy_score() function.

4. Combine the predictions of the 10 models by taking the average of the predicted

probabilities for each class, round the predicted probabilities to the nearest integer, and

calculate the accuracy of the ensemble model using accuracy_score() function.

5. Print the accuracy of each individual model and the ensemble model.
Program :

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Bagging: train 10 models, each on a random 50% subset of the training data
models = []
for i in range(10):
    X_bag, _, y_bag, _ = train_test_split(X_train, y_train, test_size=0.5)
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_bag, y_bag)
    y_pred = model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    print(f"Model {i+1}: {acc}")
    models.append(model)

# Combine the models by averaging their predictions and rounding
y_preds = []
for model in models:
    y_pred = model.predict(X_test)
    y_preds.append(y_pred)

y_ensemble = sum(y_preds) / len(y_preds)
y_ensemble = [int(round(y)) for y in y_ensemble]
acc_ensemble = accuracy_score(y_test, y_ensemble)
print(f"Ensemble: {acc_ensemble}")
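The same bagging idea is also available off the shelf in scikit-learn; a minimal sketch using BaggingClassifier on the variables defined above (an optional extension, not part of the recorded output):

from sklearn.ensemble import BaggingClassifier

# 10 random-forest base models, each trained on a random 50% sample
bag = BaggingClassifier(RandomForestClassifier(n_estimators=100, random_state=42),
                        n_estimators=10, max_samples=0.5, random_state=42)
bag.fit(X_train, y_train)
print("Bagging accuracy:", accuracy_score(y_test, bag.predict(X_test)))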
Output :
Model 1: 0.9649122807017544

Model 2: 0.9473684210526315

Model 3: 0.956140350877193

Model 4: 0.9649122807017544

Model 5: 0.956140350877193

Model 6: 0.9649122807017544

Model 7: 0.956140350877193

Model 8: 0.956140350877193

Model 9: 0.956140350877193

Model 10: 0.9736842105263158

Ensemble: 0.956140350877193

RESULT :-
Thus the python program has been written and executed successfully.
Ex. No: 9A Implement clustering algorithms

(Hierarchical clustering)

AIM:

To write a Python program to implement clustering algorithms (hierarchical clustering).

ALGORITHM:
1. Begin with a dataset containing n data points.

2. Calculate the pairwise distance between all data points.

3. Create n clusters, one for each data point.

4. Find the closest pair of clusters based on the pairwise distance between their data points.

5. Merge the two closest clusters into a new cluster.

6. Update the pairwise distance matrix to reflect the distance between the new cluster and

the remaining clusters.

7. Repeat steps 4-6 until all data points are in a single cluster.
Program:

import numpy as np

from scipy.cluster.hierarchy import dendrogram, linkage

import matplotlib.pyplot as plt

# Generate sample data

X = np.array([[1,2], [1,4], [1,0], [4,2], [4,4], [4,0]])

# Perform hierarchical clustering

Z = linkage(X, 'ward')

# Plot dendrogram

plt.figure(figsize=(10, 5))

dendrogram(Z)

plt.show()
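To cut the hierarchy into flat cluster labels, scipy's fcluster can be applied to the same linkage matrix Z (an optional extension; cutting into 2 clusters is an assumed choice):

from scipy.cluster.hierarchy import fcluster

# Cut the dendrogram so that at most 2 flat clusters remain
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)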
Output :

A dendrogram of the six sample points is displayed.

RESULT :-
Thus the python program has been written and executed successfully.
Ex. No: 9b Implement clustering algorithms

(Density-based clustering)

AIM:

To write a Python program to implement clustering algorithms (density-based clustering).

ALGORITHM:
1. Choose an appropriate distance metric (e.g., Euclidean distance) to measure the

similarity between data points.

2. Choose the value of the radius eps around each data point that will be considered when

identifying dense regions. This value determines the sensitivity of the algorithm to noise

and outliers.

3. Choose the minimum number of points min_samples that must be found within a radius of eps around a data point for it to be considered a core point. Points that are within eps of a core point but are not themselves core points are border points; points that are neither core nor border points are noise points.

4. Randomly choose an unvisited data point p from the dataset.

5. Determine whether p is a core point, border point, or noise point based on the number of

points within a radius of eps around p.

6. If p is a core point, create a new cluster and add p and all its density-reachable neighbors

to the cluster.

7. If p is a border point, add it to any neighboring cluster that has not reached its

min_samples threshold.

8. Mark p as visited.

9. Repeat steps 4-8 until all data points have been visited.

10. Merge clusters that share border points.


Program:

from sklearn.cluster import DBSCAN

from sklearn.datasets import make_blobs

import matplotlib.pyplot as plt

# Generate some sample data

X, y = make_blobs(n_samples=1000, centers=3, random_state=42)

# Perform density-based clustering using the DBSCAN algorithm

db = DBSCAN(eps=0.5, min_samples=5).fit(X)

# Extract the labels and number of clusters

labels = db.labels_

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)

# Plot the clustered data

plt.scatter(X[:,0], X[:,1], c=labels)

plt.title(f"DBSCAN clustering - {n_clusters} clusters")

plt.show()
Output :

A scatter plot of the clustered points is displayed, with the number of discovered clusters in the title.

RESULT :-
Thus the python program has been written and executed successfully.
Ex. No: 10 Implement EM for Bayesian networks

AIM:

To write a Python program to implement EM for Bayesian networks.

ALGORITHM:
1. Define the structure of the Bayesian network

2. Define the parameters of the network, such as the conditional probability tables (CPDs)

3. Generate some synthetic data for the network

4. Initialize the model parameters with rough starting estimates

5. Repeat the following steps until convergence or a maximum number of iterations is

reached:

a) E-step: compute the expected sufficient statistics of the hidden variables given the

observed data and the current estimates of the parameters

b) M-step: update the parameters to maximize the expected log-likelihood of the

observed data under the current estimate of the hidden variables

6. Print the learned parameters


Program :

# A self-contained EM sketch for the two-node network C -> F, treating C as a
# hidden variable during learning (only F is observed). pgmpy's TabularCPD is
# used only to display the learned parameters.
import numpy as np
from pgmpy.factors.discrete import TabularCPD

# Define the parameters of the network used to generate synthetic data
p_C = np.array([0.6, 0.4])               # P(C)
p_F_given_C = np.array([[0.8, 0.3],      # P(F=0 | C=0), P(F=0 | C=1)
                        [0.2, 0.7]])     # P(F=1 | C=0), P(F=1 | C=1)

# Generate some synthetic data
rng = np.random.default_rng(0)
C = rng.choice(2, size=1000, p=p_C)
F = np.array([rng.choice(2, p=p_F_given_C[:, c]) for c in C])

# Initialize the model parameters with rough starting estimates
theta_C = np.array([0.5, 0.5])
theta_F = np.array([[0.6, 0.4],
                    [0.4, 0.6]])

for _ in range(50):
    # E-step: expected sufficient statistics, i.e. the posterior
    # responsibility P(C | F = f) for every sample
    joint = theta_C * theta_F[F, :]                 # shape (n_samples, 2)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the expected counts
    theta_C = resp.mean(axis=0)
    for f in (0, 1):
        theta_F[f] = resp[F == f].sum(axis=0) / resp.sum(axis=0)

# Print the learned parameters
cpd_C = TabularCPD('C', 2, theta_C.reshape(2, 1).tolist())
cpd_F = TabularCPD('F', 2, theta_F.tolist(), evidence=['C'], evidence_card=[2])
print(cpd_C)
print(cpd_F)
OUTPUT :

╒═════╤═══════╕

│ C_0 │ 0.686 │

├─────┼───────┤

│ C_1 │ 0.314 │

╘═════╧═══════╛

╒═════╤═════╤═════╕

│ C │ C_0 │ C_1 │

├─────┼─────┼─────┤

│ F_0 │ 0.8 │ 0.3 │

├─────┼─────┼─────┤

│ F_1 │ 0.2 │ 0.7 │

╘═════╧═════╧═════╛

(The learned values vary with the random data and the initialization.)

RESULT :-
Thus the python program has been written and executed successfully.
Ex. No: 11 Build simple NN models

AIM

To write a Python program to build simple neural network models.

ALGORITHM:

1. Import the necessary libraries: numpy, matplotlib, and sklearn.neural_network


(MLPClassifier).
2. Create a sample dataset x and y.
3. Create and train the model using MLPClassifier.
4. Make predictions by passing in the new data points.
5. Visualize the results .
Program

#import libraries

import numpy as np

import matplotlib.pyplot as plt

from sklearn.neural_network import MLPClassifier

#create a sample dataset

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

y = np.array([0, 1, 1, 0])

#create and train the model

model = MLPClassifier(hidden_layer_sizes=(2,), activation='relu', solver='lbfgs')

model.fit(x, y)

#make predictions

predictions = model.predict([[2, 2], [2, 3]])

print(predictions)

#visualize the results

plt.scatter(x[:,0], x[:,1], c=y)

plt.xlabel('x1')

plt.ylabel('x2')

plt.title('Neural Network Model')

plt.show()
Output:

The predicted labels for the two new points are printed (the exact values depend on the random weight initialization), and a scatter plot of the training data is displayed.

RESULT :-
Thus the python program has been written and executed successfully.
Ex. No: 12 Build deep learning NN models

AIM:
To write a Python program to build deep learning neural network models.

ALGORITHM:

1. Load the MNIST dataset using mnist.load_data() from the keras.datasets module.

2. Preprocess the data by reshaping the input data to a 1D array, converting the data type to

float32, normalizing the input data to values between 0 and 1, and converting the target

variable to categorical using np_utils.to_categorical().

3. Define the neural network architecture using the Sequential() class from Keras. The

model should have an input layer of 784 nodes, two hidden layers of 512 nodes each with

ReLU activation and dropout layers with a rate of 0.2, and an output layer of 10 nodes

with softmax activation.

4. Compile the model using compile() with 'categorical_crossentropy' as the loss

function, 'adam' as the optimizer, and 'accuracy' as the evaluation metric.

5. Train the model using fit() with the preprocessed training data, a batch size of 128, 10 epochs, and the validation data. Finally, evaluate the model using evaluate() with the preprocessed test data and print the test loss and accuracy.
Program :
import numpy as np

from keras.datasets import mnist

from keras.models import Sequential

from keras.layers import Dense, Dropout

from keras.utils import np_utils

# Load MNIST dataset

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Reshape the input data to a 1D array

X_train = X_train.reshape(X_train.shape[0], 784)

X_test = X_test.reshape(X_test.shape[0], 784)

# Convert data type to float32 and normalize the input data to values between 0 and 1

X_train = X_train.astype('float32')

X_test = X_test.astype('float32')

X_train /= 255

X_test /= 255

# Convert the target variable to categorical

y_train = np_utils.to_categorical(y_train, 10)

y_test = np_utils.to_categorical(y_test, 10)

# Define the model architecture

model = Sequential()

model.add(Dense(512, input_shape=(784,), activation='relu'))

model.add(Dropout(0.2))

model.add(Dense(512, activation='relu'))

model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

# Compile the model

model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])

# Train the model

model.fit(X_train, y_train, batch_size=128, epochs=10, verbose=1,
          validation_data=(X_test, y_test))

# Evaluate the model on the test data

score = model.evaluate(X_test, y_test, verbose=0)

print('Test loss:', score[0])

print('Test accuracy:', score[1])

Output:

Test loss: 0.067

Test accuracy: 0.978

RESULT :-
Thus the python program has been written and executed successfully.
