Program

This document outlines the implementation of various search algorithms and machine learning models in Python. It includes detailed steps and code for Breadth-First Search (BFS), Depth-First Search (DFS), A* search, memory-bounded A* search, Naïve Bayes models, Bayesian networks, regression models, and decision trees. Each section concludes with a result statement confirming the successful completion of the respective implementation.


Exp.no.1a Implementation of Uninformed search algorithms (BFS, DFS)

Aim:
To write a Python program to implement Breadth First Search (BFS).

Algorithm:

1. Start
2. Put any one of the graph's vertices at the back of the queue.
3. Take the front item of the queue and add it to the visited list.
4. Create a list of that vertex's adjacent nodes. Add those that are not in the visited
list to the rear of the queue.
5. Repeat steps 3 and 4 until the queue is empty.
6. Stop

Program:
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = []   # List for visited nodes
queue = []     # Initialize a queue

def bfs(visited, graph, node):   # Function for BFS
    visited.append(node)
    queue.append(node)
    while queue:                 # Loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')   # Function calling

Result:
Thus, the Python program to implement Breadth First Search (BFS) was developed
successfully.
Exp.no.1b Implementation of Uninformed search algorithms (BFS, DFS)

Aim:
To write a Python program to implement Depth First Search (DFS).

Algorithm:

1. Start
2. Put any one of the graph's vertices on top of the stack.
3. Take the top item of the stack and add it to the visited list.
4. Create a list of that vertex's adjacent nodes. Add the ones that are not in the
visited list to the top of the stack.
5. Repeat steps 3 and 4 until the stack is empty.
6. Stop

Program:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}

visited = set()   # Set to keep track of visited nodes of the graph

def dfs(visited, graph, node):   # Function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')

Result:
Thus the Python program to implement Depth First Search (DFS) was developed
successfully.
Exp.no.2a Implementation of Informed search algorithms (A*, memory-bounded A*)

Aim:
To write a Python program to implement A* search algorithm.

Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Repeat the following steps until the queue is empty:
: Pop the node with the lowest cost + heuristic from the queue.
: If the current node is the goal, return the path to the goal.
: If the current node has already been visited, skip it.
: Mark the current node as visited.
: Expand the current node and add its neighbors to the queue.
Step 4: If the queue is empty and the goal has not been found, return None (no path found).
Step 5: Stop

Program:
import heapq

class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic

    def __lt__(self, other):
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)

def astar(start, goal, graph):
    heap = []
    heapq.heappush(heap, (0, Node(start, None, 0, 0)))
    visited = set()

    while heap:
        (priority, current) = heapq.heappop(heap)

        if current.state == goal:
            # Reconstruct the path by following parent links
            path = []
            while current is not None:
                path.append(current.state)
                current = current.parent
            return path[::-1]  # Return reversed path

        if current.state in visited:
            continue

        visited.add(current.state)
        for state, cost in graph[current.state].items():
            if state not in visited:
                heuristic = 0  # replace with your heuristic function
                heapq.heappush(heap, (current.cost + cost + heuristic,
                                      Node(state, current, current.cost + cost, heuristic)))

    return None  # No path found

graph = {
    'A': {'B': 1, 'D': 3},
    'B': {'A': 1, 'C': 2, 'D': 4},
    'C': {'B': 2, 'D': 5, 'E': 2},
    'D': {'A': 3, 'B': 4, 'C': 5, 'E': 3},
    'E': {'C': 2, 'D': 3}
}

start = 'A'
goal = 'E'
result = astar(start, goal, graph)
print(result)
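
With the placeholder heuristic of 0, A* behaves like uniform-cost (Dijkstra) search, and for this graph it should print the least-cost path ['A', 'B', 'C', 'E']. A minimal sketch of a real heuristic, using a hypothetical table of optimistic estimates of the remaining cost to the goal 'E' (an admissible heuristic must never overestimate):

# Hypothetical optimistic estimates of the remaining cost to goal 'E'
h_table = {'A': 3, 'B': 3, 'C': 2, 'D': 3, 'E': 0}

def heuristic(state):
    return h_table[state]  # use in place of "heuristic = 0" above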

Result:
Thus the python program for A* Search was developed and the output was verified
successfully.
Exp.no.2b Implementation of Informed search algorithms (A*, memory-bounded A*)

Aim:
To write a Python program to implement memory-bounded A* search algorithm.

Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Set a counter to keep track of the number of nodes expanded.
Step 4: Repeat the following steps until the queue is empty or the node counter exceeds the
max_nodes:
: Pop the node with the lowest cost + heuristic from the queue.
: If the current node is the goal, return the path to the goal.
: If the current node has already been visited, skip it.
: Mark the current node as visited.
: Increment the node counter.
: Expand the current node and add its neighbors to the queue.
Step 5: If the queue is empty and the goal has not been found, return None (no path found).
Step 6: Stop

Program:
import heapq

class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic

    def __lt__(self, other):
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)

def heuristic_function(state, goal):
    """Example heuristic function (replace with a proper one if needed)."""
    return 0  # Replace this with an actual heuristic

def astar(start, goal, graph, max_nodes):
    heap = []
    heapq.heappush(heap, (0, Node(start, None, 0, heuristic_function(start, goal))))
    visited = set()
    node_counter = 0

    while heap and node_counter < max_nodes:
        (priority, current) = heapq.heappop(heap)

        if current.state == goal:
            path = []
            while current is not None:
                path.append(current.state)
                current = current.parent
            return path[::-1]  # Return the reversed path

        if current.state in visited:
            continue

        visited.add(current.state)
        node_counter += 1

        for state, cost in graph[current.state].items():
            if state not in visited:
                heuristic = heuristic_function(state, goal)
                heapq.heappush(heap, (current.cost + cost + heuristic,
                                      Node(state, current, current.cost + cost, heuristic)))

    return None  # No path found

# Example graph
graph = {
'A': {'B': 1, 'C': 4},
'B': {'A': 1, 'C': 2, 'D': 5},
'C': {'A': 4, 'B': 2, 'D': 1},
'D': {'B': 5, 'C': 1}
}

# Driver Code
start = 'A'
goal = 'D'
max_nodes = 10
result = astar(start, goal, graph, max_nodes)
print("Path found using Memory Bounded A*:", result)

Result:
Thus the python program for memory-bounded A* search was developed and the
output was verified successfully.
Exp.no.3 Implement Naive Bayes models

Aim:
To write a python program to implement Naïve Bayes model.

Algorithm:

Step 1. Load the libraries: import the required libraries such as pandas, numpy, and sklearn.
Step 2. Load the data into a pandas dataframe.
Step 3. Clean and preprocess the data as necessary. For example, you can handle missing
values, convert categorical variables into numerical variables, and normalize the data.
Step 4. Split the data into training and test sets using the train_test_split function from scikit-learn.
Step 5. Train the Gaussian Naive Bayes model using the training data.
Step 6. Evaluate the performance of the model using the test data and the accuracy_score
function from scikit-learn.
Step 7. Finally, you can use the trained model to make predictions on new data.

Program:
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import CountVectorizer

# ==========================================
# 1️⃣ Gaussian Naïve Bayes (For Numeric Data)
# ==========================================
print("\n=== Gaussian Naïve Bayes (Numbers) ===")

# Sample numeric data (height, weight → class)


X_num = [[180, 75], [160, 55], [170, 65], [150, 50], [175, 68], [165, 60]]
y_num = [1, 0, 1, 0, 1, 0] # 1 = Male, 0 = Female

# Split the data


X_train, X_test, y_train, y_test = train_test_split(X_num, y_num, test_size=0.3,
random_state=42)

# Train Gaussian Naïve Bayes


gnb = GaussianNB()
gnb.fit(X_train, y_train)

# Predict & Evaluate


y_pred = gnb.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}")

# ==========================================
# 2️⃣ Multinomial Naïve Bayes (For Text)
# ==========================================
print("\n=== Multinomial Naïve Bayes (Text) ===")

# Sample text data (Spam detection)


texts = ["Win a lottery now!", "Meeting at 10 AM", "Exclusive deal! Buy now!", "Hello, how are you?"]
labels = [1, 0, 1, 0] # 1 = Spam, 0 = Not Spam

# Convert text into numerical format


vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(texts)

# Train Multinomial Naïve Bayes


mnb = MultinomialNB()
mnb.fit(X_text, labels)

# Predict & Evaluate


y_pred_text = mnb.predict(X_text)
print(f"Predicted Labels: {y_pred_text.tolist()}")

Result:
Thus the Python program for implementing Naïve Bayes model was developed and the
output was verified successfully.
Exp.no.4 BAYESIAN NETWORKS

Aim:
To write a python program to implement a Bayesian network for Traffic.

Algorithm:

Step 1. Import the required libraries: tkinter, and BayesianNetwork, TabularCPD, and
VariableElimination from pgmpy.
Step 2. Define the structure of the Bayesian network: Rain and Accident are parents of Traffic.
Step 3. Define the prior probability tables (CPDs) for Rain and Accident.
Step 4. Define the conditional probability table for Traffic given Rain and Accident, and
add all CPDs to the model.
Step 5. Create a VariableElimination object for inference on the model.
Step 6. Build a small tkinter GUI with checkboxes to set the Rain and Accident evidence.
Step 7. On each button press, query the posterior probability of Traffic given the
selected evidence and display it.
Step 8. Stop

Program:
import tkinter as tk
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Define Bayesian Network


model = BayesianNetwork([('Rain', 'Traffic'), ('Accident', 'Traffic')])
model.add_cpds(
TabularCPD('Rain', 2, [[0.7], [0.3]]),
TabularCPD('Accident', 2, [[0.9], [0.1]]),
TabularCPD('Traffic', 2, [[0.9, 0.6, 0.7, 0.1], [0.1, 0.4, 0.3, 0.9]], ['Rain', 'Accident'], [2, 2])
)
infer = VariableElimination(model)

# GUI function
def calculate():
result = infer.query(['Traffic'], evidence={'Rain': rain.get(), 'Accident': accident.get()})
label.config(text=f"No Traffic: {result.values[0]:.2f}\nTraffic: {result.values[1]:.2f}")

# GUI setup
root = tk.Tk()
rain, accident = tk.IntVar(), tk.IntVar()
tk.Label(root, text="Rain?").pack()
tk.Checkbutton(root, variable=rain).pack()
tk.Label(root, text="Accident?").pack()
tk.Checkbutton(root, variable=accident).pack()
tk.Button(root, text="Check", command=calculate).pack()
label = tk.Label(root, text="Traffic Probability")
label.pack()
root.mainloop()
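
The same inference can also be run without the GUI; a minimal sketch using the infer object defined above:

# Query P(Traffic) directly, e.g. for rain and no accident
result = infer.query(['Traffic'], evidence={'Rain': 1, 'Accident': 0})
print(result)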

Result:
Thus, the Python program for implementing Bayesian Networks was successfully
developed and the output was verified.
Exp.No.5 REGRESSION MODEL

Aim:
To write a Python program to build Regression models

Algorithm:
1. Import the necessary libraries: numpy, matplotlib.pyplot, train_test_split, and
LinearRegression.
2. Create numpy arrays for the input (square footage) and output (house price) values.
3. Reshape the input into a column vector and split the data into training and test sets
using train_test_split.
4. Create an instance of the LinearRegression model.
5. Fit the model to the training data.
6. Use the predict() method to predict house prices for the test inputs.
7. Plot the actual data points with matplotlib.pyplot.scatter() and the fitted regression
line with matplotlib.pyplot.plot().
8. Optionally, calculate the mean squared error and R-squared values with
mean_squared_error() and r2_score() (a sketch follows the program).

Program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X = np.array([500, 600, 700, 800, 900, 1000, 1100, 1200]).reshape(-1, 1)
y = np.array([150, 180, 200, 220, 250, 280, 300, 320])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)

plt.scatter(X, y, color='blue', label="Actual Data")
plt.plot(X, model.predict(X), color='red', label="Regression Line")
plt.xlabel("Square Footage")
plt.ylabel("House Price ($1000s)")
plt.title("Simple Linear Regression")
plt.legend()
plt.show()

print(f"Predicted Prices: {y_pred}")


Result:
Thus the Python program to build a simple linear Regression model was developed
successfully.
Exp.No.6 DECISION TREE AND RANDOM FOREST

Aim:
To write a Python program to build decision tree and random forest.

Algorithm:

1. Import the necessary libraries: numpy, pandas, matplotlib.pyplot, train_test_split,
DecisionTreeClassifier, plot_tree, RandomForestClassifier, and accuracy_score.

2. Create a small dataset of Age, Salary, and Buy_Car values in a pandas DataFrame.

3. Extract the features into X (Age, Salary) and the target variable into y (Buy_Car).

4. Split the data into training and testing sets using the train_test_split function.

5. Create a DecisionTreeClassifier object, fit it to the training data, and print its
accuracy on the test data.

6. Create a RandomForestClassifier object with 5 estimators, fit it to the training data,
and print its accuracy on the test data.

7. Visualize the decision tree using plot_tree (a sketch for visualizing one tree of the
random forest follows the program).

Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = {'Age': [25, 30, 35, 40, 45, 50, 55, 60],
'Salary': [30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000],
'Buy_Car': [0, 0, 0, 1, 1, 1, 1, 1]}

df = pd.DataFrame(data)

X = df[['Age', 'Salary']]
y = df['Buy_Car']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)


dt_model = DecisionTreeClassifier()
dt_model.fit(X_train, y_train)
y_pred_dt = dt_model.predict(X_test)
print(f"Decision Tree Accuracy: {accuracy_score(y_test, y_pred_dt):.2f}")

rf_model = RandomForestClassifier(n_estimators=5, random_state=42)


rf_model.fit(X_train, y_train)
y_pred_rf = rf_model.predict(X_test)
print(f"Random Forest Accuracy: {accuracy_score(y_test, y_pred_rf):.2f}")

plt.figure(figsize=(8,5))
plot_tree(dt_model, feature_names=['Age', 'Salary'], class_names=['No', 'Yes'], filled=True)
plt.title("Decision Tree")
plt.show()
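
The random forest can be visualized the same way (step 7 of the algorithm); a minimal sketch that plots the first of its fitted trees, which scikit-learn exposes via the estimators_ attribute:

# Visualize one tree of the random forest
plt.figure(figsize=(8, 5))
plot_tree(rf_model.estimators_[0], feature_names=['Age', 'Salary'], class_names=['No', 'Yes'], filled=True)
plt.title("First Tree of the Random Forest")
plt.show()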

Result:
Thus the Python program to build decision tree and random forest was developed
successfully.
Exp.No.7 SVM MODELS

Aim:
To write a Python program to build SVM model.

Algorithm:

1. Import the necessary libraries: numpy, pandas, matplotlib.pyplot, train_test_split,
SVC, and accuracy_score.

2. Create a small dataset of Age, Salary, and Buy_Car values in a pandas DataFrame, and
extract the features (X) and labels (y).

3. Create an SVM classifier with a linear kernel using SVC(kernel='linear').

4. Split the data into training and test sets, train the classifier with
svm_model.fit(X_train, y_train), and print the accuracy on the test set.

5. Plot the data points using plt.scatter(), coloring each point by its label.

6. Optionally, evaluate the decision function on a meshgrid built with np.meshgrid()
over the feature ranges, and draw the decision boundary and margins with
plt.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--']); a sketch follows the program.

7. Show the plot using plt.show().

Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

data = {'Age': [22, 25, 30, 35, 40, 45, 50, 55, 60, 65],
'Salary': [25000, 30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 110000],
'Buy_Car': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]}

df = pd.DataFrame(data)

X = df[['Age', 'Salary']]
y = df['Buy_Car']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

svm_model = SVC(kernel='linear')
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)


print(f"SVM Model Accuracy: {accuracy:.2f}")

plt.scatter(df['Age'], df['Salary'], c=df['Buy_Car'], cmap='coolwarm', edgecolors='k')


plt.xlabel("Age")
plt.ylabel("Salary")
plt.title("SVM Classification (Buy Car)")
plt.show()
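
A sketch of the decision-boundary plot described in step 6 of the algorithm, assuming the svm_model trained above (the meshgrid ranges are chosen to cover the sample data):

# Evaluate the decision function on a meshgrid and draw the boundary and margins
xx, yy = np.meshgrid(np.linspace(20, 70, 100), np.linspace(20000, 115000, 100))
Z = svm_model.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.scatter(df['Age'], df['Salary'], c=df['Buy_Car'], cmap='coolwarm', edgecolors='k')
plt.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--'])
plt.xlabel("Age")
plt.ylabel("Salary")
plt.title("SVM Decision Boundary")
plt.show()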

Result:
Thus, the Python program to build an SVM model was developed, and the output was
successfully verified.
Exp.no.8 Implement Ensembling techniques

Aim:

To implement ensembling techniques (a Random Forest bagging ensemble, with a
blending sketch) on the Iris dataset.

Algorithm:

1. Split the training dataset into train, validation, and test sets.
2. Fit all the base models using the train dataset.
3. Make predictions on the validation and test datasets.
4. Use these predictions as features (meta-features) to build a second-level model.
5. Use this second-level model to make the final predictions on the test meta-features
(a blending sketch follows the program).

Program:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris

# Load the dataset


iris = load_iris()
X = iris.data[:, :2] # Use only the first two features for visualization
y = iris.target

# Split data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train the Random Forest model


model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Make predictions
y_pred = model.predict(X_test)

# Print accuracy
print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}")

# Visualize decision boundaries


x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.contourf(xx, yy, Z, alpha=0.3)


plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k', marker='o')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.title("Random Forest Decision Boundary")
plt.show()
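
The program above demonstrates a bagging ensemble (Random Forest); blending as described in the algorithm can be sketched on the same data as follows (LogisticRegression as a hypothetical meta-model):

# Blending sketch: base models -> meta-features -> second-level model
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Split the training data further into base-training and validation parts (step 1)
X_base, X_val, y_base, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=42)

# Fit the base models on the base-training split (step 2)
base_models = [DecisionTreeClassifier(random_state=42),
               RandomForestClassifier(n_estimators=50, random_state=42)]
for m in base_models:
    m.fit(X_base, y_base)

# Base-model predictions become meta-features (steps 3-4)
meta_val = np.column_stack([m.predict(X_val) for m in base_models])
meta_test = np.column_stack([m.predict(X_test) for m in base_models])

# Train the second-level model and make the final predictions (step 5)
meta_model = LogisticRegression(max_iter=1000)
meta_model.fit(meta_val, y_val)
print(f"Blending Accuracy: {accuracy_score(y_test, meta_model.predict(meta_test)):.2f}")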

Result:
Thus the program to implement ensembling techniques has been executed
successfully and the output was verified.
Exp.No.9 Implement clustering algorithms

Aim:
To implement the K-Means clustering algorithm to cluster the Iris dataset.

Algorithm:
Step-1: Choose the number K of clusters.
Step-2: Initialize K centroids (for example randomly, or with k-means++).
Step-3: Assign each data point to its nearest centroid using the Euclidean distance.
Step-4: Recompute each centroid as the mean of the data points assigned to it.
Step-5: Repeat steps 3 and 4 until the cluster assignments no longer change (or a
maximum number of iterations is reached).
Step-6: Our model is ready (a sketch of the elbow method for choosing K follows the program).

Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Load the Iris dataset


iris = load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)

# Standardize the data


scaler = StandardScaler()
df_scaled = scaler.fit_transform(df)

# Apply K-Means Clustering


kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
df['Cluster'] = kmeans.fit_predict(df_scaled)

# Plot clusters using first two features


plt.figure(figsize=(8, 6))
plt.scatter(df.iloc[:, 0], df.iloc[:, 1], c=df['Cluster'], cmap='viridis', edgecolors='k', marker='o')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.title("K-Means Clustering on Iris Dataset")
plt.colorbar(label="Cluster")
plt.show()
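
A common way to choose the number of clusters K is the elbow method; a minimal sketch using the scaled data above:

# Elbow method: plot the inertia (within-cluster sum of squares) for K = 1..9
inertias = [KMeans(n_clusters=k, random_state=42, n_init=10).fit(df_scaled).inertia_
            for k in range(1, 10)]
plt.figure(figsize=(6, 4))
plt.plot(range(1, 10), inertias, marker='o')
plt.xlabel("Number of clusters K")
plt.ylabel("Inertia")
plt.title("Elbow Method")
plt.show()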

Result:
Thus the program to implement the K-Means clustering algorithm for the Iris dataset
has been executed successfully and the output was verified.
Exp.No.10 Implement EM for Bayesian Networks

Aim:
To write a python program to implement EM for Bayesian Networks.

Algorithm:

1. Start with a Bayesian network that has some variables that are not directly
observed. For example, suppose we have a Bayesian network with variables A, B,
C, and D, where A and D are observed and B and C are latent.
2. Initialize the parameters of the network. This includes the conditional
probabilities for each variable given its parents, as well as the prior
probabilities for the root variables.
3. E-step: Compute the expected sufficient statistics for the latent variables. This
involves computing the posterior probability distribution over the latent variables
given the observed data and the current parameter estimates. This can be done
using the forward-backward algorithm or the belief propagation algorithm.
4. M-step: Update the parameter estimates using the expected sufficient statistics
computed in step 3. This involves maximizing the likelihood of the data with
respect to the parameters of the network, given the expected sufficient
statistics.
5. Repeat steps 3-4 until convergence. Convergence can be measured by
monitoring the change in the log-likelihood of the data, or by monitoring the
change in the parameter estimates.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import BayesianEstimator
from pgmpy.inference import VariableElimination

# Load dataset
file_path = "C:/Users/navee/OneDrive/Desktop/AIML PROGRAMS/heart.csv"
heartDisease = pd.read_csv(file_path)

# Ensure column names are lowercase


heartDisease.columns = heartDisease.columns.str.lower()

# Print dataset info


print("Columns in dataset:", heartDisease.columns)

# Define Bayesian Network structure


model = BayesianNetwork([
('age', 'target'), ('sex', 'target'), ('exang', 'target'),
('cp', 'target'), ('target', 'restecg'), ('target', 'chol')
])

# Train model parameters using BayesianEstimator
# (a sketch of true Expectation-Maximization follows the program)
model.fit(heartDisease, estimator=BayesianEstimator)

# Inference
infer = VariableElimination(model)

print("\nProbability of Heart Disease given evidence (restecg=1):")


q1 = infer.query(variables=['target'], evidence={'restecg': 1})
print(q1)

print("\nProbability of Heart Disease given evidence (cp=2):")


q2 = infer.query(variables=['target'], evidence={'cp': 2})
print(q2)

# Visualize Bayesian Network


plt.figure(figsize=(6, 4))
G = nx.DiGraph(model.edges())
nx.draw(G, with_labels=True, node_color='skyblue', edge_color='black', node_size=2000,
font_size=9)
plt.title("Bayesian Network Structure for Heart Disease Prediction")
plt.show()
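
Note that BayesianEstimator performs Bayesian parameter estimation on fully observed data. If the installed pgmpy version provides the ExpectationMaximization estimator (an assumption about the environment), true EM parameter learning can be sketched as:

# Sketch: EM-based parameter learning (assumes pgmpy ships ExpectationMaximization)
from pgmpy.estimators import ExpectationMaximization

em = ExpectationMaximization(BayesianNetwork(model.edges()), heartDisease)
print(em.get_parameters())  # list of CPDs estimated by EM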

Result:
Thus the Python program to Implement EM for Bayesian Networks was developed
successfully.
Exp.No.11 Build Simple NN Models

Aim:

To write a python program to build simple NN models.

Algorithm:

1. Define the input and output data.
2. Choose the number of layers and neurons in each layer. This depends on the
problem you are trying to solve.
3. Define the activation function for each layer. Common choices are ReLU, sigmoid,
and tanh.
4. Initialize the weights and biases for each neuron in the network. This can be
done randomly or using a pre-trained model.
5. Define the loss function and optimizer to be used during training. The loss
function measures how well the model is doing, while the optimizer updates the
weights and biases to minimize the loss.
6. Train the model on the input data using the defined loss function and optimizer.
This involves forward propagation to compute the output of the model, and
backpropagation to compute the gradients of the loss with respect to the weights
and biases. The optimizer then updates the weights and biases based on the
gradients.
7. Evaluate the performance of the model on new data using metrics such as
accuracy, precision, recall, and F1 score.

Program:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
import random

# Set random seed for reproducibility


np.random.seed(42)
tf.random.set_seed(42)
random.seed(42)

# Generate some structured sample data


X = np.random.rand(1000, 10) # 1000 samples with 10 features
y = (X[:, 0] + X[:, 1] > 1).astype(int) # Labels based on first two features

# Create a simple neural network model


model = keras.Sequential([
layers.Dense(32, activation='relu', input_shape=(10,)),
layers.Dense(16, activation='relu'),
layers.Dense(1, activation='sigmoid') # Output layer for binary classification
])

# Compile the model


model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])

# Add early stopping to prevent overfitting


early_stopping = keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True)

# Train the model


history = model.fit(X, y, epochs=10, validation_split=0.2, callbacks=[early_stopping])

# Plot the training history


plt.figure(figsize=(8, 5))
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.grid(True)
plt.show()
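
Step 7 of the algorithm calls for evaluating the model; a minimal sketch (evaluated here on the training data for illustration; a held-out test set would give an honest estimate):

# Evaluate the trained model
loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"Loss: {loss:.4f}, Accuracy: {accuracy:.2f}")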

Result:
Thus the Python program to build simple NN Models was developed
successfully.
Exp.No.12 Deep Learning NN Models

Aim:

To write a python program to implement deep learning of NN models.


Algorithm:

1. Import the necessary libraries, such as numpy and keras.
2. Load or generate your dataset. This can be done using numpy or any other data
manipulation library.
3. Preprocess your data by performing any necessary normalization, scaling, or
other transformations.
3. Preprocess your data by performing any necessary normalization, scaling, or
other transformations.
4. Define your neural network architecture using the Keras Sequential API. Add
layers to the model using the add() method, specifying the number of units,
activation function, and input dimensions for each layer.
5. Compile your model using the compile() method. Specify the loss function,
optimizer, and any evaluation metrics you want to use.
6. Train your model using the fit() method. Specify the training data, validation
data, batch size, and number of epochs.
7. Evaluate your model using the evaluate() method. This will give you the loss and
accuracy metrics on the test set.
8. Use your trained model to make predictions on new data using the predict()
method.

Program:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Generate non-linear sample data (moons dataset)


X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize the dataset


scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Build a deep NN with Batch Normalization & Dropout


model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(2,)),
layers.BatchNormalization(),
layers.Dropout(0.3),
layers.Dense(32, activation='relu'),
layers.BatchNormalization(),
layers.Dropout(0.3),
layers.Dense(1, activation='sigmoid')
])

# Compile the model with learning rate scheduling


lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3,
verbose=1)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
loss='binary_crossentropy',
metrics=['accuracy'])

# Train the model with early stopping


early_stopping = keras.callbacks.EarlyStopping(patience=5,
restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test),
callbacks=[early_stopping, lr_scheduler])

# Plot training history


plt.figure(figsize=(8, 5))
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Deep Learning in Training and Validation Accuracy')
plt.legend()
plt.grid(True)
plt.show()
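
Steps 7 and 8 of the algorithm call for evaluate() and predict(); a minimal sketch using the trained model and the held-out test set:

# Evaluate on the test set and predict classes for a few points
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {accuracy:.2f}")
predictions = (model.predict(X_test[:5]) > 0.5).astype(int).ravel()
print(f"Predicted classes: {predictions.tolist()}")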

Result:
Thus the Python program to implement deep learning of NN Models was developed
successfully.
