Lab Programs CS3491
Ex.No:1a
IMPLEMENTATION OF BASIC SEARCH STRATEGIES – BFS
AIM:
To implement a Python program for Breadth-First Search (BFS).
ALGORITHM:
Step 1: We start the process by considering any random node as the starting
vertex.
Step 2: We enqueue (insert) it to the queue and mark it as visited.
Step 2: We then mark and enqueue all of its unvisited neighbours, moving on to the next
depth level once the current level is exhausted.
Step 4: Each vertex is removed (dequeued) from the queue once its neighbours have been explored.
Step 5: The process ends when the queue becomes empty.
PROGRAM:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}
visited = []   # list to keep track of visited nodes
queue = []     # queue of nodes still to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("The Breadth-First Search")
bfs(visited, graph, '5')
OUTPUT:
The Breadth-First Search
5 3 7 2 4 8
RESULT:
Thus, the program for breadth-first search was implemented and executed
successfully.
Ex.No.1b:
IMPLEMENTATION OF BASIC SEARCH STRATEGIES – DFS
AIM:
To implement a Python program for Depth-First Search (DFS).
ALGORITHM:
Step 1: Pick any node. If it is unvisited, mark it as visited and recur on all its adjacent nodes.
Step 2: Repeat until all the nodes are visited, or the node to be searched is found.
Step 3: The dfs function is called with the visited set, the graph in the form of a
dictionary, and the starting node.
1. It first checks whether the current node is unvisited; if so, it is added to the visited set.
2. Then, for each neighbour of the current node, the dfs function is invoked recursively.
3. The base case is reached when all reachable nodes have been visited; the function then returns.
PROGRAM:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = set()
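The recursive dfs routine itself is not reproduced in this manual. A minimal sketch consistent with the algorithm above (the function name dfs and the starting node '5' are assumptions) is:

def dfs(visited, graph, node):
    # visit the node only if it has not been seen before
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        # recur on every neighbour of the current node
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("The Depth-First Search")
dfs(visited, graph, '5')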
OUTPUT:
RESULT:
Thus, the program for depth-first search was implemented and executed successfully.
Ex. No:2a
IMPLEMENTATION OF A* SEARCH ALGORITHM
AIM:
To implement A* search algorithm for finding a shortest path in a graph.
A* SEARCH:
A* search finds the shortest path through a search space to the goal state using a
heuristic function.
It is a goal-directed technique that finds minimal-cost solutions.
The A* algorithm finds the lowest-cost path between the start and goal states, where
moving from one state to another incurs some cost.
STEPS FOR SOLVING A* SEARCH
Given an example graph, find the most cost-effective path from A to G, where A is the source
node and G is the goal node. For every node n, A* evaluates f(n) = g(n) + h(n), where g(n) is
the cost from the start node and h(n) is the heuristic estimate of the cost to the goal.
A → B = g(B) + h(B) = 2 + 6 = 8
A → E = g(E) + h(E) = 3 + 7 = 10
Since the cost for A → B is less, we move forward with this path and compute the
f(x) for the children nodes of B.
Now from B, we can go to point C or G, so we compute f(x) for each of them,
A → B → C = (2 + 1) + 99= 102
A → B → G = (2 + 9 ) + 0 = 11
Here the path A → B → G has the least cost among B's successors, but it is still more than
the cost of A → E, so we explore the path through E further.
Now from E, we can go to point D, so we compute f(x),
A → E → D = (3 + 6) + 1 = 10
Comparing the cost of A → E → D with all the paths we got so far and as this cost is
least of all we move forward with this path.
Now compute the f(x) for the children of D
A → E → D → G = (3 + 6 + 1) +0 = 10
Now comparing all the paths that lead us to the goal, we conclude that A → E → D
→ G is the most cost-effective path to get from A to G.
ALGORITHM:
Step 1: Place the starting node into OPEN and find its f (n) value.
Step 2: Remove the node with the smallest f (n) value from OPEN. If it is a goal node, then
stop and return success.
Step 3: Otherwise, find all the successors of the removed node.
Step 4: Compute the f (n) value of every successor, place the successors into OPEN, and place
the removed node into CLOSE.
Step 5: Go to Step-2.
Step 6: Exit.
PROGRAM:
def aStarAlgo(start_node, stop_node):
    open_set = {start_node}
    closed_set = set()
    g = {}                      # store distance from starting node
    parents = {}                # parents contains an adjacency map of all nodes
    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, i.e. it has no parent nodes,
    # so start_node is set to its own parent node
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # node with the lowest f() value is found
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the open and closed sets are added to open,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    # for each node m, compare its distance from start, i.e. g(m),
                    # with the distance from start through node n
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        # if m is in the closed set, remove it and add it to open
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        # if the current node is the stop_node
        # then we begin reconstructing the path from it to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # remove n from the open set, and add it to the closed set
        # because all of its neighbors were inspected
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None
# define a function to return the neighbors and their distances
# from the passed node
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None
# for simplicity we will consider the heuristic distances as given,
# and this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 5,
        'D': 7,
        'E': 3,
        'F': 6,
        'G': 5,
        'H': 3,
        'I': 1,
        'J': 0
    }
    return H_dist[n]
#Describe your graph here
Graph_nodes = {
'A': [('B', 6), ('F', 3)],
'B': [('A', 6), ('C', 3), ('D', 2)],
'C': [('B', 3), ('D', 1), ('E', 5)],
'D': [('B', 2), ('C', 1), ('E', 8)],
'E': [('C', 5), ('D', 8), ('I', 5), ('J', 5)],
'F': [('A', 3), ('G', 1), ('H', 7)],
'G': [('F', 1), ('I', 3)],
'H': [('F', 7), ('I', 2)],
'I': [('E', 5), ('G', 3), ('H', 2), ('J', 3)],
}
aStarAlgo('A', 'J')
OUTPUT:
Path found: ['A', 'F', 'G', 'I', 'J']
RESULT:
Thus, the program for A* search algorithm was implemented and executed
successfully.
Ex.No:2b
IMPLEMENTATION OF MEMORY BOUNDED A* ALGORITHM
AIM:
To implement memory bounded A* search algorithm for path finding problem.
Memory bounded A* Search:
Memory Bounded A* is a shortest-path algorithm based on the A* algorithm.
Its main advantage is that it uses bounded memory, whereas the A* algorithm may need an
exponential amount of memory. All of its other characteristics are inherited from A*.
This search is an optimal and complete algorithm for finding a least-cost path. Unlike
A*, it will not run out of memory unless the size of the shortest path exceeds the
amount of available memory.
ALGORITHM:
Step 1: Works like A* until memory is full
Step 2: When memory is full, drop the leaf node with the highest f-value (the worst
leaf), keeping track of that worst value in the parent
Step 3: Complete if any solution is reachable
Step 4: Optimal if any optimal solution is reachable
Step 5: Otherwise, returns the best reachable solution
PROGRAM:
#Graph with Dictionary in format of ['Node', Weightage]
nodes = {
'A': [['B', 6], ['F', 3]],
'B': [['A', 6], ['C', 3], ['D', 2]],
'C': [['B', 3], ['D', 1], ['E', 5]],
'D': [['B', 2], ['C', 1], ['E', 8]],
'E': [['C', 5], ['D', 8], ['I', 5], ['J', 5]],
'F': [['A', 3], ['G', 1], ['H', 7]],
'G': [['F', 1], ['I', 3]],
'H': [['F', 7], ['I', 2]],
'I': [['G', 3], ['H', 2], ['E', 5], ['J', 3]],
'J': [['E', 5], ['I', 3]]
}
# Heuristic value for each node (dictionary name assumed; values as in Ex.No:2a)
heuristic = {
    'A': 11,
    'B': 6,
    'C': 5,
    'D': 7,
    'E': 3,
    'F': 6,
    'G': 5,
    'H': 3,
    'I': 1,
    'J': 0
}
closed = closed[::-1]
min = 1000
for i in opened: #To get the minimum weight of the paths
if i[1] < min:
min = i[1]
lens = len(closed)
i=0
while i < lens-1:
nei = []
for j in nodes[closed[i]]:
nei.append(j[0])
if closed[i+1] not in nei:
del closed[i+1]
lens-=1
i+=1
closed = closed[::-1]
return closed, min #Returns shortest path and the weight corresponding to it
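Since the search routine itself is truncated above, the following is a minimal self-contained sketch of a memory-bounded best-first search on the same graph. It is an illustrative simplification, not the full SMA* algorithm: whenever the open list grows beyond a fixed bound, the entry with the worst f-value is dropped. The bound MAX_OPEN, the function name memory_bounded_astar and the reuse of the Ex.No:2a heuristic values are assumptions.

import heapq

graph = {
    'A': [('B', 6), ('F', 3)],
    'B': [('A', 6), ('C', 3), ('D', 2)],
    'C': [('B', 3), ('D', 1), ('E', 5)],
    'D': [('B', 2), ('C', 1), ('E', 8)],
    'E': [('C', 5), ('D', 8), ('I', 5), ('J', 5)],
    'F': [('A', 3), ('G', 1), ('H', 7)],
    'G': [('F', 1), ('I', 3)],
    'H': [('F', 7), ('I', 2)],
    'I': [('E', 5), ('G', 3), ('H', 2), ('J', 3)],
    'J': [('E', 5), ('I', 3)]
}
h = {'A': 11, 'B': 6, 'C': 5, 'D': 7, 'E': 3, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0}

MAX_OPEN = 4   # memory bound on the number of frontier entries (assumed)

def memory_bounded_astar(start, goal):
    # each frontier entry is (f, g, node, path)
    frontier = [(h[start], 0, start, [start])]
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            if neighbour not in path:        # avoid cycles along the current path
                g2 = g + cost
                heapq.heappush(frontier, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
        # memory bound: if the frontier is too large, drop the worst (largest f) entries
        while len(frontier) > MAX_OPEN:
            frontier.remove(max(frontier))
            heapq.heapify(frontier)
    return None, None

print(memory_bounded_astar('A', 'J'))

With these values the sketch finds the path A → F → G → I → J with cost 10, which matches the A* result of Ex.No:2a.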
OUTPUT:
RESULT:
Thus, the program for memory bounded A* search algorithm was implemented and
executed successfully.
Ex.No:3
IMPLEMENTATION OF NAIVE BAYES MODEL
AIM:
To implement a program for Naive Bayes model
NAIVE BAYES CLASSIFIER ALGORITHM
Naive Bayes is one of the simplest and most powerful classification algorithms. It is
based on Bayes' theorem with an assumption of independence among the predictors.
The Naive Bayes classifier assumes that the presence of a feature in a class is
unrelated to the presence of any other feature.
Naive Bayes is a classification algorithm for binary and multi-class classification
problems.
Bayes Theorem
Based on prior knowledge of conditions that may be related to an event, Bayes'
theorem describes the probability of that event.
Assume we have a hypothesis H and evidence E.
According to Bayes' theorem, the relationship between the probability of the
hypothesis before seeing the evidence, P(H), and the probability of the hypothesis
after seeing the evidence, P(H|E), is:
P(H|E) = P(E|H) * P(H) / P(E)
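For example (illustrative values only, not taken from the dataset below): if P(H) = 0.3, P(E|H) = 0.8 and P(E) = 0.5, then P(H|E) = (0.8 × 0.3) / 0.5 = 0.48, so observing the evidence raises the probability of the hypothesis from 0.3 to 0.48.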
ALGORITHM:
Step 1: Handling Data
Data is loaded from the .csv file and split into training and test sets.
Step 2: Summarizing the Data
Summarize the properties of the training data set so that probabilities can be calculated and
predictions made.
Step 3: Making a Prediction
A single prediction is made using the summary of the data set.
Step 4: Making all the Predictions
Generate predictions for the whole test data set using the summarized data.
Step 5: Evaluating Accuracy
Compute the accuracy of the prediction model on the test data set as the percentage of correct
predictions.
Step 6: Tying it all Together
Finally, tie all the steps together to form our own Naive Bayes classifier model.
PROGRAM:
import pandas as pd
msg=pd.read_csv('C:/python/naivetext.csv',names=['message','label'])
print('The dimensions of the dataset',msg.shape)
msg['labelnum']=msg.label.map({'pos':1,'neg':0})
X=msg.message
y=msg.labelnum
print(X)
print(y)
OUTPUT:
The dimensions of the dataset (18, 2)
0 I love this sandwich
1 This is an amazing place
2 I feel very good about these beers
3 This is my best work
4 What an awesome view
5 I do not like this restaurant
6 I am tired of this stuff
7 I can't deal with this
8 He is my sworn enemy
9 My boss is horrible
10 This is an awesome place
11 I do not like the taste of this juice
12 I love to dance
13 I am sick and tired of this place
14 What a great holiday
15 That is a bad locality to stay
16 We will have good fun tomorrow
17 I went to my enemy's house today
Name: message, dtype: object
0 1
1 1
2 1
3 1
4 1
5 0
6 0
7 0
8 0
9 0
10 1
11 0
12 1
13 0
14 1
15 0
16 1
17 0
Name: labelnum, dtype: int64
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest=train_test_split(X,y)
print ('\n The total number of Training Data :',ytrain.shape)
print ('\n The total number of Test Data :',ytest.shape)
OUTPUT:
The total number of Training Data : (13,)
The total number of Test Data : (5,)
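The feature-extraction and classification step that produces the confusion matrix below is not reproduced in this manual. A minimal sketch of it, continuing from the cells above and assuming the usual scikit-learn workflow for this exercise (CountVectorizer for the text features and MultinomialNB as the classifier), is:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, accuracy_score

# convert the text messages into count vectors
count_vect = CountVectorizer()
xtrain_dtm = count_vect.fit_transform(xtrain)
xtest_dtm = count_vect.transform(xtest)

# train the Naive Bayes classifier and predict on the test data
clf = MultinomialNB().fit(xtrain_dtm, ytrain)
predicted = clf.predict(xtest_dtm)

print('Confusion matrix')
print(confusion_matrix(ytest, predicted))
print('Accuracy of the classifier is', accuracy_score(ytest, predicted))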
OUTPUT:
Confusion matrix
[[3 0]
[1 1]]
RESULT:
Thus, the program for Naive Bayes model was implemented and executed
successfully.
Ex.No:4 IMPLEMENTATION OF BAYESIAN NETWORKS
AIM:
To construct a Bayesian network from the heart disease dataset and perform inference using the pgmpy library.
ALGORITHM:
Step 1: Read the heart.csv dataset into a pandas DataFrame and replace the missing values ('?') with NaN.
Step 2: Define the structure of the Bayesian network over the dataset attributes.
Step 3: Fit the conditional probability tables of the model using the Maximum Likelihood Estimator.
Step 4: Perform inference with Variable Elimination, querying the probability of heart disease given evidence such as restecg and cp.
Data set: heart.csv (use Google Colab, upload heart.csv to the drive, and run
"pip install pgmpy" to install pgmpy)
PROGRAM:
import numpy as np
import pandas as pd
import csv
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.models import BayesianModel
from pgmpy.inference import VariableElimination
heartDisease = pd.read_csv('/content/drive/MyDrive/heart.csv')
heartDisease = heartDisease.replace('?',np.nan)
print('Sample instances from the dataset are given below')
print(heartDisease.head())
Sample instances from the dataset are given below
age gender cp trestbps chol fbs restecg thalach exang oldpeak
model = BayesianModel([('age', 'heartdisease'), ('gender', 'heartdisease'),
                       ('exang', 'heartdisease'), ('cp', 'heartdisease'),
                       ('heartdisease', 'restecg'), ('heartdisease', 'chol')])
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)
HeartDiseasetest_infer = VariableElimination(model)
print('\n 1.Probability of HeartDisease given evidence= restecg:2 ')
q1 = HeartDiseasetest_infer.query(variables=['heartdisease'], evidence={'restecg': 2})
print(q1)
print('\n 2.Probability of HeartDisease given evidence= cp:2 ')
q2=HeartDiseasetest_infer.query(variables=['heartdisease'],evidence={'cp':2})
print(q2)
OUTPUT:
RESULT:
Thus, the program to construct a Bayesian network was implemented and executed
successfully.
Ex.No:5a IMPLEMENTATION OF LINEAR REGRESSION MODEL
AIM:
To implement a Python program for the linear regression model.
DEFINITION:
Let us consider a dataset where we have a value of the response y for every feature x.
The task is to find a line that best fits these points so that we can predict the response
for any new feature value (i.e. a value of x not present in the dataset).
This line is called a regression line.
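For a simple regression line y = b_0 + b_1*x, the least-squares estimates are
b_1 = SS_xy / SS_xx and b_0 = mean(y) - b_1*mean(x), where SS_xy is the sum of cross-deviations of x and y and SS_xx is the sum of squared deviations of x. The program below calls a helper estimate_coef that is not reproduced in this manual; a minimal sketch of it, based on these formulas, is:

import numpy as np

def estimate_coef(x, y):
    # number of observations
    n = np.size(x)
    # means of the x and y vectors
    m_x, m_y = np.mean(x), np.mean(y)
    # cross-deviation and deviation about x
    SS_xy = np.sum(y * x) - n * m_y * m_x
    SS_xx = np.sum(x * x) - n * m_x * m_x
    # regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1 * m_x
    return (b_0, b_1)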
ALGORITHM:
Step 1: Import numpy and matplotlib.
Step 2: Define estimate_coef(x, y) to compute the regression coefficients b_0 and b_1 using the least-squares formulas.
Step 3: Define plot_regression_line(x, y, b) to plot the observations and the fitted regression line.
Step 4: In main(), read the observations, estimate the coefficients, print them and plot the regression line.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
def plot_regression_line(x, y, b):
    # plot the actual points as a scatter plot
    plt.scatter(x, y)
    # predicted response vector
    y_pred = b[0] + b[1] * x
    # plot the regression line
    plt.plot(x, y_pred)
    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')
    # function to show plot
    plt.show()
def main():
# observations / data
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
# estimating coefficients
b = estimate_coef(x, y)
print("Estimated coefficients:\nb_0 = {} \
\nb_1 = {}".format(b[0], b[1]))
# plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()
OUTPUT:
RESULT:
Thus, the program to implement linear regression model was implemented and
executed successfully.
Ex.No:5b IMPLEMENTATION OF LOGISTIC REGRESSION MODEL
AIM:
To implement a Python program for the logistic regression model.
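The data-generation step that produces the arrays printed below is not reproduced in this manual. A minimal sketch of a setup consistent with that output (a single informative feature and binary labels; the use of make_classification and all of its parameters are assumptions), together with the imports needed by the later cells, is:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# generate a toy dataset with one informative feature and binary class labels
x, y = make_classification(n_samples=100, n_features=1, n_informative=1,
                           n_redundant=0, n_clusters_per_class=1)
print(x, y)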
OUTPUT:
[[ 0.68072366]
[-0.806672 ]
[-0.25986635]
[-0.96951576]
[-1.55870949]
[-0.71107565]
[ 0.05858082]
[-2.06472972]
[-0.61592043]
[ 1.25423915]
[ 0.81852686]
[-1.65141186]
[-0.5894455 ]
[ 1.02745431]
[-0.32508896]
[-0.53886171]
[ 1.14821234]
[ 0.87538478]
[ 0.95887802]
[ 1.30514551]
[-1.02478688]
[ 0.16563384]
[ 0.77626036]
[-1.00622251]
[-0.55976575]
[ 1.33550038]
[ 1.60327317]
[ 1.82115858]
[-0.68603388]
[ 1.8733355 ]
[-0.52494619]
[-2.03314002]
[ 0.47001797]
[ 1.55400671]
[-1.34062378]
[-0.38624537]
[-1.06339387]
[-1.41465045]
[ 0.58850401]
[ 0.80925135]
[-0.82066568]
[-0.01262654]
[-0.75104194]
[-1.09609801]
[-0.30652093]
[-0.6945338 ]
[-0.90156651]
[-0.96587756]
[ 0.53851931]
[ 0.16533166]
[-1.04609567]
[-1.15065139]
[-0.76739642]
[ 0.83776929]
[ 2.20562241]
[-0.80368921]
[-0.86160904]
[ 0.86032131]
[-0.65752318]
[ 1.81228279]
[-0.81507664]
[ 0.93532773]
[ 1.76874632]
[ 0.32893072]
[ 1.02960085]
[-1.84150254]
[ 0.16156709]
[-1.05944665]
[ 0.28788136]
[-1.05549933]
[ 1.37528673]
[ 1.66369265]
[ 1.71761177]
[ 1.96597594]
[-0.65315492]
[-0.29598263]
[-1.15345006]
[-1.03851861]
[ 1.69109822]
[ 1.92402678]
[-0.89593983]
[-0.58208549]
[-1.18750595]
[-1.06231671]
[-0.79230653]
[ 1.42147278]
[ 1.2887393 ]
[ 1.93706073]
[-1.03110736]
[-1.20543711]
[ 0.79446549]
[ 1.29599432]
[ 0.49396915]
[ 0.63241066]
[ 0.72416825]
[-1.76099355]
[-0.61639759]
[-0.43854548]
[ 1.43886371]
[-0.77167438]] [1 0 1 0 0 0 1 0 1 1 1 0 0 1 1 0 1 1 1 1 0 0 1 0 0 1 1
1 0 1 1 0 1 1 0 0 0
0 1 1 0 1 1 0 1 0 0 0 1 0 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 0 0 0 1 0 1 1
1 1
0 1 0 0 1 1 0 0 0 0 0 1 1 1 0 0 1 1 1 1 1 0 0 0 1 0]
# Split the dataset into training and test dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)
x_train.shape
OUTPUT:
(75, 1)
log_reg=LogisticRegression()
log_reg.fit(x_train, y_train)
y_pred=log_reg.predict(x_test)
confusion_matrix(y_test, y_pred)
OUTPUT:
array([[12, 0],
[ 2, 11]], dtype=int64)
RESULT:
Thus, the program to implement logistic regression model was implemented and executed
successfully.
Ex.No:6a IMPLEMENTATION OF DECISION TREE
AIM:
To implement a Python program to construct a decision tree classifier.
ALGORITHM:
Step 1: Import the balance-scale dataset from the UCI repository.
Step 2: Split the dataset into training and test sets.
Step 3: Train decision tree classifiers using the Gini index and entropy criteria.
Step 4: Predict the class labels for the test set with each classifier.
Step 5: Evaluate each classifier using the confusion matrix, accuracy score and classification report.
PROGRAM:
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
def importdata():
    balance_data = pd.read_csv(
        'https://archive.ics.uci.edu/ml/machine-learning-' +
        'databases/balance-scale/balance-scale.data',
        sep=',', header=None)
    # Printing the dataset shape
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)
    return balance_data
def splitdataset(balance_data):
    # Separating the target variable
    X = balance_data.values[:, 1:5]
    Y = balance_data.values[:, 0]
    # Splitting the dataset into train and test
    X_train, X_test, y_train, y_test = train_test_split(
        X, Y, test_size=0.3, random_state=100)
    return X, Y, X_train, X_test, y_train, y_test
# Function to perform training with the Gini index
def train_using_gini(X_train, X_test, y_train):
    # Creating the classifier object
    clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100,
                                      max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini
# Function to perform training with entropy
def train_using_entropy(X_train, X_test, y_train):
    # Decision tree with entropy
    clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100,
                                         max_depth=3, min_samples_leaf=5)
    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy
# Function to make predictions
def prediction(X_test, clf_object):
    # Prediction on the test set
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred
# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
    print("Confusion Matrix: ", confusion_matrix(y_test, y_pred))
    print("Accuracy : ", accuracy_score(y_test, y_pred) * 100)
    print("Report : ", classification_report(y_test, y_pred))
# Driver code
def main():
    # Building Phase
    data = importdata()
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)
    clf_gini = train_using_gini(X_train, X_test, y_train)
    clf_entropy = train_using_entropy(X_train, X_test, y_train)
    # Operational Phase
    print("Results Using Gini Index:")
    # Prediction using gini
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)
    print("Results Using Entropy:")
    # Prediction using entropy
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)

# Calling the main function
if __name__ == "__main__":
    main()
OUTPUT:
Results Using Entropy:
Predicted values:
['R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L'
 'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L'
 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L'
 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L'
 'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'L' 'R'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'R']
Confusion Matrix: [[ 0 6 7]
[ 0 63 22]
[ 0 20 70]]
Accuracy : 70.74468085106383
Report : precision recall f1-score support
RESULT:
Thus, the program to construct decision tree was implemented and executed successfully.
Ex.No:6b IMPLEMENTATION OF RANDOM FOREST
AIM:
To implement a Python program to construct a random forest classifier.
ALGORITHM:
Step 1: Load the digits dataset from scikit-learn.
Step 2: Store the data in a DataFrame and separate the features from the target.
Step 3: Split the data into training and test sets.
Step 4: Train a RandomForestClassifier on the training set and score it on the test set.
Step 5: Evaluate the predictions with a confusion matrix and visualise it as a heatmap.
PROGRAM:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits

digits = load_digits()
plt.gray()
for i in range(4):
    plt.matshow(digits.images[i])
df = pd.DataFrame(digits.data)
df.head()
df['target'] = digits.target
df[0:12]
X = df.drop('target',axis='columns')
y = df.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=20)
model.fit(X_train, y_train)
model.score(X_test, y_test)
y_predicted = model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_predicted)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sn
plt.figure(figsize=(10,7))
sn.heatmap(cm, annot=True)
plt.xlabel('Predicted')
plt.ylabel('Truth')
RESULT:
Thus, the program to construct random forest was implemented and executed successfully.
Ex.No:7 IMPLEMENTATION OF SVM MODEL
AIM:
To implement a Python program to construct a support vector machine (SVM) classifier.
ALGORITHM:
Step 1: Load the iris dataset from scikit-learn and store it in a DataFrame.
Step 2: Add the target and flower name columns and visualise the data with scatter plots.
Step 3: Split the data into training and test sets.
Step 4: Train an SVC model with a linear kernel on the training set.
Step 5: Evaluate the model score on the test set.
PROGRAM:
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
dir(iris)
iris.feature_names
df=pd.DataFrame(iris.data, columns=iris.feature_names)
df.head()
df['target']=iris.target
df.head()
iris.target_names
df[df.target==2].head()
df['flower_name']=df.target.apply(lambda x:iris.target_names[x])
df.head()
from matplotlib import pyplot as plt
%matplotlib inline
df0=df[df.target==0]
df1=df[df.target==1]
df2=df[df.target==2]
df2.head()
plt.scatter(df1['petal length (cm)'],df1['petal width (cm)'],color='blue',marker='*')
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

x = df.drop(['target', 'flower_name'], axis='columns')
y = df.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
len(x_train)
len(x_test)
model = SVC(kernel='linear')
model.fit(x_train, y_train)
model.score(x_test, y_test)
RESULT:
Thus, the program to construct SVM model was implemented and executed successfully.
Ex.No:8 IMPLEMENTATION OF ENSEMBLING TECHNIQUES(BAGGING)
AIM:
To implement a Python program for the bagging ensemble technique using a decision tree as the base estimator.
ALGORITHM:
Step 1: Load the Pima Indians diabetes dataset and check it for missing values.
Step 2: Separate the features from the diabetes target column and scale the features.
Step 3: Split the data into training and test sets.
Step 4: Evaluate a standalone DecisionTreeClassifier with cross-validation.
Step 5: Train a BaggingClassifier of decision trees and compare its out-of-bag score, test score and cross-validation score.
PROGRAM:
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("C:/python/pima.csv")
df.head()
df.isnull().sum()
df.describe()
df.diabetes.value_counts()
X = df.drop("diabetes",axis="columns")
y = df.diabetes
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_scaled[:3]
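The train/test split that produces X_train, X_test, y_train and y_test, and the cross-validation that produces scores (both used in the cells that follow), are not reproduced in this manual. A minimal sketch of both, continuing from the cells above and assuming a stratified split and a plain DecisionTreeClassifier as the model being scored, is:

from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# stratified split so that both classes keep the same proportion in train and test
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, stratify=y)

# 5-fold cross-validation of a standalone decision tree for comparison
scores = cross_val_score(DecisionTreeClassifier(), X_scaled, y, cv=5)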
X_train.shape
X_test.shape
y_train.value_counts()
201/375
y_test.value_counts()
67/125
scores
scores.mean()
from sklearn.ensemble import BaggingClassifier
bag_model = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(),
    n_estimators=100,
    max_samples=0.8,
    oob_score=True,
    random_state=0
)
bag_model.fit(X_train, y_train)
bag_model.oob_score_
bag_model.score(X_test, y_test)
bag_model = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(),
    n_estimators=100,
    max_samples=0.8,
    oob_score=True,
    random_state=0
)
# cross-validation score of the bagged model
# (the cross_val_score call is not shown in the manual and is assumed here)
scores = cross_val_score(bag_model, X_scaled, y, cv=5)
scores.mean()
RESULT:
Thus, the program to implement ensemble technique such as bagging was implemented and
executed successfully.
Ex.No:9 IMPLEMENTATION OF CLUSTERING ALGORITHMS (K - MEANS)
AIM:
To implement a Python program for the K-means clustering algorithm.
ALGORITHM:
Step 1: Accept the number of clusters to group data into and the dataset to cluster as input
values
Step 2: Initialize the first K clusters: take the first k instances, or take a random sample of
k elements.
Step 3: Calculate the arithmetic mean of each cluster formed in the dataset.
Step 4: K-means assigns each record in the dataset to only one of the initial clusters; each
record is assigned to the nearest cluster using a measure of distance (e.g. Euclidean distance).
Step 5: K-means re-assigns each record in the dataset to the most similar cluster and re-
calculates the arithmetic mean of all the clusters in the dataset.
PROGRAM:
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from matplotlib import pyplot as plt
%matplotlib inline

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df.head()
df['flower'] = iris.target
df.head()
df.head(3)
km = KMeans(n_clusters=3)
yp = km.fit_predict(df)
df['cluster'] = yp
df.head(2)
df.cluster.unique()
df1 = df[df.cluster == 0]
df2 = df[df.cluster == 1]
df3 = df[df.cluster == 2]
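The scatter plot that visualises the three clusters is not reproduced in this manual. A minimal sketch of it, continuing from the cells above (the choice of petal length/width as the plotted columns and the colours are assumptions), is:

plt.scatter(df1['petal length (cm)'], df1['petal width (cm)'], color='green')
plt.scatter(df2['petal length (cm)'], df2['petal width (cm)'], color='red')
plt.scatter(df3['petal length (cm)'], df3['petal width (cm)'], color='black')
plt.xlabel('petal length (cm)')
plt.ylabel('petal width (cm)')
plt.show()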
sse = []
k_rng = range(1, 10)
for k in k_rng:
    km = KMeans(n_clusters=k)
    km.fit(df)
    sse.append(km.inertia_)
plt.xlabel('K')
plt.plot(k_rng, sse)
RESULT:
Thus, the program to implement K means clustering algorithm was implemented and executed
successfully.
Ex. No: 10 EM ALGORITHM FOR BAYESIAN NETWORKS
AIM:
To implement the EM (Expectation-Maximization) algorithm for a simple Bayesian network.
ALGORITHM:
Step 1: Given a set of incomplete data, start with a set of initialized parameters.
Step 2: Expectation step (E-step): In this expectation step, by using the observed available
data of the dataset, try to estimate or guess the values of the missing data. Finally, after this
step, complete data is obtained having no missing values.
Step 3: Maximization step (M-step): Use the complete data, which is prepared in the
expectation step, and update the parameters.
PROGRAM:
import numpy as np
np.random.seed(42)
true_prob_A = 0.6
sample_size = 1000
data_B = np.zeros(sample_size)
for i in range(sample_size):
# E-step: Expectation
# Compute the expected values of hidden variables (none here)
return None
prob_A = np.mean(data_A)
data_B_given_A = data_B[data_A == a]
np.mean(data_B_given_A == 1)]
estimated_prob_A = 0.5
# Perform EM iterations
num_iterations = 10
for i in range(num_iterations):
hidden_vars=expectation_step(data_A,data_B,estimated_prob_A,estimated_prob_B_given_
A)
estimated_prob_A,estimated_prob_B_given_A=maximization_step(data_A,data_B,
hidden_vars)
print(estimated_prob_B_given_A)
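Several parts of the program above are omitted (the generation of data_A and data_B, and the bodies of the expectation and maximization steps). The following is a minimal self-contained sketch of the same idea; the true conditional probabilities of B given A and the exact function bodies are assumptions, and only the overall E-step/M-step structure follows the fragment above.

import numpy as np

np.random.seed(42)

# "True" parameters used to simulate the data
true_prob_A = 0.6
true_prob_B_given_A = [0.3, 0.7]   # P(B=1 | A=0), P(B=1 | A=1)  (assumed values)
sample_size = 1000

# Simulate the observed data for the two-node network A -> B
data_A = np.random.binomial(1, true_prob_A, sample_size)
data_B = np.array([np.random.binomial(1, true_prob_B_given_A[a]) for a in data_A])

def expectation_step(data_A, data_B, prob_A, prob_B_given_A):
    # E-step: with fully observed data there are no hidden variables to estimate
    return None

def maximization_step(data_A, data_B, hidden_vars):
    # M-step: re-estimate the parameters from the (complete) data
    prob_A = np.mean(data_A)
    prob_B_given_A = []
    for a in (0, 1):
        data_B_given_A = data_B[data_A == a]
        prob_B_given_A.append([np.mean(data_B_given_A == 0),
                               np.mean(data_B_given_A == 1)])
    return prob_A, np.array(prob_B_given_A)

# Initial guesses for the parameters
estimated_prob_A = 0.5
estimated_prob_B_given_A = np.array([[0.5, 0.5], [0.5, 0.5]])

# Perform EM iterations
num_iterations = 10
for i in range(num_iterations):
    hidden_vars = expectation_step(data_A, data_B, estimated_prob_A, estimated_prob_B_given_A)
    estimated_prob_A, estimated_prob_B_given_A = maximization_step(data_A, data_B, hidden_vars)

print('Estimated probability of A:', estimated_prob_A)
print('Estimated conditional probability of B given A:')
print(estimated_prob_B_given_A)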
OUTPUT:
Estimated probability of A: 0.579
Estimated conditional probability of B given A:
[[0. 0. ]
[0.2970639 0.7029361]]
RESULT:
Thus, the program to implement the EM algorithm for Bayesian Network was implemented
and executed successfully.
Ex.No:11 BUILDING SIMPLE NEURAL NETWORK
AIM:
To implement a Python program to build a simple neural network.
ALGORITHM:
Step 1: Define the input features, the expected outputs and the initial weights of the network.
Step 2: Train the network on the input values and obtain the output for the given dataset.
Step 3: Test the network on new input using the weights learned from the given dataset.
Step 4: Use the forward pass to compute the output and the backward pass to update the weights.
PROGRAM:
# importing dependencies
import numpy as np

# sigmoid activation function
def activation(x):
    return 1 / (1 + np.exp(-x))
for i in range(15000):
# forward pass
output = activation(dot_product)
# backward pass.
# 0.5 is the learning rate.
# OR of 1, 0 is 1
print(test_output)
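Most of the training loop is omitted above. The surviving comments ("OR of 1, 0 is 1", the 0.5 learning rate and the 15000 iterations) suggest a single neuron trained on the OR truth table, so the following is a minimal self-contained sketch along those lines; the weight initialisation and the exact update rule are assumptions.

import numpy as np

def activation(x):
    return 1 / (1 + np.exp(-x))

# OR truth table: inputs and expected outputs
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
expected_output = np.array([[0], [1], [1], [1]])

np.random.seed(1)
weights = np.random.rand(2, 1)   # one weight per input (assumed initialisation)
bias = np.random.rand(1)
lr = 0.5                         # 0.5 is the learning rate

for i in range(15000):
    # forward pass
    dot_product = np.dot(inputs, weights) + bias
    output = activation(dot_product)
    # backward pass: gradient of the squared error through the sigmoid
    error = output - expected_output
    d_output = error * output * (1 - output)
    weights -= lr * np.dot(inputs.T, d_output)
    bias -= lr * np.sum(d_output)

# OR of 1, 0 is 1
test_input = np.array([1, 0])
test_output = activation(np.dot(test_input, weights) + bias)
print(test_output)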
OUTPUT:
RESULT:
Thus, the program to build a simple Neural Network model was implemented and executed
successfully.
Ex.No:12 BUILDING DEEP LEARNING NEURAL NETWORK MODEL
AIM:
To implement a Python program to build a deep learning neural network model using Keras.
ALGORITHM:
Step 1: Load the data from the file 'C:/python/pima-indians-diabetes.csv' using loadtxt with
delimiter ','.
Step 2: Split the given dataset into input (X) and output (y) variables.
Step 3: Define the Keras Sequential model with dense layers.
Step 4: Compile and fit the model on the dataset.
Step 5: Evaluate the model and make predictions.
PROGRAM:
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load the dataset and split it into input (X) and output (y) variables
dataset = loadtxt('C:/python/pima-indians-diabetes.csv', delimiter=',')
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
_, accuracy = model.evaluate(X, y)
_, accuracy = model.evaluate(X, y, verbose=0)
# round predictions
predictions = (model.predict(X) > 0.5).astype(int)
X = dataset[:,0:8]
y = dataset[:,8]
model = Sequential()
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
for i in range(5):
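The compile, fit and prediction-printing steps are omitted from the listings above. A minimal self-contained sketch of the complete workflow, following the standard Keras tutorial for this dataset (the epoch count, batch size and the 12-unit first layer are assumed hyper-parameters), is:

from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense

# load the dataset and split it into input (X) and output (y) variables
dataset = loadtxt('C:/python/pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]

# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile and fit the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=150, batch_size=10, verbose=0)

# evaluate the model and make class predictions
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy * 100))
predictions = (model.predict(X) > 0.5).astype(int)

# summarise the first five cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i][0], y[i]))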
RESULT:
Thus, the program to build a deep Neural Network model was implemented and executed
successfully.