
CS3491–ARTIFICIAL INTELLIGENCE

AND MACHINE LEARNING LAB

REGULATION -2021
III ECE- VI SEMESTER

LAB MANUAL

Academic Year: 2024-2025


SUBJECT CODE : CS3491
SUBJECT NAME : ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING LAB
YEAR : III
DEPARTMENT : ECE
SEMESTER : VI
REGULATION : 2021
PREPARED BY :
VERIFIED AND :
APPROVED BY

STUDENT NAME:

REGISTER NUMBER:

YEAR / DEPT/SEMESTER:
TITLE OF THE EXPERIMENT
1. Implementation of Uninformed search algorithms (BFS,
DFS)
2. Implementation of Informed search algorithms (A*,
memory-bounded A*)
3. Implement naïve Bayes models
4. Implement Bayesian Networks
5. Build Regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models

III/VI SEM
S.No Date Experiment Name Marks Signature
EX.NO:1a
Implementation of uninformed search algorithm(BFS, DFS)
DATE:

AIM:
To implement the uninformed search algorithm BFS using Python.

Programming Steps:
1. Start by putting any one of the graph's vertices at the back of the queue.
2. Take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add those that are not in the
visited list to the rear of the queue.
4. Repeat steps 2 and 3 until the queue is empty.

Program:
# Python3 Program to print BFS traversal
# from a given source vertex. BFS(int s)
# traverses vertices reachable from s.
from collections import defaultdict

# This class represents a directed graph


# using adjacency list representation
class Graph:

# Constructor
def __init__(self):

# default dictionary to store graph


self.graph = defaultdict(list)

# function to add an edge to graph


def addEdge(self,u,v):
self.graph[u].append(v)

# Function to print a BFS of graph


def BFS(self, s):

# Mark all the vertices as not visited


visited = [False] * (len(self.graph))

# Create a queue for BFS


queue = []

# Mark the source node as


# visited and enqueue it
queue.append(s)
visited[s] = True
while queue:

# Dequeue a vertex from


# queue and print it
s = queue.pop(0)
print (s, end = " ")

# Get all adjacent vertices of the


# dequeued vertex s. If a adjacent
# has not been visited, then mark it
# visited and enqueue it
for i in self.graph[s]:
if visited[i] == False:
queue.append(i)
visited[i] = True

# Driver code

# Create a graph given in


# the above diagram
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

print ("Following is Breadth First Traversal" " (starting from vertex 2)")
g.BFS(2)

OUTPUT:
Following is Breadth First Traversal (starting from vertex 2)
2 0 3 1
Ex.no:1b
Implementation of uninformed search algorithm (DFS)
Date:

Aim:
To implement the uninformed search algorithm DFS using Python.

Programming Steps:
1. Start by putting any one of the graph's vertices on top of the stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones that are not in the
visited list to the top of the stack.
4. Repeat steps 2 and 3 until the stack is empty.

Program:
# Python program to print DFS traversal for complete graph
from collections import defaultdict

# This class represents a directed graph using adjacency


# list representation
class Graph:

# Constructor
def __init__(self):

# default dictionary to store graph


self.graph = defaultdict(list)

# function to add an edge to graph


def addEdge(self,u,v):
self.graph[u].append(v)

# A function used by DFS


def DFSUtil(self, v, visited):

# Mark the current node as visited and print it


visited[v]= True
print(v, end=" ")

# Recur for all the vertices adjacent to


# this vertex
for i in self.graph[v]:
if visited[i] == False:
self.DFSUtil(i, visited)
# The function to do DFS traversal. It uses
# recursive DFSUtil()
def DFS(self):
V = len(self.graph) #total vertices

# Mark all the vertices as not visited


visited =[False]*(V)

# Call the recursive helper function to print


# DFS traversal starting from all vertices one
# by one
for i in range(V):
if visited[i] == False:
self.DFSUtil(i, visited)

# Driver code
# Create a graph given in the above diagram
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

print "Following is Depth First Traversal"


g.DFS()

Output:
Following is Depth First Traversal
0 1 2 3
Ex.No:2

Date:

Implementation of informed search algorithms (A*, memory bounded)


Aim:
To implement and demonstrate the FIND-S algorithm for finding the most specific
hypothesis based on a given set of training data samples. Read the training data from a
.CSV file
Programming Steps:
Find-s Algorithm:
1. Load Data set
2. Initialize h to the most specific hypothesis in H
3. For each positive training instance x
• For each attribute constraint ai in h
If the constraint ai in h is satisfied by x then do nothing
else replace ai in h by the next more general constraint that is satisfied by x
4. Output hypothesis h

Program:
import random
import csv
def read_data(filename):
with open(filename, 'r') as csvfile:
datareader = csv.reader(csvfile, delimiter=',')
traindata = []
for row in datareader:
traindata.append(row)
return (traindata)
h=['phi','phi','phi','phi','phi','phi']
data=read_data('finds.csv')
def isConsistent(h,d):
if len(h)!=len(d)-1:
print('Number of attributes are not same in hypothesis.')
return False
else:
matched=0
for i in range(len(h)):
if ( (h[i]==d[i]) | (h[i]=='any') ):
matched=matched+1
if matched==len(h):
return True
else:
return False
def makeConsistent(h,d):
for i in range(len(h)):
if((h[i] == 'phi')):
h[i]=d[i]
elif(h[i]!=d[i]):
h[i]='any'
return h
print('Begin : Hypothesis :',h)
print('==========================================')
for d in data:
if d[len(d)-1]=='Yes':
if ( isConsistent(h,d)):
pass
else:
h=makeConsistent(h,d)
print ('Training data :',d)
print ('Updated Hypothesis :',h)
print()
print(' ')
print('==========================================')
print('maximally specific data set End: Hypothesis :',h)
Output:
Begin : Hypothesis : ['phi', 'phi', 'phi', 'phi', 'phi', 'phi']
==========================================
Training data : ['Cloudy', 'Cold', 'High', 'Strong', 'Warm', 'Change', 'Yes']
Updated Hypothesis : ['Cloudy', 'Cold', 'High', 'Strong', 'Warm', 'Change']

Training data : ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']


Updated Hypothesis : ['any', 'any', 'any', 'Strong', 'Warm', 'any']

Training data : ['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same', 'Yes']


Updated Hypothesis : ['any', 'any', 'any', 'Strong', 'Warm', 'any']

Training data : ['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change', 'Yes']


Updated Hypothesis : ['any', 'any', 'any', 'Strong', 'any', 'any']

Training data : ['Overcast', 'Cool', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']


Updated Hypothesis : ['any', 'any', 'any', 'Strong', 'any', 'any']

==========================================
maximally specific data set End: Hypothesis : ['any', 'any', 'any', 'Strong', 'any', 'any']
OR
import csv
def loadCsv(filename):
lines = csv.reader(open(filename, "r"))
dataset = list(lines)
for i in range(len(dataset)):
dataset[i] = dataset[i]
return dataset
attributes = ['Sky','Temp','Humidity','Wind','Water','Forecast']
print('Attributes =',attributes)
num_attributes = len(attributes)
filename = "finds.csv"
dataset = loadCsv(filename)
print(dataset)
hypothesis=['0'] * num_attributes
print("Intial Hypothesis")
print(hypothesis)
print("The Hypothesis are")
for i in range(len(dataset)):
target = dataset[i][-1]
if(target == 'Yes'):
for j in range(num_attributes):
if(hypothesis[j]=='0'):
hypothesis[j] = dataset[i][j]
if(hypothesis[j]!= dataset[i][j]):
hypothesis[j]='?'
print(i+1,'=',hypothesis)
print("Final Hypothesis")
print(hypothesis)

Output:

Attributes = ['Sky', 'Temp', 'Humidity', 'Wind', 'Water', 'Forecast']


[['sky', 'Airtemp', 'Humidity', 'Wind', 'Water', 'Forecast', 'WaterSport'],
['Cloudy', 'Cold', 'High', 'Strong', 'Warm', 'Change', 'Yes'],
['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes'],
['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same', 'Yes'],
['Cloudy', 'Cold', 'High', 'Strong', 'Warm', 'Change', 'No'],
['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change', 'Yes'],
['Rain', 'Mild', 'High', 'Weak', 'Cool', 'Change', 'No'],
['Rain', 'Cool', 'Normal', 'Weak', 'Cool', 'Same', 'No'],
['Overcast', 'Cool', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']]
Initial Hypothesis
['0', '0', '0', '0', '0', '0']
The Hypothesis are
2 = ['Cloudy', 'Cold', 'High', 'Strong', 'Warm', 'Change']
3 = ['?', '?', '?', 'Strong', 'Warm', '?']
4 = ['?', '?', '?', 'Strong', 'Warm', '?']
6 = ['?', '?', '?', 'Strong', '?', '?']
9 = ['?', '?', '?', 'Strong', '?', '?']
Final Hypothesis
['?', '?', '?', 'Strong', '?', '?']
Ex.No:3

Date:

Implement naïve Bayes models


Aim:
Write a program to implement the naïve Bayesian classifier for a sample
training data set stored as a .CSV file. Compute the accuracy of the classifier,
considering few test data sets.

Programming Steps:
Step 1: Convert the data set into a frequency table
Step 2: Create Likelihood table by finding the probabilities like Overcast probability =
0.29 and probability of playing is 0.64.
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each
class. The class with the highest posterior probability is the outcome of prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using the posterior-probability method discussed above.
P(Yes | Sunny) = P( Sunny | Yes) * P(Yes) / P (Sunny)
Here we have P (Sunny |Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P( Yes)= 9/14 = 0.64
Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability.
Naive Bayes uses the same method to predict the probability of each class based on the
various attributes. This algorithm is mostly used in text classification and with problems
having multiple classes.
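To make the arithmetic concrete, here is a minimal sketch of the calculation above; the counts are taken from the standard 14-row play-tennis table assumed in the example:
# Sketch of the worked posterior calculation
p_sunny_given_yes = 3 / 9      # P(Sunny | Yes)
p_yes = 9 / 14                 # P(Yes)
p_sunny = 5 / 14               # P(Sunny)
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))   # ~0.6, so "play" is the more likely class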
Example: Continuous-valued Features
Temperature is naturally of continuous value.
Yes: 25.2, 19.3, 18.5, 21.7, 20.1, 24.3, 22.8, 23.1, 19.8
No: 27.3, 30.1, 17.4, 29.5, 15.1
Estimate mean and variance for each class
Learning Phase: output two Gaussian models for P(temp|C)
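A minimal sketch of this learning phase, assuming the temperature readings listed above: estimate the per-class mean and variance, then score a new reading with the Gaussian density.
import math

yes = [25.2, 19.3, 18.5, 21.7, 20.1, 24.3, 22.8, 23.1, 19.8]
no = [27.3, 30.1, 17.4, 29.5, 15.1]

def gaussian_params(values):
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / (len(values) - 1)  # sample variance
    return mu, var

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu_yes, var_yes = gaussian_params(yes)
mu_no, var_no = gaussian_params(no)
print(mu_yes, var_yes)                       # learned Gaussian model for P(temp | Yes)
print(gaussian_pdf(22.0, mu_yes, var_yes))   # likelihood of a new reading under class Yes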

Program:
import csv
import random
import math
def loadCsv(filename):
lines = csv.reader(open(filename, "r"))
dataset = list(lines)
for i in range(len(dataset)):
dataset[i] = [float(x) for x in dataset[i]]
return dataset
def splitDataset(dataset, splitRatio):
trainSize = int(len(dataset) * splitRatio)
trainSet = []
copy = list(dataset)
while len(trainSet) < trainSize:
index = random.randrange(len(copy))
trainSet.append(copy.pop(index))
return [trainSet, copy]
def separateByClass(dataset):
separated = {}
for i in range(len(dataset)):
vector = dataset[i]
if (vector[-1] not in separated):
separated[vector[-1]] = []
separated[vector[-1]].append(vector)
return separated
def mean(numbers):
return sum(numbers)/float(len(numbers))
def stdev(numbers):
avg = mean(numbers)
variance = sum([pow(x-avg,2) for x in numbers])/float(len(numbers)-1)
return math.sqrt(variance)
def summarize(dataset):
summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
del summaries[-1]
return summaries
def summarizeByClass(dataset):
separated = separateByClass(dataset)
summaries = {}
for classValue, instances in separated.items():
summaries[classValue] = summarize(instances)
return summaries
def calculateProbability(x, mean, stdev):
exponent = math.exp(-(math.pow(x-mean,2)/(2*math.pow(stdev,2))))
return (1 / (math.sqrt(2*math.pi) * stdev)) * exponent
def calculateClassProbabilities(summaries, inputVector):
probabilities = {}
for classValue, classSummaries in summaries.items():
probabilities[classValue] = 1
for i in range(len(classSummaries)):
mean, stdev = classSummaries[i]
x = inputVector[i]
probabilities[classValue] *= calculateProbability(x, mean, stdev)
return probabilities
def predict(summaries, inputVector):
probabilities = calculateClassProbabilities(summaries, inputVector)
bestLabel, bestProb = None, -1
for classValue, probability in probabilities.items():
if bestLabel is None or probability > bestProb:
bestProb = probability
bestLabel = classValue
return bestLabel
def getPredictions(summaries, testSet):
predictions = []
for i in range(len(testSet)):
result = predict(summaries, testSet[i])
predictions.append(result)
return predictions
def getAccuracy(testSet, predictions):
correct = 0
for i in range(len(testSet)):
if testSet[i][-1] == predictions[i]:
correct += 1
return (correct/float(len(testSet))) * 100.0
def main():
filename = 'data.csv'
splitRatio = 0.67
dataset = loadCsv(filename)
trainingSet, testSet = splitDataset(dataset, splitRatio)
print('Split {0} rows into train={1} and test={2} rows'.format(len(dataset),
len(trainingSet), len(testSet)))
# prepare model
summaries = summarizeByClass(trainingSet)
# test model
predictions = getPredictions(summaries, testSet)
accuracy = getAccuracy(testSet, predictions)
print('Accuracy: {0}%'.format(accuracy))
main()
OUTPUT :
Split 306 rows into train=205 and test=101 rows
Accuracy: 72.27722772277228%
Ex.No:4

Date:
Implement Bayesian Networks
Aim:
Write a program to construct a Bayesian network considering medical data.
Use this model to demonstrate the diagnosis of heart patients using standard Heart
Disease Data Set.
Programming Steps:
1. First, identify the main variables in the problem to be solved. Each variable
corresponds to a node of the network. It is important to choose the number of states
for each variable; for instance, there are usually two states (true or false).
2. Second, define the structure of the network, that is, the causal relationships between
all the variables (nodes).
3. Third, define the probability rules (conditional probability tables) governing the
relationships between the variables (see the sketch below).
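Before the heart-disease program, here is a minimal pgmpy sketch of step 3 on a hypothetical two-node network; the node names and probabilities are illustrative only.
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD

# A toy network: Disease -> Test (hypothetical probabilities)
toy = BayesianModel([('Disease', 'Test')])
cpd_disease = TabularCPD('Disease', 2, [[0.99], [0.01]])
cpd_test = TabularCPD('Test', 2,
                      [[0.95, 0.10],    # P(Test=0 | Disease=0), P(Test=0 | Disease=1)
                       [0.05, 0.90]],   # P(Test=1 | Disease=0), P(Test=1 | Disease=1)
                      evidence=['Disease'], evidence_card=[2])
toy.add_cpds(cpd_disease, cpd_test)
print(toy.check_model())   # True if the CPDs are consistent with the structure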
Program:
import numpy as np
from urllib.request import urlopen
import urllib
import pandas as pd
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak',
'slope', 'ca', 'thal', 'heartdisease']
heartDisease = pd.read_csv('heart.csv', names = names)
heartDisease = heartDisease.replace('?', np.nan)
model = BayesianModel([('age', 'trestbps'), ('age', 'fbs'), ('sex', 'trestbps'), ('exang',
'trestbps'),('trestbps','heartdisease'),('fbs','heartdisease'),('heartdisease','restecg'),
('heartdisease','thalach'), ('heartdisease','chol')])
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)
from pgmpy.inference import VariableElimination
HeartDisease_infer = VariableElimination(model)
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 37, 'sex'
:0})
print(q['heartdisease'])
OUTPUT:
╒════════════════╤═════════════════════╕
│ heartdisease   │   phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │              0.5593 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │              0.4407 │
╘════════════════╧═════════════════════╛
Ex.No:5

Date:
Build Regression models
Aim:

To implement linear regression using python.


Programming steps:
Let’s start with the simplest case, which is simple linear regression. There are five basic
steps when you’re implementing linear regression:
1. Import the packages and classes you need.
2. Provide data to work with and eventually do appropriate transformations.
3. Create a regression model and fit it with existing data.
4. Check the results of model fitting to know whether the model is satisfactory.
5. Apply the model for predictions
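The program below codes the least-squares formulas by hand. As a reference point, the same five steps can also be carried out with scikit-learn; this is only a sketch using the same sample points, not part of the prescribed program.
import numpy as np
from sklearn.linear_model import LinearRegression

# Step 2: sample data (same points as the program below)
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]).reshape(-1, 1)
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

# Step 3: create the model and fit it
model = LinearRegression().fit(x, y)

# Step 4: inspect the fit
print(model.intercept_, model.coef_[0], model.score(x, y))

# Step 5: apply the model for predictions
print(model.predict(np.array([[10], [11]])))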

Program:

import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)
    # mean of x and y vector
    m_x, m_y = np.mean(x), np.mean(y)
    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x
    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x
    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as scatter plot
    plt.scatter(x, y, color = "m", marker = "o", s = 30)
    # predicted response vector
    y_pred = b[0] + b[1]*x
    # plotting the regression line
    plt.plot(x, y_pred, color = "g")
    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')
    # function to show plot
    plt.show()

def main():
    # observations
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {} \nb_1 = {}".format(b[0], b[1]))
    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()
OUTPUT:
Estimated coefficients: b_0 = -0.05862068965
Ex.No:6

Date:
Build decision trees and random forests

Aim:
To Write a program to demonstrate the working of the decision tree based algorithm.
Use an appropriate data set for building the decision tree and apply this knowledge to
classify a new sample.

Programming Steps:
Step 1: Create a root node.
Entropy (entropy of the whole data set):
Entropy(S) = -(p/(p+n))*log2(p/(p+n)) - (n/(p+n))*log2(n/(p+n))
p - number of positive examples
n - number of negative examples
Step 2: For every attribute/feature, compute the average information of that attribute:
I(Attribute) = Sum over values of ((pi+ni)/(p+n)) * Entropy(Attribute value)
pi - number of positive examples for that attribute value
ni - number of negative examples for that attribute value
Entropy(Attribute value) is calculated in the same way as for the whole data set.
Step 3: Information gain
Gain = Entropy(S) - I(Attribute)
If all examples are positive, return the single-node tree with label = +.
If all examples are negative, return the single-node tree with label = -.
If the attribute set is empty, return the single-node tree.
Step 4: Pick the highest-gain attribute.
The attribute with the most information gain becomes a node; group the examples by its
values and process each group in the same way as the parent (root) node. Again, the
feature with the maximum information gain becomes the next node, and this process
continues until a leaf node is reached.
Step 5: Repeat until the final (leaf) node is obtained (see the worked example below).
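A small worked example of steps 1-3, using the classic play-tennis counts (9 positive, 5 negative, with the Outlook attribute) as an assumed illustration:
import math

def entropy(p, n):
    # Entropy of a set with p positive and n negative examples
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:
            q = count / total
            result -= q * math.log2(q)
    return result

# Hypothetical counts: 9 positive / 5 negative overall; Outlook splits them into
# Sunny (2+, 3-), Overcast (4+, 0-), Rain (3+, 2-)
S = entropy(9, 5)
avg_info = (5/14)*entropy(2, 3) + (4/14)*entropy(4, 0) + (5/14)*entropy(3, 2)
gain = S - avg_info
print(round(S, 3), round(avg_info, 3), round(gain, 3))   # ~0.940, 0.694, 0.247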

Program:
import pandas as pd
from pandas import DataFrame
#Reading Dataset
df_tennis = pd.read_csv('DS.csv')
print( df_tennis)

output:
#Function to calculate final Entropy
def entropy(probs):
import math
return sum( [-prob*math.log(prob, 2) for prob in probs] )
#Function to calculate Probabilities of positive and negative examples
def entropy_of_list(a_list):
from collections import Counter
cnt = Counter(x for x in a_list) #Count the positive and negative ex
num_instances = len(a_list)
#Calculate the probabilities that we required for our entropy formula
probs = [x / num_instances for x in cnt.values()]
#Calling entropy function for final entropy
return entropy(probs)
total_entropy = entropy_of_list(df_tennis['PT'])
print("\n Total Entropy of PlayTennis Data Set:",total_entropy)
#Defining Information Gain Function
def information_gain(df, split_attribute_name, target_attribute_name, trace=0):
print("Information Gain Calculation of ",split_attribute_name)
print("target_attribute_name",target_attribute_name)
#Grouping features of Current Attribute
df_split = df.groupby(split_attribute_name)
for name,group in df_split:
print("Name: ",name)
print("Group: ",group)
nobs = len(df.index) * 1.0
print("NOBS",nobs)

#Calculating Entropy of the Attribute and probability part of formula


df_agg_ent = df_split.agg({target_attribute_name : [entropy_of_list, lambda x:
len(x)/nobs] })[target_attribute_name]
df_agg_ent.columns = ['Entropy', 'Prob1']   # name the aggregated columns used below
print("df_agg_ent",df_agg_ent)
# Calculate Information Gain
avg_info = sum( df_agg_ent['Entropy'] * df_agg_ent['Prob1'] )
old_entropy = entropy_of_list(df[target_attribute_name])
return old_entropy - avg_info
print('Info-gain for Outlook is :'+str(information_gain(df_tennis, 'Outlook', 'PT')),"\n")
output:
Ex.No:7

Date:
Build SVM models
Aim:
To build a Support Vector Machine (SVM) classification model for a sample dataset using Python and evaluate its accuracy.
Programming Steps:
1. In the multidimensional plane, each data point in the dataset is treated as a vector.
The SVM algorithm first finds the maximum margin using the two closest opposing
vectors.
2. The maximum-margin distance is then divided into two equal parts in order to
find the hyperplane.
3. The hyperplane is equidistant from the selected opposing vectors, hence these
vectors are called support vectors.
4. Since the algorithm is completely determined by the support vectors, it is named
the Support Vector Machine (see the sketch below).
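A minimal sketch showing the support vectors picked by a linear SVM on a hypothetical toy dataset; the full program for the apples/oranges data follows.
import numpy as np
from sklearn.svm import SVC

# Toy 2-D points: two separable groups (hypothetical data)
X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear').fit(X, y)
# The closest opposing points chosen by the algorithm (step 3)
print(clf.support_vectors_)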
Program:

Importing the dataset

import pandas as pd
data = pd.read_csv("apples_and_oranges.csv")

Splitting the dataset into training and test samples

from sklearn.model_selection import train_test_split
training_set, test_set = train_test_split(data, test_size = 0.2, random_state = 1)

Classifying the predictors and target

X_train = training_set.iloc[:,0:2].values
Y_train = training_set.iloc[:,2].values
X_test = test_set.iloc[:,0:2].values
Y_test = test_set.iloc[:,2].values

Initializing Support Vector Machine and fitting the training


data
from sklearn.svm import SVC
classifier = SVC(kernel='rbf', random_state = 1)
classifier.fit(X_train,Y_train)

Predicting the classes for test set

Y_pred = classifier.predict(X_test)

Attaching the predictions to test set for comparing

test_set["Predictions"] = Y_pred

Comparing the actual classes and predictions

Let’s have a look at the test_set:
Comparing the ‘Class’ and ‘Predictions’ columns, we find that only one of the 8 predictions has gone wrong.

Calculating the accuracy of the predictions

We will calculate the accuracy using the confusion matrix as


follows :

from sklearn.metrics import confusion_matrix


cm = confusion_matrix(Y_test,Y_pred)
accuracy =
float(cm.diagonal().sum())/len(Y_test)
print("\nAccuracy Of SVM For The Given Dataset :
", accuracy)

Output:

Accuracy Of SVM For The Given Dataset : 0.875

Visualizing the classifier

Before we visualize, we need to encode the classes ‘apple’ and ‘orange’ into numerical
values. We can achieve that using the LabelEncoder.

from sklearn.preprocessing import LabelEncoder


le = LabelEncoder()
Y_train = le.fit_transform(Y_train)

After encoding , fit the encoded data to the SVM

from sklearn.svm import SVC


classifier = SVC(kernel='rbf', random_state = 1)
classifier.fit(X_train,Y_train)

Let’s Visualize!

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
plt.figure(figsize = (7,7))
X_set, y_set = X_train, Y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:,
0].min() - 1, stop = X_set[:, 0].max() + 1, step
= 0.01), np.arange(start = X_set[:, 1].min() -
1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2,
classifier.predict(np.array([X1.ravel(),
X2.ravel()]).T).reshape(X1.shape), alpha = 0.75,
cmap = ListedColormap(('black', 'white')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set ==
j, 1], c = ListedColormap(('red', 'orange'))(i),
label = j)
plt.title('Apples Vs Oranges')
plt.xlabel('Weight In Grams')
plt.ylabel('Size in cm')
plt.legend()
plt.show()

Output :
Ex.No:8a

Date:
Implement ensembling techniques

Aim:

To Implement ensembling techniques- Stacking

Programming Steps:

1. Split the train dataset into n parts.
2. A base model (say, linear regression) is fitted on n-1 parts and predictions are made
for the nth part. This is done for each of the n parts of the train set.
3. The base model is then fitted on the whole train dataset.
4. This model is used to predict the test dataset.
5. Steps 2 to 4 are repeated for another base model, which results in another set of
predictions for the train and test datasets.
6. The predictions on the train dataset are used as features to build the new
(second-level) model.
7. This final model is used to make the predictions on the test dataset (see the sketch below).
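For readers without the vecstack library, the same procedure can be sketched with scikit-learn alone; the data here is synthetic and the names are illustrative. The prescribed program using vecstack follows.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=8, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

base_models = [LinearRegression(), RandomForestRegressor(random_state=0)]

# Steps 2-5: out-of-fold predictions on the train set, full-fit predictions on the test set
s_train = np.column_stack([cross_val_predict(m, X_train, y_train, cv=5) for m in base_models])
s_test = np.column_stack([m.fit(X_train, y_train).predict(X_test) for m in base_models])

# Steps 6-7: a second-level model learns from the base-model predictions
meta = LinearRegression().fit(s_train, y_train)
print(mean_squared_error(y_test, meta.predict(s_test)))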
Program:

# importing utility modules


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# importing machine learning models for prediction


from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression

# importing stacking lib


from vecstack import stacking

# loading train data set in dataframe from train_data.csv file


df = pd.read_csv("train_data.csv")

# getting target data from the dataframe


target = df["target"]

# getting train data from the dataframe


train = df.drop("target", axis=1)

# Splitting between train data into training and validation dataset


X_train, X_test, y_train, y_test = train_test_split(
train, target, test_size=0.20)
# initializing all the base model objects with default parameters
model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

# putting all base model objects in one list


all_models = [model_1, model_2, model_3]

# computing the stack features


s_train, s_test = stacking(all_models, X_train, y_train, X_test,
regression=True, n_folds=4)

# initializing the second-level model


final_model = model_1

# fitting the second level model with stack features


final_model = final_model.fit(s_train, y_train)

# predicting the final output using stacking


pred_final = final_model.predict(X_test)

# printing the mean squared error between real value and predicted value
print(mean_squared_error(y_test, pred_final))

Output:
4510
Ex.No:8b

Date:
Implement ensembling techniques

Aim:

To Implement ensembling techniques- Bagging

Programming Steps:

1. Create multiple datasets from the train dataset by selecting observations with
replacement.
2. Run a base model on each of the created datasets (for example, the Alcohol QCM
Sensor Dataset, https://archive.ics.uci.edu/ml/datasets/Alcohol+QCM+Sensor+Dataset)
independently.
3. Combine the predictions of all the base models to get the final output (see the sketch below).
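A small from-scratch sketch of these three steps on synthetic data (bootstrap sampling plus averaging); the prescribed program below uses sklearn's BaggingRegressor instead.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=400, n_features=6, noise=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

rng = np.random.default_rng(1)
preds = []
for _ in range(25):
    # Step 1: bootstrap sample (selection with replacement)
    idx = rng.integers(0, len(X_train), len(X_train))
    # Step 2: fit a base model on the sampled dataset
    tree = DecisionTreeRegressor(random_state=0).fit(X_train[idx], y_train[idx])
    preds.append(tree.predict(X_test))

# Step 3: combine the base-model predictions (average for regression)
print(mean_squared_error(y_test, np.mean(preds, axis=0)))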

Program:

# importing utility modules


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# importing machine learning models for prediction


import xgboost as xgb

# importing bagging module


from sklearn.ensemble import BaggingRegressor

# loading train data set in dataframe from train_data.csv file


df = pd.read_csv("train_data.csv")

# getting target data from the dataframe


target = df["target"]

# getting train data from the dataframe


train = df.drop("target", axis=1)

# Splitting between train data into training and validation dataset


X_train, X_test, y_train, y_test = train_test_split(
train, target, test_size=0.20)
# initializing the bagging model using XGBoost as base model with default parameters
model = BaggingRegressor(base_estimator=xgb.XGBRegressor())

# training model
model.fit(X_train, y_train)

# predicting the output on the test dataset


pred = model.predict(X_test)

# printing the mean squared error between real value and predicted value
print(mean_squared_error(y_test, pred))

Output:

4666
Ex.No:8c

Date:
Implement ensembling techniques

Aim:

To Implement ensembling techniques- Boosting

Programming Steps:

1. Take a subset of the train dataset.
2. Train a base model on that subset.
3. Use this model to make predictions on the whole dataset.
4. Calculate errors using the predicted values and actual values.
5. Initialize all data points with the same weight.
6. Assign higher weight to incorrectly predicted data points.
7. Build another model and make predictions using it, in such a way that the
errors made by the previous model are mitigated/corrected.
8. Similarly, create multiple models, each successive model correcting the errors of
the previous model.
9. The final model (strong learner) is the weighted mean of all the previous models
(weak learners). (See the sketch below.)
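The weight-based scheme described in steps 5-9 corresponds to AdaBoost; a minimal scikit-learn sketch on synthetic data is shown here, while the prescribed program below uses gradient boosting.
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=400, n_features=6, noise=15, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)

# AdaBoost re-weights mispredicted samples after every round (steps 5-9)
model = AdaBoostRegressor(n_estimators=100, random_state=2)
model.fit(X_train, y_train)
print(mean_squared_error(y_test, model.predict(X_test)))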

Program:

# importing utility modules


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# importing machine learning models for prediction


from sklearn.ensemble import GradientBoostingRegressor

# loading train data set in dataframe from train_data.csv file


df = pd.read_csv("train_data.csv")

# getting target data from the dataframe


target = df["target"]

# getting train data from the dataframe


train = df.drop("target", axis=1)

# Splitting between train data into training and validation dataset


X_train, X_test, y_train, y_test = train_test_split(
train, target, test_size=0.20)

# initializing the boosting module with default parameters


model = GradientBoostingRegressor()
# training the model on the train dataset
model.fit(X_train, y_train)

# predicting the output on the test dataset


pred_final = model.predict(X_test)

# printing the mean squared error between real value and predicted value
print(mean_squared_error(y_test, pred_final))

Output:

4789
Ex.No:9

Date: Implementation of k means clustering algorithm

Aim: To implement the k means clustering algorithm using python

Programming Steps:
For a given dataset, k is specified to be the number of distinct groups the points
belong to. These k centroids are first randomly initialized, then iterations are
performed to optimize the locations of these k centroids as follows:

1. The distance from each point to each centroid is calculated.


2. Points are assigned to their nearest centroid.
3. Centroids are shifted to be the average of the points assigned to them. If the
centroids did not move, the algorithm is finished; otherwise, repeat (see the sketch below).
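A short from-scratch sketch of this assign-and-update loop on synthetic points, for illustration only; the prescribed program below uses sklearn's KMeans.
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]  # random initialization
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Steps 1-2: distance from each point to each centroid, assign to the nearest
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids[None, :], axis=2), axis=1)
        # Step 3: move each centroid to the mean of its assigned points
        new_centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):   # centroids did not move -> finished
            break
        centroids = new_centroids
    return centroids, labels

pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + [5, 5]])
centers, labels = kmeans(pts, 2)
print(centers)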
Program:

# k-means clustering
from numpy import unique
from numpy import where
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from matplotlib import pyplot
# define dataset
X, _ = make_classification(n_samples=1000, n_features=2, n_informative=2,
n_redundant=0, n_clusters_per_class=1, random_state=4)
# define the model
model = KMeans(n_clusters=2)
# fit the model
model.fit(X)
# assign a cluster to each example
yhat = model.predict(X)
# retrieve unique clusters
clusters = unique(yhat)
# create scatter plot for samples from each cluster
for cluster in clusters:
# get row indexes for samples with this cluster
row_ix = where(yhat == cluster)
# create scatter of these samples
pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
pyplot.show()
Output: Scatter Plot of Dataset With Clusters Identified Using K-Means
Clustering
Ex.No:10

Date: Implement EM for Bayesian networks

Aim: Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same
data set for clustering using k-Means algorithm. Compare the results of these two
algorithms and comment on the quality of clustering.

Programming Steps:

1. Load data set


2. Clusters the data into k groups where k is predefined.
3. Select k points at random as cluster centers.
4. Assign objects to their closest cluster center according to the Euclidean
distance function.
5. Calculate the centroid or mean of all objects in each cluster.
6. Repeat steps 3, 4 and 5 until the same points are assigned to each cluster in
consecutive rounds
Program:

import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture
import pandas as pd
X=pd.read_csv("kmeansdata.csv")
x1 = X['Distance_Feature'].values
x2 = X['Speeding_Feature'].values
X = np.array(list(zip(x1, x2))).reshape(len(x1), 2)
plt.plot()
plt.xlim([0, 100])
plt.ylim([0, 50])
plt.title('Dataset')
plt.scatter(x1, x2)
plt.show()

#code for EM
gmm = GaussianMixture(n_components=3)
gmm.fit(X)
em_predictions = gmm.predict(X)
print("\nEM predictions")
print(em_predictions)
print("mean:\n",gmm.means_)
print('\n')
print("Covariances\n",gmm.covariances_)
print(X)
plt.title('Expectation Maximization')
plt.scatter(X[:,0], X[:,1],c=em_predictions,s=50)
plt.show()
#code for Kmeans
import matplotlib.pyplot as plt1
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
print(kmeans.cluster_centers_)
print(kmeans.labels_)
plt.title('KMEANS')
plt1.scatter(X[:,0], X[:,1], c=kmeans.labels_, cmap='rainbow')
plt1.scatter(kmeans.cluster_centers_[:,0] ,kmeans.cluster_centers_[:,1],
color='black')

OUTPUT

EM predictions
[0 0 0 1 0 1 1 1 2 1 2 2 1 1 2 1 2 1 0 1 0 1 1]
mean:
[[57.70629058 25.73574491]
[52.12044022 22.46250453]
[46.4364858 39.43288647]]
Covariances
[[[83.51878796 14.926902 ]
[14.926902 2.70846907]]
[[29.95910352 15.83416554]
[15.83416554 67.01175729]]
[[79.34811849 29.55835938]
[29.55835938 18.17157304]]]
[[71.24 28. ]
[52.53 25. ]
[64.54 27. ]
[55.69 22. ]
[54.58 25. ]
[41.91 10. ]
[58.64 20. ]
[52.02 8. ]
[31.25 34. ]
[44.31 19. ]
[49.35 40. ]
[58.07 45. ]
[44.22 22. ]
[55.73 19. ]
[46.63 43. ]
[52.97 32. ]
[46.25 35. ]
[51.55 27. ]
[57.05 26. ]
[58.45 30. ]
[43.42 23. ]
[55.68 37. ]
[55.15 18. ]
Ex.No:11

Date: Build simple NN models

Aim: To implement a finite-words (text) classification system using the back-propagation
algorithm.

Programming Steps:

1. Let us take a look at how back-propagation works. The network here has four layers:
an input layer, hidden layer I, hidden layer II and a final output layer.
2. So, the main three layer types are:
o Input layer
o Hidden layer
o Output layer
3. Each layer has its own way of working and its own way of taking action, such
that we are able to get the desired results and correlate these scenarios to our
conditions (see the sketch below).
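A minimal sketch of a network with the two hidden layers mentioned above, using scikit-learn's MLPClassifier on synthetic data; the layer sizes are illustrative.
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=4, random_state=1)
# Two hidden layers (hidden layer I and hidden layer II), as described above
clf = MLPClassifier(hidden_layer_sizes=(8, 4), max_iter=1000, random_state=1).fit(X, y)
print(clf.score(X, y), [c.shape for c in clf.coefs_])   # accuracy and per-layer weight shapes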

Program:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

msg = pd.read_csv('document.csv', names=['message', 'label'])
print("Total Instances of Dataset: ", msg.shape[0])
msg['labelnum'] = msg.label.map({'pos': 1, 'neg': 0})
X = msg.message
y = msg.labelnum
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y)
count_v = CountVectorizer()
Xtrain_dm = count_v.fit_transform(Xtrain)
Xtest_dm = count_v.transform(Xtest)
df = pd.DataFrame(Xtrain_dm.toarray(), columns=count_v.get_feature_names())
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(Xtrain_dm, ytrain)
pred = clf.predict(Xtest_dm)
print('Accuracy Metrics:')
print('Accuracy: ', accuracy_score(ytest, pred))
print('Recall: ', recall_score(ytest, pred))
print('Precision: ', precision_score(ytest, pred))
print('Confusion Matrix: \n', confusion_matrix(ytest, pred))
document.csv:
I love this sandwich,pos
This is an amazing place,pos
I feel very good about these beers,pos
This is my best work,pos
What an awesome view,pos
I do not like this restaurant,neg
I am tired of this stuff,neg
I can't deal with this,neg
He is my sworn enemy,neg
My boss is horrible,neg
This is an awesome place,pos
I do not like the taste of this juice,neg
I love to dance,pos
I am sick and tired of this place,neg
What a great holiday,pos
That is a bad locality to stay,neg
We will have good fun tomorrow,pos
I went to my enemy's house today,neg
OUTPUT:
Total Instances of Dataset: 18
Accuracy Metrics:
Accuracy: 0.8
Recall: 1.0
Precision: 0.75
Confusion Matrix:
[[1 1]
[0 3]]
Ex.No:12

Date: Build deep learning NN models

Aim: To build and train a deep learning (convolutional) neural network model that
classifies CIFAR-10 images using TensorFlow.

Programming Steps:

1. Line up the feature and the image.


2. Multiply each image pixel by corresponding feature pixel.
3. Add the values and find the sum.
4. Divide the sum by the total number of pixels in the feature.
https://www.edureka.co/blog/convolutional-neural-network/
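A one-step numeric sketch of this convolution operation on a hypothetical 3x3 patch and 3x3 feature:
import numpy as np

# Step 1: line up the feature (filter) with an image patch of the same size
patch = np.array([[1, 1, 1],
                  [1, -1, 1],
                  [1, 1, -1]])
feature = np.array([[1, -1, -1],
                    [-1, 1, -1],
                    [-1, -1, 1]])

# Steps 2-4: multiply pixel by pixel, add the values, divide by the number of pixels
value = (patch * feature).sum() / feature.size
print(value)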

Program:
import numpy as np
import tensorflow as tf
from time import time
import math
from include.data import get_data_set
from include.model import model, lr

# Hyperparameters (values as used in the test script and the output below)
_BATCH_SIZE = 128
_EPOCH = 60
_SAVE_PATH = "./tensorboard/cifar-10-v1.0.0/"

train_x, train_y = get_data_set("train")
test_x, test_y = get_data_set("test")
tf.set_random_seed(21)
x, y, output, y_pred_cls, global_step, learning_rate = model()
global_accuracy = 0
epoch_start = 0
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output,
labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
beta1=0.9,
beta2=0.999,
epsilon=1e-08).minimize(loss, global_step=global_step)

correct_prediction = tf.equal(y_pred_cls, tf.argmax(y, axis=1))


accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
merged = tf.summary.merge_all()
saver = tf.train.Saver()
sess = tf.Session()
train_writer = tf.summary.FileWriter(_SAVE_PATH, sess.graph)
try:
print("\nTrying to restore last checkpoint ...")
last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=_SAVE_PATH)
saver.restore(sess, save_path=last_chk_path)
print("Restored checkpoint from:", last_chk_path)
except ValueError:
print("\nFailed to restore checkpoint. Initializing variables instead.")
sess.run(tf.global_variables_initializer())
def train(epoch):
global epoch_start
epoch_start = time()
batch_size = int(math.ceil(len(train_x) / _BATCH_SIZE))
i_global = 0
for s in range(batch_size):
batch_xs = train_x[s*_BATCH_SIZE: (s+1)*_BATCH_SIZE]
batch_ys = train_y[s*_BATCH_SIZE: (s+1)*_BATCH_SIZE]
start_time = time()
i_global, _, batch_loss, batch_acc = sess.run(
[global_step, optimizer, loss, accuracy],
feed_dict={x: batch_xs, y: batch_ys, learning_rate: lr(epoch)})
duration = time() - start_time
if s % 10 == 0:
percentage = int(round((s/batch_size)*100))
bar_len = 29
filled_len = int((bar_len*int(percentage))/100)
bar = '=' * filled_len + '>' + '-' * (bar_len - filled_len)
msg = "Global step: {:>5} - [{}] {:>3}% - acc: {:.4f} - loss: {:.4f} - {:.1f}
sample/sec"
print(msg.format(i_global, bar, percentage, batch_acc, batch_loss,
_BATCH_SIZE / duration))
test_and_save(i_global, epoch)
def test_and_save(_global_step, epoch):
global global_accuracy
global epoch_start
i=0
predicted_class = np.zeros(shape=len(test_x), dtype=np.int)
while i < len(test_x):
    j = min(i + _BATCH_SIZE, len(test_x))
    batch_xs = test_x[i:j, :]
    batch_ys = test_y[i:j, :]
    predicted_class[i:j] = sess.run(y_pred_cls,
        feed_dict={x: batch_xs, y: batch_ys, learning_rate: lr(epoch)})
    i = j
correct = (np.argmax(test_y, axis=1) == predicted_class)
acc = correct.mean() * 100
correct_numbers = correct.sum()
hours, rem = divmod(time() - epoch_start, 3600)
minutes, seconds = divmod(rem, 60)
mes = "Epoch {} - accuracy: {:.2f}% ({}/{}) - time: {:0>2}:{:0>2}:{:05.2f}"
print(mes.format((epoch+1), acc, correct_numbers, len(test_x), int(hours), int(minutes), seconds))
if global_accuracy != 0 and global_accuracy < acc:
    summary = tf.Summary(value=[tf.Summary.Value(tag="Accuracy/test", simple_value=acc)])
    train_writer.add_summary(summary, _global_step)
    saver.save(sess, save_path=_SAVE_PATH, global_step=_global_step)
    mes = "This epoch receive better accuracy: {:.2f} > {:.2f}. Saving session..."
    print(mes.format(acc, global_accuracy))
    global_accuracy = acc
elif global_accuracy == 0:
    global_accuracy = acc
print("###########################################################################")
def main():
train_start = time()
for i in range(_EPOCH):
print("
Epoch: {}/{}
".format((i+1), _EPOCH))
train(i)
hours, rem = divmod(time() - train_start, 3600)
minutes, seconds = divmod(rem, 60)
mes = "Best accuracy pre session: {:.2f}, time: {:0>2}:{:0>2}:{:05.2f}"
print(mes.format(global_accuracy, int(hours), int(minutes), seconds))
if __name__ == "__main__":
main()
sess.close()

Output:

Epoch: 60/60

Global step: 23070 - [>------------------------------------------------------------- ] 0% - acc: 0.9531 - loss:


1.5081 - 7045.4 sample/sec
Global step: 23080 - [>------------------------------------------------------------- ] 3% - acc: 0.9453 - loss:
1.5159 - 7147.6 sample/sec
Global step: 23090 - [=> ----------------------------------------------------------- ] 5% - acc: 0.9844 - loss:
1.4764 - 7154.6 sample/sec
Global step: 23100 - [==> --------------------------------------------------------- ] 8% - acc: 0.9297 - loss:
1.5307 - 7104.4 sample/sec
Global step: 23110 - [==> --------------------------------------------------------- ] 10% - acc: 0.9141 - loss:
1.5462 - 7091.4 sample/sec
Global step: 23120 - [===> ------------------------------------------------------- ] 13% - acc: 0.9297 - loss:
1.5314 - 7162.9 sample/sec
Global step: 23130 - [====> ------------------------------------------------------ ] 15% - acc: 0.9297 - loss:
1.5307 - 7174.8 sample/sec
Global step: 23140 - [=====> ---------------------------------------------------- ] 18% - acc: 0.9375 - loss:
1.5231 - 7140.0 sample/sec
Global step: 23150 - [=====> ---------------------------------------------------- ] 20% - acc: 0.9297 - loss:
1.5301 - 7152.8 sample/sec
Global step: 23160 - [======>--------------------------------------------------- ] 23% - acc: 0.9531 - loss:
1.5080 - 7112.3 sample/sec
Global step: 23170 - [=======> ------------------------------------------------- ] 26% - acc: 0.9609 - loss:
1.5000 - 7154.0 sample/sec
Global step: 23180 - [========> ----------------------------------------------- ] 28% - acc: 0.9531 - loss:
1.5074 - 6862.2 sample/sec
Global step: 23190 - [========> ----------------------------------------------- ] 31% - acc: 0.9609 - loss:
1.4993 - 7134.5 sample/sec
Global step: 23200 - [=========>---------------------------------------------- ] 33% - acc: 0.9609 - loss:
1.4995 - 7166.0 sample/sec
Global step: 23210 - [==========> -------------------------------------------- ] 36% - acc: 0.9375 - loss:
1.5231 - 7116.7 sample/sec
Global step: 23220 - [===========> ------------------------------------------ ] 38% - acc: 0.9453 - loss:
1.5153 - 7134.1 sample/sec
Global step: 23230 - [===========> ------------------------------------------ ] 41% - acc: 0.9375 - loss:
1.5233 - 7074.5 sample/sec
Global step: 23240 - [============> ----------------------------------------- ] 43% - acc: 0.9219 - loss:
1.5387 - 7176.9 sample/sec
Global step: 23250 - [=============> --------------------------------------- ] 46% - acc: 0.8828 - loss:
1.5769 - 7144.1 sample/sec
Global step: 23260 - [==============>-------------------------------------- ] 49% - acc: 0.9219 - loss:
1.5383 - 7059.7 sample/sec
Global step: 23270 - [==============>-------------------------------------- ] 51% - acc: 0.8984 - loss:
1.5618 - 6638.6 sample/sec
Global step: 23280 - [===============> ------------------------------------ ] 54% - acc: 0.9453 - loss:
1.5151 - 7035.7 sample/sec
Global step: 23290 - [================> ---------------------------------- ] 56% - acc: 0.9609 - loss:
1.4996 - 7129.0 sample/sec
Global step: 23300 - [=================>--------------------------------- ] 59% - acc: 0.9609 - loss:
1.4997 - 7075.4 sample/sec
Global step: 23310 - [=================>--------------------------------- ] 61% - acc: 0.8750 - loss:
1.5842 - 7117.8 sample/sec
Global step: 23320 - [==================> ------------------------------- ] 64% - acc: 0.9141 - loss:
1.5463 - 7157.2 sample/sec
Global step: 23330 - [===================> ----------------------------- ] 66% - acc: 0.9062 - loss:
1.5549 - 7169.3 sample/sec
Global step: 23340 - [====================> ---------------------------- ] 69% - acc: 0.9219 - loss:
1.5389 - 7164.4 sample/sec
Global step: 23350 - [====================> ---------------------------- ] 72% - acc: 0.9609 - loss:
1.5002 - 7135.4 sample/sec
Global step: 23360 - [=====================> -------------------------- ] 74% - acc: 0.9766 - loss:
1.4842 - 7124.2 sample/sec
Global step: 23370 - [======================> ------------------------ ] 77% - acc: 0.9375 - loss:
1.5231 - 7168.5 sample/sec
Global step: 23380 - [======================> ------------------------ ] 79% - acc: 0.8906 - loss:
1.5695 - 7175.2 sample/sec
Global step: 23390 - [=======================> ----------------------- ] 82% - acc: 0.9375 - loss:
1.5225 - 7132.1 sample/sec
Global step: 23400 - [========================> --------------------- ] 84% - acc: 0.9844 - loss:
1.4768 - 7100.1 sample/sec
Global step: 23410 - [=========================>-------------------- ] 87% - acc: 0.9766 - loss:
1.4840 - 7172.0 sample/sec
Global step: 23420 - [==========================>---] 90% - acc: 0.9062 - loss:
1.5542 - 7122.1 sample/sec
Global step: 23430 - [==========================>---] 92% - acc: 0.9297 - loss:
1.5313 - 7145.3 sample/sec
Global step: 23440 - [===========================>--] 95% - acc: 0.9297 - loss:
1.5301 - 7133.3 sample/sec
Global step: 23450 - [============================>-] 97% - acc: 0.9375 - loss:
1.5231 - 7135.7 sample/sec
Global step: 23460 - [=============================>] 100% - acc: 0.9250 - loss:
1.5362 - 10297.5 sample/sec

Epoch 60 - accuracy: 78.81% (7881/10000)


This epoch receive better accuracy: 78.81 > 78.78. Saving session...
################################################################################# ####

Run Network on Test DataSet:

import numpy as np

import tensorflow as tf
from include.data import get_data_set
from include.model import model
test_x, test_y = get_data_set("test")
x, y, output, y_pred_cls, global_step, learning_rate = model()

_BATCH_SIZE = 128
_CLASS_SIZE = 10
_SAVE_PATH = "./tensorboard/cifar-10-v1.0.0/"
saver = tf.train.Saver()
sess = tf.Session()

try:
print("\nTrying to restore last checkpoint ...")
last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=_SAVE_PATH)
saver.restore(sess, save_path=last_chk_path)
print("Restored checkpoint from:", last_chk_path)
except ValueError:
print("\nFailed to restore checkpoint. Initializing variables instead.")
sess.run(tf.global_variables_initializer())
def main():
i=0
predicted_class = np.zeros(shape=len(test_x), dtype=np.int)
while i < len(test_x):
j = min(i + _BATCH_SIZE, len(test_x))
batch_xs = test_x[i:j, :]
batch_ys = test_y[i:j, :]
predicted_class[i:j] = sess.run(y_pred_cls, feed_dict={x: batch_xs, y:
batch_ys})
i=j
correct = (np.argmax(test_y, axis=1) == predicted_class)
acc = correct.mean() * 100
correct_numbers = correct.sum()
print()
print("Accuracy on Test-Set: {0:.2f}% ({1} / {2})".format(acc,
correct_numbers, len(test_x)))

if __name__ == "__main__":


main()

sess.close()
Simple output:

Trying to restore last checkpoint ...


Restored checkpoint from: ./tensorboard/cifar-10-v1.0.0/-23460

Accuracy on Test-Set: 78.81% (7881 / 10000)


Training Time

Here you can see how long 60 epochs take:

Device            Batch Size   Time     Accuracy [%]
NVidia GTX 1070   128          8m 4s    79.12
Intel i7 7700HQ   128          3h 30m   78.91
