AI and ML Lab Manual 2022
S.J.C. INSTITUTE OF TECHNOLOGY
DEPARTMENT OF INFORMATION SCIENCE & ENGINEERING
CHICKBALLAPUR -562101
During the Year: 2022
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING LABORATORY
(Effective from the academic year 2018-2019)
Course Learning Objectives: This course (18CSL76) will enable students to:
Implement and evaluate AI and ML algorithms in the Python programming language.
Descriptions (if any):
Installation procedure of the required software must be demonstrated, carried out in groups
and documented in the journal.
Programs List:
1. Implement A* Search algorithm.
2. Implement AO* Search algorithm.
3. For a given set of training data examples stored in a .CSV file, implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples.
4. Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
5. Build an Artificial Neural Network by implementing the Back propagation algorithm and test the same using appropriate data sets.
6. Write a program to implement the naïve Bayesian classifier for a sample training data set stored as a .CSV file.
Compute the accuracy of the classifier, considering a few test data sets.
7. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering using k-
Means algorithm. Compare the results of these two algorithms and comment on the quality of clustering. You can
add Java/Python ML library classes/API in the program.
8. Write a program to implement the k-Nearest Neighbor algorithm to classify the iris data set. Print both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
9. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points. Select an appropriate data set for your experiment and draw graphs.
PROGRAM.NO.1
Implement A* Search algorithm.
def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])
    closed_set = set()
    g = {}               # store distance from starting node
    parents = {}         # parents contains an adjacency map of all nodes
    g[start_node] = 0
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # pick the node in the open set with the lowest f() = g() + heuristic()
        for v in open_set:
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n != stop_node and Graph_nodes.get(n) is not None:
            for (m, weight) in Graph_nodes[n]:
                # for each node m, compare its distance from start i.e. g(m) to the
                # distance from start through n node
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        # update g(m) and change the parent of m to n
                        g[m] = g[n] + weight
                        parents[m] = n
                        # if m was already evaluated, move it back to the open set
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n is None:
            print('Path does not exist!')
            return None
        # if the current node is the stop node, reconstruct the path from the parents map
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # move n from the open set to the closed set
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None
# for simplicity we'll consider heuristic distances as given,
# and this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 99,
        'D': 1,
        'E': 7,
        'G': 0,
    }
    return H_dist[n]
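The listing does not include the adjacency list that aStarAlgo() reads through Graph_nodes. The graph below is an assumption, added only so the call that follows can run against the heuristic table above; any weighted graph over the same node names would work.

# assumed example graph: node -> list of (neighbour, edge cost) pairs
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)],
}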
aStarAlgo('A', 'G')
OUTPUT:-
PROGRAM.NO.2 (AO* Algorithm)
The AO* algorithm is based on problem decomposition (breaking a problem down into smaller pieces).
When a problem can be divided into a set of sub-problems, where each sub-problem can be solved
separately and a combination of their solutions is a solution to the whole, AND-OR graphs (or AND-OR
trees) are used to represent the solution.
The decomposition of the problem, or problem reduction, generates AND arcs.
AND-OR Graph
class Graph:
def __init__(self, graph, heuristicNodeList, startNode): # instantiate graph object with graph topology, heuristic values, start node
self.graph = graph
self.H=heuristicNodeList
self.start=startNode
self.parent={}
self.status={}
self.solutionGraph={}
def printSolution(self):
print("FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE START NODE:",self.start)
print("------------------------------------------------------------")
print(self.solutionGraph)
print("------------------------------------------------------------")
def computeMinimumCostChildNodes(self, v): # Computes the Minimum Cost of child nodes of a given node v
minimumCost=0
costToChildNodeListDict={}
costToChildNodeListDict[minimumCost]=[]
flag=True
for nodeInfoTupleList in self.getNeighbors(v): # iterate over all the set of child node/s
cost=0
nodeList=[]
for c, weight in nodeInfoTupleList:
cost=cost+self.getHeuristicNodeValue(c)+weight
nodeList.append(c)
if flag==True: # initialize Minimum Cost with the cost of first set of child node/s
minimumCost=cost
costToChildNodeListDict[minimumCost]=nodeList # set the Minimum Cost child node/s
flag=False
else: # checking the Minimum Cost nodes with the current Minimum Cost
if minimumCost>cost:
minimumCost=cost
costToChildNodeListDict[minimumCost]=nodeList # set the Minimum Cost child node/s
return minimumCost, costToChildNodeListDict[minimumCost] # return Minimum Cost and Minimum Cost child node/s
def aoStar(self, v, backTracking): # AO* algorithm for a start node and backTracking status flag
print("HEURISTIC VALUES :", self.H)
print("SOLUTION GRAPH :", self.solutionGraph)
print("PROCESSING NODE :", v)
print("-----------------------------------------------------------------------------------------")
if self.getStatus(v) >= 0: # if status node v >= 0, compute Minimum Cost nodes of v
minimumCost, childNodeList = self.computeMinimumCostChildNodes(v)
print(minimumCost, childNodeList)
self.setHeuristicNodeValue(v, minimumCost)
self.setStatus(v,len(childNodeList))
solved=True # check the Minimum Cost nodes of v are solved
for childNode in childNodeList:
self.parent[childNode]=v
if self.getStatus(childNode)!=-1:
solved=solved & False
if solved==True: # if the Minimum Cost nodes of v are solved, set the current node status as solved(-1)
self.setStatus(v,-1)
self.solutionGraph[v]=childNodeList # update the solution graph with the solved nodes which may be a part of the solution
if v!=self.start: # check the current node is the start node for backtracking the current node value
self.aoStar(self.parent[v], True) # backtracking the current node value with backtracking status set to true
if backTracking==False: # check the current call is not for backtracking
for childNode in childNodeList: # for each Minimum Cost child node
self.setStatus(childNode,0) # set the status of child node to 0(needs exploration)
self.aoStar(childNode, False) # Minimum Cost child node is further explored with backtracking status as false
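The class excerpt above omits the small accessor methods that aoStar() relies on, as well as the driver that produced the Graph-1 trace below. A minimal sketch of those pieces follows; the graph and heuristic table are reconstructed to be consistent with the printed trace, so treat them as an assumption rather than the original driver.

    # methods continuing the Graph class above
    def applyAOStar(self):                      # start the recursive AO* search from the start node
        self.aoStar(self.start, False)
    def getNeighbors(self, v):                  # AND-OR child node sets of a given node
        return self.graph.get(v, '')
    def getStatus(self, v):                     # current status of a given node (default 0)
        return self.status.get(v, 0)
    def setStatus(self, v, val):                # update the status of a given node
        self.status[v] = val
    def getHeuristicNodeValue(self, n):         # current (possibly revised) heuristic value of n
        return self.H.get(n, 0)
    def setHeuristicNodeValue(self, n, value):  # revise the heuristic value of n
        self.H[n] = value

# driver reconstructed from the Graph-1 trace below (assumed, not from the original listing)
h1 = {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1}
graph1 = {                                      # each inner list is one AND-arc of (child, edge cost) pairs
    'A': [[('B', 1), ('C', 1)], [('D', 1)]],
    'B': [[('G', 1)], [('H', 1)]],
    'C': [[('J', 1)]],
    'D': [[('E', 1), ('F', 1)]],
    'G': [[('I', 1)]]
}
G1 = Graph(graph1, h1, 'A')
G1.applyAOStar()
G1.printSolution()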
Output:
Graph - 1
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-------------------------------------------------------------------------------------
----
10 ['B', 'C']
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : B
-------------------------------------------------------------------------------------
----
6 ['G']
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-------------------------------------------------------------------------------------
----
10 ['B', 'C']
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : G
-------------------------------------------------------------------------------------
----
8 ['I']
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : B
-------------------------------------------------------------------------------------
----
8 ['H']
HEURISTIC VALUES : {'A': 10, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : A
-------------------------------------------------------------------------------------
----
12 ['B', 'C']
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7,
'I': 7, 'J': 1}
SOLUTION GRAPH : {}
PROCESSING NODE : I
-------------------------------------------------------------------------------------
----
0 []
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7,
'I': 0, 'J': 1}
SOLUTION GRAPH : {'I': []}
PROCESSING NODE : G
-------------------------------------------------------------------------------------
----
1 ['I']
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 1}
SOLUTION GRAPH : {'I': [], 'G': ['I']}
PROCESSING NODE : B
-------------------------------------------------------------------------------------
----
2 ['G']
HEURISTIC VALUES : {'A': 12, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 1}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
-------------------------------------------------------------------------------------
----
6 ['B', 'C']
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 1}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : C
-------------------------------------------------------------------------------------
----
2 ['J']
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 1}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
-------------------------------------------------------------------------------------
----
6 ['B', 'C']
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 1}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : J
-------------------------------------------------------------------------------------
----
0 []
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 0}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': []}
PROCESSING NODE : C
-------------------------------------------------------------------------------------
----
1 ['J']
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 1, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7,
'I': 0, 'J': 0}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J']}
PROCESSING NODE : A
-------------------------------------------------------------------------------------
----
5 ['B', 'C']
FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE START NODE: A
------------------------------------------------------------
{'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J'], 'A': ['B', 'C']}
------------------------------------------------------------
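3. For a given set of training data examples stored in a .CSV file, implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples.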
import numpy as np
import pandas as pd
data = pd.read_csv('data.csv')
concepts = np.array(data.iloc[:,0:-1])
print("\nInstances are:\n",concepts)
target = np.array(data.iloc[:,-1])
print("\nTarget Values are: ",target)
def learn(concepts, target):
    specific_h = concepts[0].copy()
    print("\nInitialization of specific_h and general_h")
    print("\nSpecific Boundary: ", specific_h)
    general_h = [["?" for i in range(len(specific_h))] for i in range(len(specific_h))]
    print("\nGeneric Boundary: ", general_h)
    for i, h in enumerate(concepts):
        print("\nInstance", i+1, "is ", h)
        if target[i] == "yes":
            print("Instance is Positive ")
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    specific_h[x] = '?'
                    general_h[x][x] = '?'
        if target[i] == "no":
            print("Instance is Negative ")
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = '?'
        print("Specific Boundary after ", i+1, "Instance is ", specific_h)
        print("Generic Boundary after ", i+1, "Instance is ", general_h)
    # drop hypotheses that remained fully general
    indices = [i for i, val in enumerate(general_h) if val == ['?', '?', '?', '?', '?', '?']]
    for i in indices:
        general_h.remove(['?', '?', '?', '?', '?', '?'])
    return specific_h, general_h

s_final, g_final = learn(concepts, target)
print("Final Specific_h: ", s_final, sep="\n")
print("Final General_h: ", g_final, sep="\n")
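INPUT : dataset (data.csv)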
sky,airtemp,humidity,wind,water,forecast,Enjoysports
sunny,warm,normal,strong,warm,same,yes
sunny,warm,high,strong,warm,same,yes
rainy,cold,high,strong,warm,change,no
sunny,warm,high,strong,cool,change,yes
OUTPUT:-
Instances are:
[['sunny' 'warm' 'normal' 'strong' 'warm' 'same']
['sunny' 'warm' 'high' 'strong' 'warm' 'same']
['rainy' 'cold' 'high' 'strong' 'warm' 'change']
['sunny' 'warm' 'high' 'strong' 'cool' 'change']]
Generic Boundary: [['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?',
'?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?',
'?', '?', '?', '?']]
Final Specific_h:
['sunny' 'warm' '?' 'strong' '?' '?']
Final General_h:
[['sunny', '?', '?', '?', '?', '?'], ['?', 'warm', '?', '?', '?', '?']]
4. Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
import numpy as np
import math
from data_loader import read_data
class Node:
def __init__(self, attribute):
self.attribute = attribute
self.children = []
self.answer = ""
# def __str__(self):
# return self.attribute
def subtables(data, col, delete):
    dict = {}
    items = np.unique(data[:, col])
    count = np.zeros((items.shape[0], 1), dtype=np.int32)
    for x in range(items.shape[0]):
        for y in range(data.shape[0]):
            if data[y, col] == items[x]:
                count[x] += 1
    for x in range(items.shape[0]):
        dict[items[x]] = np.empty((int(count[x]), data.shape[1]), dtype="|S32")
        pos = 0
        for y in range(data.shape[0]):
            if data[y, col] == items[x]:
                dict[items[x]][pos] = data[y]
                pos += 1
        if delete:
            dict[items[x]] = np.delete(dict[items[x]], col, 1)
    return items, dict
def entropy(S):
    items = np.unique(S)
    if items.size == 1:
        return 0
    counts = np.zeros((items.shape[0], 1))
    sums = 0
    for x in range(items.shape[0]):
        counts[x] = sum(S == items[x]) / (S.size * 1.0)
    for count in counts:
        sums += -1 * count * math.log(count, 2)
    return sums
def gain_ratio(data, col):
    items, dict = subtables(data, col, delete=False)
    total_size = data.shape[0]
    entropies = np.zeros((items.shape[0], 1))
    intrinsic = np.zeros((items.shape[0], 1))
    for x in range(items.shape[0]):
        ratio = dict[items[x]].shape[0] / (total_size * 1.0)
        entropies[x] = ratio * entropy(dict[items[x]][:, -1])
        intrinsic[x] = ratio * math.log(ratio, 2)
    total_entropy = entropy(data[:, -1])
    iv = -1 * sum(intrinsic)
    for x in range(entropies.shape[0]):
        total_entropy -= entropies[x]
    return total_entropy / iv
def create_node(data, metadata):
    if np.unique(data[:, -1]).shape[0] == 1:   # pure subset: return a leaf node with the class label
        node = Node("")
        node.answer = np.unique(data[:, -1])[0]
        return node
    gains = [gain_ratio(data, col) for col in range(data.shape[1] - 1)]
    split = np.argmax(gains)                   # attribute with the highest gain ratio
    node = Node(metadata[split])
    metadata = np.delete(metadata, split, 0)
    items, dict = subtables(data, split, delete=True)
    for x in range(items.shape[0]):
        child = create_node(dict[items[x]], metadata)
        node.children.append((items[x], child))
    return node
def empty(size):
    s = ""
    for x in range(size):
        s += " "
    return s
def print_tree(node, level):
    if node.answer != "":                      # leaf node: print the class label
        print(empty(level), node.answer)
        return
    print(empty(level), node.attribute)
    for value, n in node.children:
        print(empty(level + 1), value)
        print_tree(n, level + 2)
# data_loader.py (the module imported at the top of the program)
import csv
def read_data(filename):
    with open(filename, 'r') as csvfile:
        datareader = csv.reader(csvfile, delimiter=',')
        headers = next(datareader)
        metadata = []
        traindata = []
        for name in headers:
            metadata.append(name)
        for row in datareader:
            traindata.append(row)
    return (metadata, traindata)
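The listing stops before the driver that builds and prints the decision tree shown in the output. A minimal sketch of that driver is given below; the training file name is an assumption.

# assumed driver for the ID3 program; the CSV file name is illustrative
metadata, traindata = read_data("tennisdata.csv")
data = np.array(traindata)
node = create_node(data, metadata)
print_tree(node, 0)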
OUTPUT
Outlook
Overcast
b'Yes'
Rainy
Windy
b'FALSE'
b'Yes'
b'TRUE'
b'No'
Sunny
Humidity
b'High'
b'No'
b'Normal'
b'Yes'
5. Build an Artificial Neural Network by implementing the Back propagation algorithm and test the same using appropriate data sets.
import numpy as np
X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X/np.amax(X,axis=0) # maximum of X array longitudinally
y = y/100
#Sigmoid Function
def sigmoid (x):
return 1/(1 + np.exp(-x))
#Derivative of Sigmoid Function
def derivatives_sigmoid(x):
return x * (1 - x)
#Variable initialization
epoch=7000 #Setting training iterations
lr=0.1 #Setting learning rate
inputlayer_neurons = 2 #number of features in data set
hiddenlayer_neurons = 3 #number of hidden layers neurons
output_neurons = 1 #number of neurons at output layer
#weight and bias initialization
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons))
#draws a random range of numbers uniformly of dim x*y
for i in range(epoch):
#Forward Propogation
hinp1=np.dot(X,wh)
hinp=hinp1 + bh
hlayer_act = sigmoid(hinp)
outinp1=np.dot(hlayer_act,wout)
outinp= outinp1+ bout
output = sigmoid(outinp)
#Backpropagation
EO = y-output
outgrad = derivatives_sigmoid(output)
d_output = EO* outgrad
EH = d_output.dot(wout.T)
hiddengrad = derivatives_sigmoid(hlayer_act)#how much hidden layer wts contributed to error
d_hiddenlayer = EH * hiddengrad
wout += hlayer_act.T.dot(d_output) *lr# dotproduct of nextlayererror and currentlayerop
# bout += np.sum(d_output, axis=0,keepdims=True) *lr
wh += X.T.dot(d_hiddenlayer) *lr
#bh += np.sum(d_hiddenlayer, axis=0,keepdims=True) *lr
print("Input: \n" + str(X))
OUTPUT
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.89429777]
[0.87537484]
[0.90042802]]
6. Write a program to implement the naïve Bayesian classifier for a sample training data set stored as a .CSV file. Compute the accuracy of the classifier, considering a few test data sets.
import csv
import random
import math
def safe_div(x, y):               # avoid division by zero in the mean/stdev/probability computations
    return x / y if y != 0 else 0
def loadCsv(filename):
    lines = csv.reader(open(filename))
    dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]
    return dataset
def splitDataset(dataset, splitRatio):
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        # move a randomly chosen row from the copy of the dataset into the training set
        i = random.randrange(len(copy))
        trainSet.append(copy.pop(i))
    return [trainSet, copy]
def separateByClass(dataset):
separated = {}
for i in range(len(dataset)):
vector = dataset[i]
if (vector[-1] not in separated):
separated[vector[-1]] = []
separated[vector[-1]].append(vector)
return separated
def mean(numbers):
return safe_div(sum(numbers), float(len(numbers)))
def stdev(numbers):
avg = mean(numbers)
variance = safe_div(sum([pow(x - avg, 2) for x in numbers]), float(len(numbers) - 1))
return math.sqrt(variance)
def summarize(dataset):
summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
del summaries[-1]
return summaries
def summarizeByClass(dataset):
separated = separateByClass(dataset)
summaries = {}
for classValue, instances in separated.items():
summaries[classValue] = summarize(instances)
return summaries
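The excerpt does not show the functions that compute the Gaussian class-conditional probabilities or score the test set, although main() needs them to report accuracy. The following is a minimal sketch written to match the helper names used elsewhere in the listing (safe_div, summarizeByClass); treat it as an assumption rather than the original code.

def calculateProbability(x, mean, stdev):
    # Gaussian probability density of attribute value x for the given class summary
    exponent = math.exp(-safe_div(math.pow(x - mean, 2), 2 * math.pow(stdev, 2)))
    return safe_div(1, math.sqrt(2 * math.pi) * stdev) * exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, stdev = classSummaries[i]
            probabilities[classValue] *= calculateProbability(inputVector[i], mean, stdev)
    return probabilities

def predict(summaries, inputVector):
    # pick the class with the largest posterior (up to the common evidence term)
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

def getPredictions(summaries, testSet):
    return [predict(summaries, row) for row in testSet]

def getAccuracy(testSet, predictions):
    correct = 0
    for i in range(len(testSet)):
        if testSet[i][-1] == predictions[i]:
            correct += 1
    return safe_div(correct, float(len(testSet))) * 100.0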
def main():
    filename = 'Naive.csv'
    splitRatio = 0.75
    dataset = loadCsv(filename)
    trainingSet, testSet = splitDataset(dataset, splitRatio)
    print('Split {0} rows into'.format(len(dataset)))
    print('Number of Training data: ' + (repr(len(trainingSet))))
    print('Number of Test Data: ' + (repr(len(testSet))))
    print("\nThe values assumed for the concept learning attributes are\n")
    print("OUTLOOK=> Sunny=1 Overcast=2 Rain=3\nTEMPERATURE=> Hot=1 Mild=2 Cool=3\n"
          "HUMIDITY=> High=1 Normal=2\nWIND=> Weak=1 Strong=2")
    print("TARGET CONCEPT:PLAY TENNIS=> Yes=10 No=5")
    # summarise the training data per class, classify the test data and report accuracy
    summaries = summarizeByClass(trainingSet)
    predictions = getPredictions(summaries, testSet)
    accuracy = getAccuracy(testSet, predictions)
    print('\nAccuracy of the classifier is: {0}%'.format(accuracy))

main()
INPUT : dataset(Naive.csv)
1,1,1,1,5
1,1,1,2,5
2,1,1,2,10
3,2,1,1,10
3,3,2,1,10
3,3,2,2,5
2,3,2,2,10
1,2,1,1,5
1,3,2,1,10
3,2,2,2,10
1,2,2,2,10
2,2,1,2,10
2,1,2,1,10
3,2,1,2,5
1,2,1,2,10
1,2,1,2,5
OUTPUT
7. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering
using k-Means algorithm. Compare the results of these two algorithms and comment on the quality
of clustering. You can add Java/Python ML library classes/API in the program
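The listing below begins after the data has already been loaded. A minimal preamble consistent with the dataset and plots that follow is sketched here; the file name 'driverdata.csv' and the colour/marker lists are assumptions, since they are not shown in the original.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

data = pd.read_csv('driverdata.csv')     # assumed file name for the driver dataset listed below
print(data)
f1 = data['Distance_Feature'].values
f2 = data['Speeding_Feature'].values
colors = ['b', 'g', 'r']                 # one colour/marker per cluster, used by the plotting loops
markers = ['o', 'v', 's']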
X=np.matrix(list(zip(f1,f2)))
plt.plot()
plt.xlim([0, 100])
plt.ylim([0, 50])
plt.title('Dataset')
plt.ylabel('Speeding_Feature')
plt.xlabel('Distance_Feature')
plt.scatter(f1,f2)
plt.show()
# KMeans algorithm
#K = 3
kmeans_model = KMeans(n_clusters=3).fit(X)
plt.plot()
for i, l in enumerate(kmeans_model.labels_):
plt.plot(f1[i], f2[i], color=colors[l], marker=markers[l],ls='None')
plt.xlim([0, 100])
plt.ylim([0, 50])
plt.show()
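Only the k-Means half of the experiment appears in the listing; the EM half is not shown. A minimal sketch using scikit-learn's GaussianMixture as the EM implementation (the exercise allows library classes, but this block is an assumption, not the original code) could be:

from sklearn.mixture import GaussianMixture

# EM algorithm: fit a mixture of 3 Gaussians to the same feature matrix X
gmm = GaussianMixture(n_components=3).fit(np.asarray(X))
em_labels = gmm.predict(np.asarray(X))

plt.plot()
for i, l in enumerate(em_labels):
    plt.plot(f1[i], f2[i], color=colors[l], marker=markers[l], ls='None')
plt.xlim([0, 100])
plt.ylim([0, 50])
plt.title('EM (Gaussian Mixture) clustering')
plt.show()

Comparing the two scatter plots shows how the cluster assignments of k-Means (hard, distance-based) and EM (soft, probability-based) differ on the same data.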
Driver_ID,Distance_Feature,Speeding_Feature
3423311935,71.24,28
3423313212,52.53,25
3423313724,64.54,27
3423311373,55.69,22
3423310999,54.58,25
3423313857,41.91,10
3423312432,58.64,20
3423311434,52.02,8
3423311328,31.25,34
3423312488,44.31,19
3423311254,49.35,40
3423312943,58.07,45
3423312536,44.22,22
3423311542,55.73,19
3423312176,46.63,43
3423314176,52.97,32
3423314202,46.25,35
3423311346,51.55,27
3423310666,57.05,26
3423313527,58.45,30
3423312182,43.42,23
3423313590,55.68,37
3423312268,55.15,18
OUTPUT
Driver_ID  Distance_Feature  Speeding_Feature
0 3423311935 71.24 28
1 3423313212 52.53 25
2 3423313724 64.54 27
3 3423311373 55.69 22
4 3423310999 54.58 25
5 3423313857 41.91 10
6 3423312432 58.64 20
7 3423311434 52.02 8
8 3423311328 31.25 34
9 3423312488 44.31 19
10 3423311254 49.35 40
11 3423312943 58.07 45
12 3423312536 44.22 22
13 3423311542 55.73 19
14 3423312176 46.63 43
15 3423314176 52.97 32
16 3423314202 46.25 35
17 3423311346 51.55 27
18 3423310666 57.05 26
19 3423313527 58.45 30
20 3423312182 43.42 23
21 3423313590 55.68 37
22 3423312268 55.15 18
8. Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set. Print
both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
KNN ALGORITHM
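The listing starts after the data has been loaded and split. A minimal preamble consistent with the code below is sketched here; the file name 'iris.csv' is an assumption, and the trailing comma in each row of the INPUT data produces an empty sixth column that is simply ignored.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, confusion_matrix

dataset = pd.read_csv('iris.csv', header=None)   # assumed file name for the data listed under INPUT
x = dataset.iloc[:, 0:4]   # the four flower measurements
y = dataset.iloc[:, 4]     # the species label
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)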
classifier=KNeighborsClassifier(n_neighbors=5)
classifier.fit(x_train,y_train)
y_pred=classifier.predict(x_test)
print('confusion matrix is as follows')
print(confusion_matrix(y_test,y_pred))
print('Accuracy metrics')
print(classification_report(y_test,y_pred))
INPUT
5.1,3.5,1.4,0.2,Iris-setosa,
4.9,3,1.4,0.2,Iris-setosa,
4.7,3.2,1.3,0.2,Iris-setosa,
4.6,3.1,1.5,0.2,Iris-setosa,
5,3.6,1.4,0.2,Iris-setosa,
5.4,3.9,1.7,0.4,Iris-setosa,
4.6,3.4,1.4,0.3,Iris-setosa,
5,3.4,1.5,0.2,Iris-setosa,
4.4,2.9,1.4,0.2,Iris-setosa,
4.9,3.1,1.5,0.1,Iris-setosa,
5.4,3.7,1.5,0.2,Iris-setosa,
4.8,3.4,1.6,0.2,Iris-setosa,
4.8,3,1.4,0.1,Iris-setosa,
4.3,3,1.1,0.1,Iris-setosa,
5.8,4,1.2,0.2,Iris-setosa,
5.7,4.4,1.5,0.4,Iris-setosa,
5.4,3.9,1.3,0.4,Iris-setosa,
5.1,3.5,1.4,0.3,Iris-setosa,
5.7,3.8,1.7,0.3,Iris-setosa,
5.1,3.8,1.5,0.3,Iris-setosa,
5.4,3.4,1.7,0.2,Iris-setosa,
5.1,3.7,1.5,0.4,Iris-setosa,
4.6,3.6,1,0.2,Iris-setosa,
5.1,3.3,1.7,0.5,Iris-setosa,
4.8,3.4,1.9,0.2,Iris-setosa,
5,3,1.6,0.2,Iris-setosa,
5,3.4,1.6,0.4,Iris-setosa,
5.2,3.5,1.5,0.2,Iris-setosa,
5.2,3.4,1.4,0.2,Iris-setosa,
4.7,3.2,1.6,0.2,Iris-setosa,
4.8,3.1,1.6,0.2,Iris-setosa,
5.4,3.4,1.5,0.4,Iris-setosa,
5.2,4.1,1.5,0.1,Iris-setosa,
5.5,4.2,1.4,0.2,Iris-setosa,
4.9,3.1,1.5,0.1,Iris-setosa,
5,3.2,1.2,0.2,Iris-setosa,
5.5,3.5,1.3,0.2,Iris-setosa,
4.9,3.1,1.5,0.1,Iris-setosa,
4.4,3,1.3,0.2,Iris-setosa,
5.1,3.4,1.5,0.2,Iris-setosa,
5,3.5,1.3,0.3,Iris-setosa,
4.5,2.3,1.3,0.3,Iris-setosa,
4.4,3.2,1.3,0.2,Iris-setosa,
5,3.5,1.6,0.6,Iris-setosa,
5.1,3.8,1.9,0.4,Iris-setosa,
4.8,3,1.4,0.3,Iris-setosa,
5.1,3.8,1.6,0.2,Iris-setosa,
4.6,3.2,1.4,0.2,Iris-setosa,
5.3,3.7,1.5,0.2,Iris-setosa,
5,3.3,1.4,0.2,Iris-setosa,
7,3.2,4.7,1.4,Iris-versicolor,
6.4,3.2,4.5,1.5,Iris-versicolor,
6.9,3.1,4.9,1.5,Iris-versicolor,
5.5,2.3,4,1.3,Iris-versicolor,
6.5,2.8,4.6,1.5,Iris-versicolor,
5.7,2.8,4.5,1.3,Iris-versicolor,
6.3,3.3,4.7,1.6,Iris-versicolor,
4.9,2.4,3.3,1,Iris-versicolor,
6.6,2.9,4.6,1.3,Iris-versicolor,
5.2,2.7,3.9,1.4,Iris-versicolor,
5,2,3.5,1,Iris-versicolor,
5.9,3,4.2,1.5,Iris-versicolor,
6,2.2,4,1,Iris-versicolor,
6.1,2.9,4.7,1.4,Iris-versicolor,
5.6,2.9,3.6,1.3,Iris-versicolor,
6.7,3.1,4.4,1.4,Iris-versicolor,
5.6,3,4.5,1.5,Iris-versicolor,
5.8,2.7,4.1,1,Iris-versicolor,
6.2,2.2,4.5,1.5,Iris-versicolor,
5.6,2.5,3.9,1.1,Iris-versicolor,
5.9,3.2,4.8,1.8,Iris-versicolor,
6.1,2.8,4,1.3,Iris-versicolor,
6.3,2.5,4.9,1.5,Iris-versicolor,
6.1,2.8,4.7,1.2,Iris-versicolor,
6.4,2.9,4.3,1.3,Iris-versicolor,
6.6,3,4.4,1.4,Iris-versicolor,
6.8,2.8,4.8,1.4,Iris-versicolor,
6.7,3,5,1.7,Iris-versicolor,
6,2.9,4.5,1.5,Iris-versicolor,
5.7,2.6,3.5,1,Iris-versicolor,
5.5,2.4,3.8,1.1,Iris-versicolor,
5.5,2.4,3.7,1,Iris-versicolor,
5.8,2.7,3.9,1.2,Iris-versicolor,
6,2.7,5.1,1.6,Iris-versicolor,
5.4,3,4.5,1.5,Iris-versicolor,
6,3.4,4.5,1.6,Iris-versicolor,
6.7,3.1,4.7,1.5,Iris-versicolor,
6.3,2.3,4.4,1.3,Iris-versicolor,
5.6,3,4.1,1.3,Iris-versicolor,
5.5,2.5,4,1.3,Iris-versicolor,
5.5,2.6,4.4,1.2,Iris-versicolor,
6.1,3,4.6,1.4,Iris-versicolor,
5.8,2.6,4,1.2,Iris-versicolor,
5,2.3,3.3,1,Iris-versicolor,
5.6,2.7,4.2,1.3,Iris-versicolor,
5.7,3,4.2,1.2,Iris-versicolor,
5.7,2.9,4.2,1.3,Iris-versicolor,
6.2,2.9,4.3,1.3,Iris-versicolor,
5.1,2.5,3,1.1,Iris-versicolor,
5.7,2.8,4.1,1.3,Iris-versicolor,
6.3,3.3,6,2.5,Iris-virginica,
5.8,2.7,5.1,1.9,Iris-virginica,
7.1,3,5.9,2.1,Iris-virginica,
6.3,2.9,5.6,1.8,Iris-virginica,
6.5,3,5.8,2.2,Iris-virginica,
7.6,3,6.6,2.1,Iris-virginica,
4.9,2.5,4.5,1.7,Iris-virginica,
7.3,2.9,6.3,1.8,Iris-virginica,
6.7,2.5,5.8,1.8,Iris-virginica,
7.2,3.6,6.1,2.5,Iris-virginica,
6.5,3.2,5.1,2,Iris-virginica,
6.4,2.7,5.3,1.9,Iris-virginica,
6.8,3,5.5,2.1,Iris-virginica,
5.7,2.5,5,2,Iris-virginica,
5.8,2.8,5.1,2.4,Iris-virginica,
6.4,3.2,5.3,2.3,Iris-virginica,
6.5,3,5.5,1.8,Iris-virginica,
7.7,3.8,6.7,2.2,Iris-virginica,
7.7,2.6,6.9,2.3,Iris-virginica,
6,2.2,5,1.5,Iris-virginica,
6.9,3.2,5.7,2.3,Iris-virginica,
5.6,2.8,4.9,2,Iris-virginica,
7.7,2.8,6.7,2,Iris-virginica,
6.3,2.7,4.9,1.8,Iris-virginica,
6.7,3.3,5.7,2.1,Iris-virginica,
7.2,3.2,6,1.8,Iris-virginica,
6.2,2.8,4.8,1.8,Iris-virginica,
6.1,3,4.9,1.8,Iris-virginica,
6.4,2.8,5.6,2.1,Iris-virginica,
7.2,3,5.8,1.6,Iris-virginica,
7.4,2.8,6.1,1.9,Iris-virginica,
7.9,3.8,6.4,2,Iris-virginica,
6.4,2.8,5.6,2.2,Iris-virginica,
6.3,2.8,5.1,1.5,Iris-virginica,
6.1,2.6,5.6,1.4,Iris-virginica,
7.7,3,6.1,2.3,Iris-virginica,
6.3,3.4,5.6,2.4,Iris-virginica,
6.4,3.1,5.5,1.8,Iris-virginica,
6,3,4.8,1.8,Iris-virginica,
6.9,3.1,5.4,2.1,Iris-virginica,
6.7,3.1,5.6,2.4,Iris-virginica,
6.9,3.1,5.1,2.3,Iris-virginica,
5.8,2.7,5.1,1.9,Iris-virginica,
6.8,3.2,5.9,2.3,Iris-virginica,
6.7,3.3,5.7,2.5,Iris-virginica,
6.7,3,5.2,2.3,Iris-virginica,
6.3,2.5,5,1.9,Iris-virginica,
6.5,3,5.2,2,Iris-virginica,
6.2,3.4,5.4,2.3,Iris-virginica,
5.9,3,5.1,1.8,Iris-virginica,
OUTPUT
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
confusion matrix is as follows
[[21 0 0]
[ 0 11 2]
[ 0 0 11]]
Accuracy metrics
precision recall f1-score support
9. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points.
Select appropriate data set for your experiment and draw graphs.
LOCALLY WEIGHTED REGRESSION
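The printed listing begins inside the robustness loop of the lowess() routine. A minimal sketch of the missing opening of that function (imports, the tricube neighbourhood weights, and the locally weighted least-squares fit) is given below as an assumption; the original lines that follow then continue the loop.

import numpy as np
from math import ceil
from scipy import linalg

def lowess(x, y, f=2. / 3., iter=3):
    n = len(x)
    r = int(ceil(f * n))                         # number of neighbours used for each local fit
    h = [np.sort(np.abs(x - x[i]))[r] for i in range(n)]
    w = np.clip(np.abs((x[:, None] - x[None, :]) / h), 0.0, 1.0)
    w = (1 - w ** 3) ** 3                        # tricube weights
    yest = np.zeros(n)
    delta = np.ones(n)
    for iteration in range(iter):
        for i in range(n):
            weights = delta * w[:, i]
            b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
            A = np.array([[np.sum(weights), np.sum(weights * x)],
                          [np.sum(weights * x), np.sum(weights * x * x)]])
            beta = linalg.solve(A, b)
            yest[i] = beta[0] + beta[1] * x[i]   # local linear estimate at x[i]
        # the robustifying step below down-weights points with large residuals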
        residuals = y - yest
        s = np.median(np.abs(residuals))
        delta = np.clip(residuals / (6.0 * s), -1, 1)
        delta = (1 - delta**2)**2
    return yest
if __name__ == '__main__':
import math
n = 100
x = np.linspace(0, 2 * math.pi, n)
print("==========================values of x=====================")
print(x)
y = np.sin(x) + 0.3*np.random.randn(n)
print("================================Values of y===================")
print(y)
f = 0.25
yest = lowess(x, y, f=f, iter=3)
import pylab as pl
pl.clf()
pl.plot(x, y, label='y noisy')
pl.plot(x, yest, label='y pred')
pl.legend()
pl.show()
OUTPUT
==========================values of x=====================
[0. 0.06346652 0.12693304 0.19039955 0.25386607 0.31733259
0.38079911 0.44426563 0.50773215 0.57119866 0.63466518 0.6981317
0.76159822 0.82506474 0.88853126 0.95199777 1.01546429 1.07893081
1.14239733 1.20586385 1.26933037 1.33279688 1.3962634 1.45972992