AIML Lab


1. Implement the Find-S algorithm.

Output

The given training data set

['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']


['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same', 'Yes']
['Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change', 'No']
['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change', 'Yes']
The initial values of the hypothesis
['0', '0', '0', '0', '0', '0']

For training instance no 0 the hypothesis is ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same']

For training instance no 1 the hypothesis is ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']

For training instance no 2 the hypothesis is ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']

For training instance no 3 the hypothesis is ['Sunny', 'Warm', '?', 'Strong', '?', '?']
The maximally specific hypothesis is ['Sunny', 'Warm', '?', 'Strong', '?', '?']
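
The Find-S sketch below is one way output of this form can be produced: it starts from the most specific hypothesis and generalizes it on every positive example. The training data is hard-coded here rather than read from a file, which is an assumption.

# Find-S: start from the most specific hypothesis and generalize it
# on every positive example; negative examples are ignored.
data = [
    ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes'],
    ['Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same', 'Yes'],
    ['Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change', 'No'],
    ['Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change', 'Yes'],
]

print("The given training data set")
for row in data:
    print(row)

num_attributes = len(data[0]) - 1
hypothesis = ['0'] * num_attributes          # most specific hypothesis
print("The initial values of the hypothesis")
print(hypothesis)

for i, row in enumerate(data):
    if row[-1] == 'Yes':                     # only positive examples are used
        for j in range(num_attributes):
            if hypothesis[j] == '0':
                hypothesis[j] = row[j]       # first positive example copies the instance
            elif hypothesis[j] != row[j]:
                hypothesis[j] = '?'          # mismatch -> generalize this attribute
    print("For training instance no", i, "the hypothesis is", hypothesis)

print("The maximally specific hypothesis is", hypothesis)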
2. Implement and demonstrate the Candidate-Elimination algorithm to
output a description of the set of all hypotheses consistent with the training
examples.
Output

['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']


['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same', 'Yes']
['Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change', 'No']
['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change', 'Yes']
Initial hypothesis is
The most specific ['0', '0', '0', '0', '0', '0']
The most general ['?', '?', '?', '?', '?', '?']
The Candidate-Elimination algorithm

For instance 1 the hypothesis is S ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same']
For instance 1 the hypothesis is G ['?', '?', '?', '?', '?', '?']
For instance 2 the hypothesis is S ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For instance 2 the hypothesis is G ['?', '?', '?', '?', '?', '?']
For instance 3 the hypothesis is S ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For instance 3 the hypothesis is G ['?', '?', '?', '?', '?', '?']
For instance 3 the hypothesis is S ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For instance 3 the hypothesis is G [['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', 'Same']]
For instance 4 the hypothesis is S ['Sunny', 'Warm', '?', 'Strong', '?', '?']
For instance 4 the hypothesis is G [['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?']]
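
A possible sketch of the Candidate-Elimination algorithm for the same data follows. It maintains the specific boundary S and the general boundary G; the hard-coded data and the exact print format are assumptions.

# Candidate-Elimination: maintain the specific boundary S and the
# general boundary G, updating them on every training example.
data = [
    ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes'],
    ['Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same', 'Yes'],
    ['Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change', 'No'],
    ['Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change', 'Yes'],
]

n = len(data[0]) - 1
S = ['0'] * n                                # most specific hypothesis
G = [['?'] * n]                              # most general boundary (one hypothesis)

for idx, row in enumerate(data, start=1):
    x, label = row[:-1], row[-1]
    if label == 'Yes':
        # generalize S minimally so that it covers x
        for j in range(n):
            if S[j] == '0':
                S[j] = x[j]
            elif S[j] != x[j]:
                S[j] = '?'
        # remove from G every hypothesis that does not cover x
        G = [g for g in G if all(g[j] in ('?', x[j]) for j in range(n))]
    else:
        # specialize G minimally so that it excludes x, guided by S
        new_G = []
        for g in G:
            for j in range(n):
                if g[j] == '?' and S[j] != '?' and S[j] != x[j]:
                    h = g.copy()
                    h[j] = S[j]
                    new_G.append(h)
        G = new_G
    print("For instance", idx, "the hypothesis is S", S)
    print("For instance", idx, "the hypothesis is G", G)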
3. Write a program to demonstrate the working of the decision tree based ID3
algorithm. Use an appropriate data set for building the decision tree and
apply this knowledge to classify a new sample.
Output

Given Play Tennis Data Set :

Play Tennis Outlook Temperature Humidity wind


0 No Sunny Hot High Weak
1 No Sunny Hot High Strong
2 Yes Overcast Hot High Weak
3 Yes Rain Mild High Weak
4 Yes Rain Cold Normal Weak
5 No Rain Cold Normal Strong
6 Yes Overcast Cold Normal Strong
7 No Sunny Mild High Weak
8 Yes Sunny Cold Normal Weak
9 Yes Rain Mild Normal Weak
10 Yes Sunny Mild Normal Strong
11 Yes Overcast Mild High Strong
12 Yes Overcast Hot Normal Weak
13 No Rain Mild High Strong
List of attributes : ['Play Tennis', 'Outlook', 'Temperature', 'Humidity', 'wind']
Predicting attributes : ['Outlook', 'Temperature', 'Humidity', 'wind']
Gain= [0.2467498197744391, 0.029222565658954647, 0.15183550136234136, 0.04812703040826927]
Best attribute : Outlook
Gain= [0.01997309402197489, 0.01997309402197489, 0.9709505944546686]
Best attribute : wind
Gain= [0.5709505944546686, 0.9709505944546686, 0.01997309402197489]
Best attribute : Humidity

The Resultant Decision Tree is

{'Outlook': {'Overcast': 'Yes', 'Rain': {'wind': {'Strong': 'No', 'Weak': 'Yes'}}, 'Sunny': {'Humidity': {'High': 'No',
'Normal': 'Yes'}}}}
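A compact recursive ID3 sketch using pandas is shown below. The CSV file name PlayTennis.csv is an assumption; the column names match the listing above.

import math
import pandas as pd

# ID3: at every node choose the attribute with the highest information gain.
def entropy(labels):
    counts = labels.value_counts()
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts)

def info_gain(df, attr, target):
    base = entropy(df[target])
    weighted = sum((len(sub) / len(df)) * entropy(sub[target])
                   for _, sub in df.groupby(attr))
    return base - weighted

def id3(df, target, attributes):
    if df[target].nunique() == 1:            # all examples share a label -> leaf
        return df[target].iloc[0]
    if not attributes:                       # no attributes left -> majority label
        return df[target].mode()[0]
    gains = [info_gain(df, a, target) for a in attributes]
    print("Gain=", gains)
    best = attributes[gains.index(max(gains))]
    print("Best attribute :", best)
    tree = {best: {}}
    remaining = [a for a in attributes if a != best]
    for value, sub in df.groupby(best):
        tree[best][value] = id3(sub, target, remaining)
    return tree

df = pd.read_csv('PlayTennis.csv')           # assumed file name
print("Given Play Tennis Data Set :\n", df)
attributes = [c for c in df.columns if c != 'Play Tennis']
print("Predicting attributes :", attributes)
tree = id3(df, 'Play Tennis', attributes)
print("The Resultant Decision Tree is\n", tree)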
4. Implement the concept of Random Forest.
Output

Accuracy : 1.0
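
Since only the accuracy is reported above, the sketch below is one plausible version using scikit-learn's RandomForestClassifier on the Iris data; the data set, the train/test split and the random seed are assumptions.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Random Forest: an ensemble of decision trees trained on bootstrap samples.
X, y = load_iris(return_X_y=True)            # assumed data set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("Accuracy :", accuracy_score(y_test, y_pred))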
5. Build an Artificial Neural Network by implementing the Back propagation
algorithm and test the same using appropriate data sets.
Output

Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.89320245]
[0.88108734]
[0.89590999]]
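
A minimal NumPy backpropagation sketch for a small network is given below. The normalized inputs and targets match the values shown above; the hidden-layer size (3 neurons), learning rate and number of epochs are assumptions.

import numpy as np

# Backpropagation on a tiny 2-3-1 network (two input features -> one output).
X = np.array([[2, 9], [1, 5], [3, 6]], dtype=float)
y = np.array([[92], [86], [89]], dtype=float)
X = X / np.amax(X, axis=0)           # normalize inputs column-wise
y = y / 100                          # normalize outputs to [0, 1]

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_grad(a):
    return a * (1 - a)               # derivative expressed via the activation

np.random.seed(0)
epochs, lr = 5000, 0.1               # assumed training settings
wh = np.random.uniform(size=(2, 3))  # input -> hidden weights
bh = np.random.uniform(size=(1, 3))
wo = np.random.uniform(size=(3, 1))  # hidden -> output weights
bo = np.random.uniform(size=(1, 1))

for _ in range(epochs):
    # forward pass
    h = sigmoid(X @ wh + bh)
    out = sigmoid(h @ wo + bo)
    # backward pass: error signals at the output and hidden layers
    d_out = (y - out) * sigmoid_grad(out)
    d_h = (d_out @ wo.T) * sigmoid_grad(h)
    # weight updates
    wo += h.T @ d_out * lr
    bo += d_out.sum(axis=0, keepdims=True) * lr
    wh += X.T @ d_h * lr
    bh += d_h.sum(axis=0, keepdims=True) * lr

print("Input:\n", X)
print("Actual Output:\n", y)
print("Predicted Output:\n", out)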
6. Write a program to implement k-Nearest Neighbour algorithm to classify
the iris data set. Print both correct and wrong predictions.
Output
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]] [0 0 0 0 0]
(150, 4)
105
45
Accuracy : 0.9777777777777777
virginica
Predicted   actual
0 setosa setosa
1 versicolor versicolor
2 versicolor versicolor
3 setosa setosa
4 virginica virginica
5 versicolor versicolor
6 virginica virginica
7 setosa setosa
8 setosa setosa
9 virginica virginica
10 versicolor versicolor
11 setosa setosa
12 virginica virginica
13 versicolor versicolor
14 versicolor versicolor
15 setosa setosa
16 versicolor versicolor
17 versicolor versicolor
18 setosa setosa
19 setosa setosa
20 versicolor versicolor
21 versicolor versicolor
22 versicolor versicolor
23 setosa setosa
24 virginica virginica
25 versicolor versicolor
26 setosa setosa
27 setosa setosa
28 versicolor versicolor
29 virginica virginica
30 versicolor versicolor
31 virginica virginica
32 versicolor versicolor
33 virginica virginica
34 virginica virginica
35 setosa setosa
36 versicolor versicolor
37 setosa setosa
38 versicolor versicolor
39 virginica virginica
40 virginica virginica
41 setosa setosa
42 versicolor virginica
43 virginica virginica
44 versicolor versicolor
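
The k-NN sketch below uses scikit-learn and a 70/30 split, which matches the 105/45 sizes in the output; the value k = 3 and the absence of a fixed random seed are assumptions.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# k-NN: classify each test sample by the majority label of its k nearest neighbours.
iris = load_iris()
X, y = iris.data, iris.target
print(X[:5], y[:5])                  # first five samples and their labels
print(X.shape)                       # (150, 4)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print(len(X_train))                  # 105
print(len(X_test))                   # 45

model = KNeighborsClassifier(n_neighbors=3)   # assumed value of k
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Accuracy :", accuracy_score(y_test, y_pred))

# print correct and wrong predictions side by side
print("Predicted   actual")
for i, (p, a) in enumerate(zip(y_pred, y_test)):
    print(i, iris.target_names[p], iris.target_names[a])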
7. Demonstrate the working of SVM Classifier.
Output

Accuracy: 80.00%
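
Only the accuracy is shown above, so the following is a minimal SVM sketch with scikit-learn; the data set, the linear kernel and the split are assumptions.

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# SVM: find the maximum-margin hyperplane separating the classes.
X, y = datasets.load_iris(return_X_y=True)   # assumed data set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel='linear')                   # assumed kernel
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy: {:.2f}%".format(accuracy_score(y_test, y_pred) * 100))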
8. Write a program to implement the Naïve Bayesian Classifier for a sample
training data set stored as a .csv file. Compute the accuracy of the classifier,
considering a few test data sets.
Output

Size of dataset is: 768


537
{0: [(3.169971671388102, 2.968490634467706), (110.09631728045326, 24.9438417666739),
(68.51841359773371, 16.964211184506688), (20.339943342776206, 14.964276286192664),
(75.90084985835693, 108.60669149171099), (30.325495750708214, 7.679438777553482),
(0.44276770538243626, 0.31251627662993847), (31.192634560906516, 11.626881671986274)], 1:
[(4.706521739130435, 3.822676210374205), (141.8586956521739, 31.727694379502378),
(69.57608695652173, 22.829725467738065), (21.945652173913043, 17.350368340169556),
(106.78804347826087, 147.7336388673267), (35.544021739130436, 7.241265106958286),
(0.5269347826086956, 0.357688199669022), (36.65217391304348, 11.060128540650384)]}
Accuracy: 74.02597402597402
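
The class summaries above are (mean, standard deviation) pairs for eight numeric attributes, which suggests a Gaussian Naïve Bayes built from scratch on a 768-row CSV file (for example the Pima diabetes data). The sketch below follows that pattern; the file name diabetes.csv and the 70/30 split are assumptions.

import csv
import math
import random

# Gaussian Naive Bayes: summarize each class by the mean and standard
# deviation of every attribute, then classify with the Gaussian density.
def load_csv(filename):
    with open(filename) as f:                # assumed: numeric rows, no header
        return [[float(x) for x in row] for row in csv.reader(f) if row]

def mean_std(col):
    m = sum(col) / len(col)
    var = sum((x - m) ** 2 for x in col) / (len(col) - 1)
    return m, math.sqrt(var)

def summarize_by_class(rows):
    grouped = {}
    for row in rows:
        grouped.setdefault(int(row[-1]), []).append(row[:-1])
    return {cls: [mean_std(col) for col in zip(*vectors)]
            for cls, vectors in grouped.items()}

def gaussian(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

def predict(summaries, priors, row):
    best_cls, best_prob = None, -1.0
    for cls, stats in summaries.items():
        prob = priors[cls]                   # class prior times attribute densities
        for value, (m, s) in zip(row, stats):
            prob *= gaussian(value, m, s)
        if prob > best_prob:
            best_cls, best_prob = cls, prob
    return best_cls

dataset = load_csv('diabetes.csv')           # assumed file name
print("Size of dataset is:", len(dataset))
random.shuffle(dataset)
split = int(0.7 * len(dataset))
train, test = dataset[:split], dataset[split:]
print(len(train))

summaries = summarize_by_class(train)
priors = {cls: sum(1 for r in train if int(r[-1]) == cls) / len(train) for cls in summaries}
print(summaries)
correct = sum(predict(summaries, priors, row[:-1]) == int(row[-1]) for row in test)
print("Accuracy:", 100.0 * correct / len(test))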
9. Apply the EM algorithm to cluster a set of data. Use the same data set for
clustering using the k-Means algorithm. Compare the results of these two
algorithms and comment on the quality of clustering.
Output

Input Data and Shape
(150, 2)
    V1   V2
0  5.1  3.5
1  4.9  3.0
2  4.7  3.2
3  4.6  3.1
4  5.0  3.6
X [[5.1 3.5]
 [4.9 3. ]
 [4.7 3.2]
 ...
 [6.5 3. ]
 [6.2 3.4]
 [5.9 3. ]]

Graph for whole dataset

Centroids [[5.425      3.5       ]
 [7.16666667 3.15      ]
 [5.54285714 2.45714286]
 [6.17142857 2.35714286]
 [6.48       2.84      ]
 [4.68571429 3.34285714]
 [7.8        3.8       ]
 [5.52857143 4.04285714]
 [6.05       2.94166667]
 [5.01666667 2.4       ]
 [4.42       3.04      ]
 [5.14       3.76      ]
 [7.62       2.84      ]
 [4.88888889 3.06666667]
 [6.31111111 3.27777778]
 [5.6        2.94285714]
 [6.77692308 3.12307692]
 [5.77272727 2.70909091]
 [4.5        2.3       ]
 [5.06666667 3.44166667]]

Graph using Kmeans Algorithm

Graph using EM Algorithm
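
A sketch that clusters a two-column data set with both k-Means and EM (Gaussian Mixture) and plots the three graphs named above is given below; the file name, the number of clusters and the plotting details are assumptions.

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Compare k-Means (hard assignments) with EM / Gaussian Mixture (soft assignments).
df = pd.read_csv('iris_2d.csv')              # assumed file with columns V1, V2
print("Input Data and Shape")
print(df.shape)
print(df.head())
X = df[['V1', 'V2']].values
print("X", X)

plt.scatter(X[:, 0], X[:, 1])
plt.title("Graph for whole dataset")
plt.show()

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # assumed cluster count
print("Centroids", kmeans.cluster_centers_)
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
plt.title("Graph using Kmeans Algorithm")
plt.show()

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)      # EM-based clustering
plt.scatter(X[:, 0], X[:, 1], c=gmm.predict(X))
plt.title("Graph using EM Algorithm")
plt.show()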


10. Demonstrate the working of Density-Based Clustering (the DBSCAN algorithm).
Output

DBSCAN Labels: [ 0 0 0 1 1 1 1 1 -1 0 0 0]
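
The sketch below runs scikit-learn's DBSCAN on a small hand-made 2-D data set and prints labels of the same form (two clusters plus one noise point labelled -1); the sample points, eps and min_samples are assumptions.

import numpy as np
from sklearn.cluster import DBSCAN

# DBSCAN: density-based clustering; points in sparse regions get label -1 (noise).
X = np.array([[1, 2], [2, 2], [2, 3],        # assumed dense group A
              [8, 7], [8, 8], [7, 8], [7, 7], [8, 6],   # assumed dense group B
              [25, 80],                      # assumed isolated outlier
              [1, 3], [2, 4], [3, 3]], dtype=float)     # more points near group A

db = DBSCAN(eps=2, min_samples=3).fit(X)     # assumed parameters
print("DBSCAN Labels:", db.labels_)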
