AIML Lab
1. Implement and demonstrate the FIND-S algorithm for finding the most specific hypothesis based on a given set of training examples.
Output
For training instance no 0 the hypothesis is ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same']
For training instance no 1 the hypothesis is ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For training instance no 2 the hypothesis is ['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For training instance no 3 the hypothesis is ['Sunny', 'Warm', '?', 'Strong', '?', '?']
The maximally specific hypothesis is ['Sunny', 'Warm', '?', 'Strong', '?', '?']
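A minimal Find-S sketch that produces a trace in this format; the classic EnjoySport training examples are inlined here for illustration (a lab run would typically read them from a .csv file):

# Classic EnjoySport training data, inlined for illustration.
examples = [
    (['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'], 'Yes'),
    (['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'], 'Yes'),
    (['Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'], 'No'),
    (['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'], 'Yes'),
]

hypothesis = None
for i, (attrs, label) in enumerate(examples):
    if label == 'Yes':
        if hypothesis is None:
            # Initialize with the first positive example.
            hypothesis = list(attrs)
        else:
            # Generalize every attribute that disagrees with the example.
            hypothesis = [h if h == a else '?' for h, a in zip(hypothesis, attrs)]
    # Negative examples are ignored by Find-S, so the hypothesis is unchanged.
    print(f"For training instance no {i} the hypothesis is {hypothesis}")

print(f"The maximally specific hypothesis is {hypothesis}")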
2. Implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples.
Output
For instance 1 the hypothesis is S['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same']
For instance 1 the hypothesis is G['?', '?', '?', '?', '?', '?']
For instance 2 the hypothesis is S['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For instance 2 the hypothesis is G['?', '?', '?', '?', '?', '?']
For instance 3 the hypothesis is S['Sunny', 'Warm', '?', 'Strong', 'Warm', 'Same']
For instance 3 the hypothesis is G[['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', 'Same']]
For instance 4 the hypothesis is S['Sunny', 'Warm', '?', 'Strong', '?', '?']
For instance 4 the hypothesis is G[['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?']]
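A compact Candidate-Elimination sketch over the same EnjoySport data; it uses the usual classroom simplification of keeping one specialization row per attribute in G, so the printed trace differs slightly in shape from the one above:

concepts = [
    ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'],
    ['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'],
    ['Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'],
    ['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'],
]
target = ['Yes', 'Yes', 'No', 'Yes']

n = len(concepts[0])
specific = list(concepts[0])             # S: most specific boundary
general = [['?'] * n for _ in range(n)]  # G: one candidate row per attribute

for i, instance in enumerate(concepts):
    if target[i] == 'Yes':
        # Positive example: generalize S where it disagrees, relax G to match.
        for j in range(n):
            if instance[j] != specific[j]:
                specific[j] = '?'
                general[j][j] = '?'
    else:
        # Negative example: specialize G against the current S.
        for j in range(n):
            general[j][j] = specific[j] if instance[j] != specific[j] else '?'
    print(f"For instance {i + 1} the hypothesis is S{specific}")
    print(f"For instance {i + 1} the hypothesis is G{general}")

final_g = [g for g in general if g != ['?'] * n]  # drop fully general rows
print("Final S:", specific)
print("Final G:", final_g)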
3. Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
Output
{'Outlook': {'Overcast': 'Yes', 'Rain': {'wind': {'Strong': 'No', 'Weak': 'Yes'}}, 'Sunny': {'Humidity': {'High': 'No', 'Normal': 'Yes'}}}}
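A self-contained ID3 sketch on the standard PlayTennis data (inlined here; the attribute names and the new sample are assumptions), building the same kind of nested-dict tree and classifying a new sample with it:

import math
from collections import Counter

# PlayTennis data: each row is the attribute values plus the class label.
attrs = ['Outlook', 'Temperature', 'Humidity', 'Wind']
rows = [
    ['Sunny', 'Hot', 'High', 'Weak', 'No'],
    ['Sunny', 'Hot', 'High', 'Strong', 'No'],
    ['Overcast', 'Hot', 'High', 'Weak', 'Yes'],
    ['Rain', 'Mild', 'High', 'Weak', 'Yes'],
    ['Rain', 'Cool', 'Normal', 'Weak', 'Yes'],
    ['Rain', 'Cool', 'Normal', 'Strong', 'No'],
    ['Overcast', 'Cool', 'Normal', 'Strong', 'Yes'],
    ['Sunny', 'Mild', 'High', 'Weak', 'No'],
    ['Sunny', 'Cool', 'Normal', 'Weak', 'Yes'],
    ['Rain', 'Mild', 'Normal', 'Weak', 'Yes'],
    ['Sunny', 'Mild', 'Normal', 'Strong', 'Yes'],
    ['Overcast', 'Mild', 'High', 'Strong', 'Yes'],
    ['Overcast', 'Hot', 'Normal', 'Weak', 'Yes'],
    ['Rain', 'Mild', 'High', 'Strong', 'No'],
]

def entropy(labels):
    total = len(labels)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(labels).values())

def id3(rows, attrs):
    labels = [r[-1] for r in rows]
    if len(set(labels)) == 1:                     # pure node -> leaf
        return labels[0]
    if not attrs:                                 # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]

    def gain(i):
        # Information gain of splitting on attribute column i.
        remainder = 0.0
        for v in set(r[i] for r in rows):
            subset = [r for r in rows if r[i] == v]
            remainder += len(subset) / len(rows) * entropy([r[-1] for r in subset])
        return entropy(labels) - remainder

    best = max(range(len(attrs)), key=gain)
    tree = {}
    for v in set(r[best] for r in rows):
        # Recurse on the subset, dropping the chosen attribute column.
        subset = [r[:best] + r[best + 1:] for r in rows if r[best] == v]
        tree[v] = id3(subset, attrs[:best] + attrs[best + 1:])
    return {attrs[best]: tree}

def classify(tree, attrs, sample):
    # Walk the nested dict until a leaf label is reached.
    while isinstance(tree, dict):
        attr = next(iter(tree))
        tree = tree[attr][sample[attrs.index(attr)]]
    return tree

tree = id3(rows, attrs)
print(tree)
print(classify(tree, attrs, ['Sunny', 'Cool', 'High', 'Strong']))  # -> 'No'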
4. Implement the concept of Random Forest.
Output
Accuracy : 1.0
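A minimal sketch using scikit-learn's RandomForestClassifier; the Iris dataset, split ratio, and random seed are assumptions, since the record only shows the accuracy figure:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Iris is assumed here; any labeled dataset works the same way.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# An ensemble of 100 trees, each trained on a bootstrap sample.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Accuracy :", accuracy_score(y_test, model.predict(X_test)))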
5. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
Output
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.89320245]
[0.88108734]
[0.89590999]]
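A numpy sketch of backpropagation on the classic (hours slept, hours studied) -> exam score toy data, whose normalized form matches the Input block above; the layer sizes, learning rate, and epoch count are assumptions:

import numpy as np

# Toy data: (hours slept, hours studied) -> exam score.
X = np.array([[2, 9], [1, 5], [3, 6]], dtype=float)
y = np.array([[92], [86], [89]], dtype=float)
X = X / np.amax(X, axis=0)   # scale each feature column to [0, 1]
y = y / 100                  # scale targets to [0, 1]

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_grad(s):
    return s * (1 - s)       # derivative expressed via the sigmoid output

np.random.seed(0)
epochs, lr = 5000, 0.1
wh = np.random.uniform(size=(2, 3))   # input -> hidden weights
bh = np.random.uniform(size=(1, 3))
wo = np.random.uniform(size=(3, 1))   # hidden -> output weights
bo = np.random.uniform(size=(1, 1))

for _ in range(epochs):
    # Forward pass.
    h = sigmoid(X @ wh + bh)
    out = sigmoid(h @ wo + bo)
    # Backward pass: propagate the output error through the network.
    d_out = (y - out) * sigmoid_grad(out)
    d_h = (d_out @ wo.T) * sigmoid_grad(h)
    # Weight updates.
    wo += h.T @ d_out * lr
    bo += d_out.sum(axis=0, keepdims=True) * lr
    wh += X.T @ d_h * lr
    bh += d_h.sum(axis=0, keepdims=True) * lr

print("Input:"); print(X)
print("Actual Output:"); print(y)
print("Predicted Output:"); print(out)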
6. Write a program to implement k-Nearest Neighbour algorithm to classify
the iris data set. Print both correct and wrong predictions.
Output
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]] [0 0 0 0 0]
(150, 4)
105
45
Accuracy : 0.9777777777777777
virginica
Predicted actual
0 setosa setosa
1 versicolor versicolor
2 versicolor versicolor
3 setosa setosa
4 virginica virginica
5 versicolor versicolor
6 virginica virginica
7 setosa setosa
8 setosa setosa
9 virginica virginica
10 versicolor versicolor
11 setosa setosa
12 virginica virginica
13 versicolor versicolor
14 versicolor versicolor
15 setosa setosa
16 versicolor versicolor
17 versicolor versicolor
18 setosa setosa
19 setosa setosa
20 versicolor versicolor
21 versicolor versicolor
22 versicolor versicolor
23 setosa setosa
24 virginica virginica
25 versicolor versicolor
26 setosa setosa
27 setosa setosa
28 versicolor versicolor
29 virginica virginica
30 versicolor versicolor
31 virginica virginica
32 versicolor versicolor
33 virginica virginica
34 virginica virginica
35 setosa setosa
36 versicolor versicolor
37 setosa setosa
38 versicolor versicolor
39 virginica virginica
40 virginica virginica
41 setosa setosa
42 versicolor virginica
43 virginica virginica
44 versicolor versicolor
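A scikit-learn sketch that prints in the same order as the output above (data preview, shape, split sizes, accuracy, a new-sample prediction, then the predicted/actual table); the new sample's feature values and k=3 are assumptions:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()
print(iris.data[:5], iris.target[:5])   # first five samples and their labels
print(iris.data.shape)                  # (150, 4)

X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3)
print(len(X_train))                     # 105 training samples
print(len(X_test))                      # 45 test samples

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
pred = knn.predict(X_test)
print("Accuracy :", accuracy_score(y_test, pred))

# Classify one new sample (feature values assumed for illustration).
print(iris.target_names[knn.predict([[6.5, 3.0, 5.5, 2.0]])[0]])

print("Predicted actual")
for i, (p, a) in enumerate(zip(pred, y_test)):
    # A wrong prediction shows up as a mismatch between the two names.
    print(i, iris.target_names[p], iris.target_names[a])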
7. Demonstrate the working of SVM Classifier.
Output
Accuracy: 80.00%
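A minimal SVC sketch; the dataset, kernel, and split parameters are assumptions, since the record only shows the accuracy figure:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Iris is assumed here for demonstration.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

clf = SVC(kernel='linear')      # linear kernel assumed
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"Accuracy: {acc * 100:.2f}%")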
8. Write a program to implement the Naïve Bayesian Classifier for a sample training data set stored as a .csv file. Compute the accuracy of the classifier, considering a few test data sets.
Output
DBSCAN Labels: [ 0  0  0  1  1  1  1  1 -1  0  0  0]
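A minimal Gaussian Naive Bayes sketch following the program statement; 'data.csv' is a placeholder for the sample training file, assumed to hold numeric feature columns with the class label in the last column:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# 'data.csv' is a placeholder name: features first, class label last.
df = pd.read_csv('data.csv')
X, y = df.iloc[:, :-1], df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

nb = GaussianNB()
nb.fit(X_train, y_train)
print("Accuracy :", accuracy_score(y_test, nb.predict(X_test)))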