18AIL78 - Lab Manual
1. Write a program to implement the k-Nearest Neighbour algorithm to classify the iris dataset. Print both correct and wrong predictions.
Excerpt of the dataset description printed from the loaded iris_dataset object:

... (MARSHALL%[email protected]) :Date: July, 1988

The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken from Fisher's paper. Note that it's the same as in R, but not as in the UCI Machine Learning Repository, which has two wrong data points.

This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.

References:
- Fisher, R.A. "The use of multiple measurements in taxonomic problems", Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to Mathematical Statistics" (John Wiley, NY, 1950).
- Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis. (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System Structure and Classification Rule for Recognition in Partially Exposed Environments". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al.'s AUTOCLASS II conceptual clustering system finds 3 classes in the data.
- Many, many more ...

'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],
'filename': 'iris.csv',
'data_module': 'sklearn.datasets.data'}
In [3]: print("\nIRIS DATASET", iris_dataset.target_names)
Out: IRIS DATASET ['setosa' 'versicolor' 'virginica']
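One way to complete the exercise is sketched below, assuming scikit-learn's load_iris, train_test_split and KNeighborsClassifier; the split ratio and the value of k are illustrative choices.

# k-Nearest Neighbour classification of the Iris dataset (sketch),
# printing both correct and wrong predictions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=1)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
predictions = knn.predict(X_test)

# Report each test sample and whether its prediction was correct or wrong.
for features, actual, predicted in zip(X_test, y_test, predictions):
    status = "Correct" if actual == predicted else "Wrong"
    print(status, "- features:", features,
          "actual:", iris.target_names[actual],
          "predicted:", iris.target_names[predicted])

print("Accuracy:", knn.score(X_test, y_test))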
2. Develop a program to apply the K-means algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering with the EM algorithm. Compare the results of the two algorithms and comment on the quality of clustering.
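A minimal sketch of the comparison, assuming a CSV file of numeric features ('data.csv' is a placeholder name) and scikit-learn's KMeans and GaussianMixture (EM for Gaussian mixtures); the number of clusters is an illustrative choice.

# K-means vs. EM (Gaussian mixture) clustering on a CSV dataset (sketch).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import adjusted_rand_score

data = pd.read_csv('data.csv')                 # placeholder file name
X = StandardScaler().fit_transform(data.values)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
kmeans_labels = kmeans.fit_predict(X)

gmm = GaussianMixture(n_components=3, random_state=0)   # EM algorithm
em_labels = gmm.fit_predict(X)

print("K-means labels:", kmeans_labels)
print("EM (GMM) labels:", em_labels)
# Agreement between the two partitions; higher means more similar clusterings.
print("Agreement (ARI):", adjusted_rand_score(kmeans_labels, em_labels))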
3. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points. Select an appropriate data set for your experiment and draw graphs.
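A sketch of Locally Weighted Regression implemented with NumPy; the noisy sine data set and the bandwidth tau = 0.5 are illustrative assumptions.

# Locally Weighted Regression: a separate weighted least-squares fit per query point (sketch).
import numpy as np
import matplotlib.pyplot as plt

def lwr_predict(x_query, X, y, tau=0.5):
    """Fit a local linear model around x_query using Gaussian kernel weights."""
    Xb = np.c_[np.ones(len(X)), X]                     # design matrix with bias column
    xq = np.array([1.0, x_query])
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))  # kernel weights
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y
    return xq @ theta

# Illustrative data set: a noisy sine curve.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 100)
y = np.sin(X) + rng.normal(scale=0.2, size=X.shape)

y_fit = np.array([lwr_predict(x, X, y, tau=0.5) for x in X])

plt.scatter(X, y, s=10, label='data')
plt.plot(X, y_fit, color='red', label='LWR fit')
plt.legend()
plt.show()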
4. Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
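A sketch of a two-layer network trained from scratch with backpropagation; the tiny sleep/study-hours data, learning rate, and layer sizes are illustrative assumptions and can be replaced with any suitable data set.

# A small fully connected network trained with backpropagation (sketch).
import numpy as np

X = np.array([[2, 9], [1, 5], [3, 6]], dtype=float)   # hours slept, hours studied
y = np.array([[92], [86], [89]], dtype=float)         # exam score
X = X / np.amax(X, axis=0)                             # normalise inputs
y = y / 100.0                                          # normalise outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(a):
    return a * (1 - a)

rng = np.random.default_rng(1)
W1, b1 = rng.uniform(size=(2, 3)), rng.uniform(size=(1, 3))   # input -> hidden
W2, b2 = rng.uniform(size=(3, 1)), rng.uniform(size=(1, 1))   # hidden -> output
lr = 0.1

for epoch in range(5000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: gradients of the squared error
    d_output = (y - output) * sigmoid_grad(output)
    d_hidden = d_output @ W2.T * sigmoid_grad(hidden)

    # Weight updates
    W2 += lr * hidden.T @ d_output
    b2 += lr * d_output.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ d_hidden
    b1 += lr * d_hidden.sum(axis=0, keepdims=True)

print("Predicted output:\n", output)
print("Actual output:\n", y)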
5. Demonstrate the Genetic algorithm by taking suitable data for any simple application.
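A sketch of a genetic algorithm maximising f(x) = x^2 over 5-bit strings (a classic textbook example); population size, mutation rate, and number of generations are illustrative parameters, and any simple application can be substituted.

# A minimal genetic algorithm: selection, crossover, and mutation (sketch).
import random

POP_SIZE, N_BITS, N_GEN, MUT_RATE = 6, 5, 20, 0.05

def fitness(chrom):
    return int(chrom, 2) ** 2            # decode the bit string and square it

def select(pop):
    # Tournament selection: pick the fitter of two random individuals.
    return max(random.sample(pop, 2), key=fitness)

def crossover(p1, p2):
    point = random.randint(1, N_BITS - 1)
    return p1[:point] + p2[point:]

def mutate(chrom):
    return ''.join(bit if random.random() > MUT_RATE else str(1 - int(bit))
                   for bit in chrom)

population = [''.join(random.choice('01') for _ in range(N_BITS))
              for _ in range(POP_SIZE)]

for gen in range(N_GEN):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    print(f"Generation {gen + 1}: best = {best} "
          f"(x = {int(best, 2)}, fitness = {fitness(best)})")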