K-Nearest Neighbor (KNN)

The K-Nearest Neighbor (KNN) algorithm is a simple supervised learning technique used for classification and regression, primarily classifying new data points based on their similarity to existing data. It operates by storing all available data and calculating the Euclidean distance to determine the nearest neighbors, with the most common category among them being assigned to the new data. While KNN is easy to implement and robust to noise, it requires careful selection of the parameter K and can be computationally intensive.


K-Nearest Neighbor (KNN) Algorithm for Machine Learning

● K-Nearest Neighbour is one of the simplest machine learning algorithms, based on the supervised learning technique.
● The K-NN algorithm assumes similarity between the new case/data and the available cases, and puts the new case into the category that is most similar to the available categories.
● The K-NN algorithm stores all the available data and classifies a new data point based on its similarity to the stored data. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.
● The K-NN algorithm can be used for regression as well as classification, but it is mostly used for classification problems.
● K-NN is a non-parametric algorithm, which means it makes no assumptions about the underlying data.
● It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs the actual computation at the time of classification.
● At the training phase, the KNN algorithm just stores the dataset; when it receives new data, it classifies that data into the category it is most similar to. A small sketch of how this similarity is measured is given below.
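
As a small illustration of how "similarity" is usually measured, the following Python sketch (with made-up example points) computes the Euclidean distance from a new point to a few stored points and orders the stored points from most to least similar:

```python
import math

# Euclidean distance between two points a and b:
# d(a, b) = sqrt((a1 - b1)^2 + (a2 - b2)^2 + ...)
def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical stored data points with their known categories.
stored = [((1.0, 2.0), "A"), ((2.0, 3.0), "A"), ((6.0, 5.0), "B")]
new_point = (1.5, 2.5)

# Sort the stored points by how close (similar) they are to the new point.
by_similarity = sorted(stored, key=lambda item: euclidean_distance(new_point, item[0]))
for point, label in by_similarity:
    print(point, label, round(euclidean_distance(new_point, point), 3))
```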
Need for the K-NN Algorithm
Suppose a new data point appears that could belong to either of two existing categories. To decide which category it belongs to, we need an algorithm that compares the new point with the most similar existing points, and K-NN does exactly this.
Working of KNN

● Step-1: Select the number K of neighbours.
● Step-2: Calculate the Euclidean distance between the new data point and the points in the training data.
● Step-3: Take the K nearest neighbours according to the calculated Euclidean distances.
● Step-4: Among these K neighbours, count the number of data points in each category.
● Step-5: Assign the new data point to the category with the maximum number of neighbours.
● Step-6: Our model is ready. (A minimal from-scratch sketch of these steps is given below.)
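
The following is a minimal Python sketch of these six steps, using a made-up toy dataset and an illustrative value of K. Training only stores the data (the "lazy learner" behaviour described above), and prediction computes Euclidean distances, takes the K nearest neighbours, and assigns the majority category:

```python
import math
from collections import Counter

def euclidean_distance(a, b):
    # Step-2: Euclidean distance between two points.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class SimpleKNN:
    def __init__(self, k=5):
        # Step-1: select the number K of neighbours.
        self.k = k

    def fit(self, X, y):
        # "Training" just stores the dataset (lazy learning).
        self.X = list(X)
        self.y = list(y)
        return self

    def predict_one(self, point):
        # Steps 2-3: compute distances and take the K nearest neighbours.
        order = sorted(range(len(self.X)),
                       key=lambda i: euclidean_distance(point, self.X[i]))
        nearest = order[: self.k]
        # Steps 4-5: count the categories among the neighbours and pick the majority.
        votes = Counter(self.y[i] for i in nearest)
        return votes.most_common(1)[0][0]

# Step-6: the model is ready to use (toy data for illustration only).
X_train = [(1, 1), (2, 1), (1, 2), (6, 5), (7, 6), (6, 7)]
y_train = ["A", "A", "A", "B", "B", "B"]
model = SimpleKNN(k=3).fit(X_train, y_train)
print(model.predict_one((2, 2)))   # expected "A"
print(model.predict_one((6, 6)))   # expected "B"
```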
How to select the value of K in the K-NN Algorithm?

● There is no particular way to determine the best value for "K", so we need to try several values and pick the one that works best. A commonly used default value is K=5. (A simple way to compare candidate values is sketched below.)
● A very low value of K, such as K=1 or K=2, can be noisy and makes the model sensitive to outliers.
● Larger values of K smooth out noise, but if K is too large the neighbourhood may include points from other categories, making the predictions less precise.
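
One common way to choose K in practice is to try several values and compare their cross-validated accuracy. The sketch below assumes scikit-learn is installed and uses its built-in Iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try a range of odd K values and report the mean cross-validated accuracy of each.
for k in range(1, 16, 2):
    model = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"K={k:2d}  mean accuracy={scores.mean():.3f}")
```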
Advantages of KNN Algorithm:
● It is simple to implement.
● It is robust to noisy training data.
● It can be more effective when the training data is large.

Disadvantages of KNN Algorithm:

● The value of K always needs to be determined, which can be complex at times.
● The computation cost is high, because the distance to every training sample must be calculated for each new data point.
Steps to implement the K-NN algorithm:

● Data pre-processing step
● Fitting the K-NN algorithm to the training set
● Predicting the test result
● Test accuracy of the result (creation of a confusion matrix)
● Visualizing the test set result (a rough end-to-end sketch of the preceding steps follows below)
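
Assuming scikit-learn is available, the implementation steps above could look roughly like the sketch below. The Iris dataset stands in for whatever data is actually used, and the visualization step is omitted:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Data pre-processing step: split into training/test sets and scale the features.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Fitting the K-NN algorithm to the training set (p=2 gives the Euclidean distance).
classifier = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=2)
classifier.fit(X_train, y_train)

# Predicting the test result.
y_pred = classifier.predict(X_test)

# Test accuracy of the result (creation of a confusion matrix).
print(confusion_matrix(y_test, y_pred))
print("Accuracy:", accuracy_score(y_test, y_pred))
```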
