K-Nearest Neighbor On Python
Ken Ocuma
BSCS 3-A
In pattern recognition, the k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:
- In k-NN classification, the output is a class membership. An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor.
- In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.
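Both behaviors can be sketched with scikit-learn's `KNeighborsClassifier` and `KNeighborsRegressor`. This is a minimal illustration on hypothetical toy data (the 1-D points and labels below are made up for the example, not taken from the tutorial's dataset):

```python
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Toy 1-D feature space: six points with class labels and numeric targets
X = [[0], [1], [2], [3], [4], [5]]
y_class = [0, 0, 0, 1, 1, 1]          # class labels for classification
y_reg = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # numeric targets for regression

# Classification: majority vote among the k = 3 nearest neighbors
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y_class)
print(clf.predict([[1.2]]))  # neighbors are 1, 2, 0 -> majority class 0

# Regression: average of the targets of the k = 3 nearest neighbors
reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(X, y_reg)
print(reg.predict([[1.2]]))  # mean of targets 1.0, 2.0, 0.0 -> 1.0
```

With k = 1 the classifier would instead copy the label of the single nearest point, exactly as described above.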
k-NN is a type of instance-based learning, or lazy learning, where the
function is only approximated locally and all computation is deferred
until classification. The k-NN algorithm is among the simplest of all
machine learning algorithms.
After preprocessing and feature scaling, we fit our classifier to the training set and create a confusion matrix. Finally, we visualise the results.
# Data Preprocessing
# Feature Scaling
# (X_train and X_test are assumed to come from an earlier train/test split)
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)  # fit the scaler on the training data only
X_test = sc_X.transform(X_test)        # apply the same training statistics to the test data
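The fitting and confusion-matrix steps mentioned above can be sketched end to end. Since the tutorial's dataset is not shown here, this version substitutes a synthetic dataset from `make_classification`; the split ratio, k = 5, and the Minkowski metric with p = 2 (Euclidean distance) are assumed defaults, not taken from the original:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the tutorial's dataset
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Feature scaling, exactly as in the snippet above
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)

# Fit the k-NN classifier to the training set
classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classifier.fit(X_train, y_train)

# Confusion matrix on the test set
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
```

The confusion matrix is a 2x2 table whose rows are true classes and columns are predicted classes; the visualisation step would typically plot the decision regions or this matrix.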