In pattern recognition, the k-nearest neighbor algorithm (k-NN) is a method for classifying objects based on the closest training examples in the feature space. k-NN is a type of instance-based learning, or lazy learning, in which the function is only approximated locally and all computation is deferred until classification. The k-nearest neighbor algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common amongst its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of its nearest neighbor.
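As a concrete sketch of the majority-vote rule (written in Python with NumPy; the names knn_classify, train_X and train_y are illustrative, not taken from any particular library):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=3):
    """Classify `query` by a majority vote of its k nearest training points."""
    # Euclidean distance from the query to every stored training vector.
    dists = np.linalg.norm(train_X - query, axis=1)
    # Indices of the k closest training samples.
    nearest = np.argsort(dists)[:k]
    # Majority vote among their labels (k = 1 reduces to the nearest neighbor rule).
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy example: two classes in a two-dimensional feature space.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array(['blue', 'blue', 'red', 'red'])
print(knn_classify(train_X, train_y, np.array([0.8, 0.9]), k=3))  # -> 'red'
```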

The same method can be used for regression, by simply assigning the property value for the object to be the average of the values of its k nearest neighbors. It can be useful to weight the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones. (A common weighting scheme is to give each neighbor a weight of 1/d, where d is the distance to the neighbor. This scheme is a generalization of linear interpolation.)
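For example, with k = 3 and neighbors at distances 1, 2 and 4 whose known values are 5, 7 and 10, the 1/d weights are 1, 0.5 and 0.25, and the prediction is (1 × 5 + 0.5 × 7 + 0.25 × 10) / (1 + 0.5 + 0.25) = 11 / 1.75 ≈ 6.29, closer to the value of the nearest neighbor than the plain average (≈ 7.33) would be.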

The neighbors are taken from a set of objects for which the correct classification (or, in the case of regression, the value of the property) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. The k-nearest neighbor algorithm is sensitive to the local structure of the data.

Nearest neighbor rules in effect compute the decision boundary in an implicit manner. It is also possible to compute the decision boundary itself explicitly, and to do so in an efficient manner so that the computational complexity is a function of the boundary complexity.[1]

Algorithm

Figure: Example of k-NN classification. The test sample (green circle) should be classified either to the first class of blue squares or to the second class of red triangles. If k = 3 it is assigned to the second class because there are 2 triangles and only 1 square inside the inner circle. If k = 5 it is assigned to the first class (3 squares vs. 2 triangles inside the outer circle).

The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.

In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point.

Usually Euclidean distance is used as the distance metric; however, this is only applicable to continuous variables. In cases such as text classification, another metric such as the overlap metric (or Hamming distance) can be used. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood components analysis.
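As a small sketch of the two metrics mentioned above (euclidean and hamming are illustrative helper functions, not part of a specific library):

```python
import numpy as np

def euclidean(a, b):
    # Standard distance for continuous feature vectors.
    return np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def hamming(a, b):
    # Overlap metric for discrete features: count the positions where the values differ.
    return sum(x != y for x, y in zip(a, b))

print(euclidean([1.0, 2.0], [4.0, 6.0]))  # -> 5.0
print(hamming("karolin", "kathrin"))       # -> 3
```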

A drawback of the basic "majority voting" classification is that classes with more frequent examples tend to dominate the prediction of the new vector, because their sheer number makes them likely to appear among the k nearest neighbors.[2] One way to overcome this problem is to weight the classification by taking into account the distance from the test point to each of its k nearest neighbors.
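A minimal sketch of such distance-weighted voting, reusing the naming conventions of the earlier sketch and taking 1/d as one possible weighting:

```python
import numpy as np
from collections import defaultdict

def weighted_knn_classify(train_X, train_y, query, k=3, eps=1e-12):
    """Distance-weighted vote: closer neighbors count for more."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    scores = defaultdict(float)
    for i in nearest:
        # 1/d weighting; eps guards against a zero distance (exact match).
        scores[train_y[i]] += 1.0 / (dists[i] + eps)
    return max(scores, key=scores.get)
```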

k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel.[3][4]

Parameter selection

The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification, but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques, for example, cross-validation. The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm.
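A minimal sketch of choosing k by cross-validation, assuming scikit-learn is available; the candidate range of odd values of k is an arbitrary choice for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Score each candidate k by 5-fold cross-validation and keep the best one.
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in range(1, 26, 2)}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

Restricting the search to odd values of k also avoids tied votes in two-class problems (see below).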

The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling.[5] Another popular approach is to scale features by the mutual information of the training data with the training classes.
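Because k-NN compares raw distances, a feature with a large numeric range can dominate the others. A simple remedy (a sketch only, distinct from the evolutionary and mutual-information approaches cited above) is to standardize each feature using statistics estimated on the training data:

```python
import numpy as np

def standardize(train_X, test_X):
    # Estimate mean and standard deviation on the training data only,
    # then apply the same transform to the test data.
    mu = train_X.mean(axis=0)
    sigma = train_X.std(axis=0)
    sigma[sigma == 0] = 1.0  # avoid division by zero for constant features
    return (train_X - mu) / sigma, (test_X - mu) / sigma
```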

In binary (two-class) classification problems, it is helpful to choose k to be an odd number, as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.[6]

Properties

The naive version of the algorithm is easy to implement by computing the distances from the test sample to all stored vectors, but it is computationally intensive, especially when the size of the training set grows. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. Using an appropriate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets.
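For example, a k-d tree is one such structure; the sketch below uses SciPy's scipy.spatial.cKDTree, which answers k-nearest-neighbor queries without explicitly computing the distance to every stored vector:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train_X = rng.random((10000, 3))   # 10,000 stored training vectors in 3-D

tree = cKDTree(train_X)            # build the search structure once
dists, idx = tree.query(rng.random(3), k=5)  # distances and indices of the 5 nearest neighbors
```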

The nearest neighbor algorithm has some strong consistency results. As the amount of data approaches infinity, the algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).[7] k-nearest neighbor is guaranteed to approach the Bayes error rate, for some value of k (where k increases as a function of the number of data points). Various improvements to k-nearest neighbor methods are possible by using proximity graphs.[8]

For estimating continuous variables

The k-NN algorithm can also be adapted for use in estimating continuous variables. One such implementation uses an inverse distance weighted average of the k-nearest multivariate neighbors. This algorithm functions as follows:

  1. Compute the Euclidean or Mahalanobis distance from the query point to the labeled samples.
  2. Order the samples by increasing distance.
  3. Find a heuristically optimal number k of nearest neighbors, based on RMSE, by cross-validation.
  4. Calculate an inverse-distance-weighted average over the k nearest multivariate neighbors (a sketch follows this list).
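A minimal sketch of step 4 (the inverse-distance-weighted average itself), reusing the naming conventions of the earlier classification sketches; selecting k as in step 3 would wrap this function in a cross-validation loop:

```python
import numpy as np

def knn_regress(train_X, train_y, query, k=3, eps=1e-12):
    """Predict a continuous value as the inverse-distance-weighted average
    of the values of the k nearest neighbors."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)  # 1/d weights; eps guards against zero distance
    return np.dot(weights, train_y[nearest]) / weights.sum()
```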

Using a weighted k-NN can also significantly improve the results: the class (or value, in regression problems) of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance between that point and the point for which the class is to be predicted.

References

  1. ^ Bremner D, Demaine E, Erickson J, Iacono J, Langerman S, Morin P, Toussaint G (2005). "Output-sensitive algorithms for computing nearest-neighbor decision boundaries". Discrete and Computational Geometry 33 (4): 593–604. DOI:10.1007/s00454-004-1152-0. 
  2. ^ D. Coomans; D.L. Massart (1982). "Alternative k-nearest neighbour rules in supervised pattern recognition : Part 1. k-Nearest neighbour classification by using alternative voting rules". Analytica Chimica Acta 136: 15–27. DOI:10.1016/S0003-2670(01)95359-0. 
  3. ^ D. G. Terrell; D. W. Scott (1992). "Variable kernel density estimation". Annals of Statistics 20 (3): 1236–1265. DOI:10.1214/aos/1176348768. 
  4. ^ Mills, Peter. "Efficient statistical classification of satellite measurements". International Journal of Remote Sensing. 
  5. ^ Nigsch F, Bender A, van Buuren B, Tissen J, Nigsch E, Mitchell JB (2006). "Melting point prediction employing k-nearest neighbor algorithms and genetic parameter optimization". Journal of Chemical Information and Modeling 46 (6): 2412–2422. DOI:10.1021/ci060149f. PMID 17125183. 
  6. ^ Hall P, Park BU, Samworth RJ (2008). "Choice of neighbor order in nearest-neighbor classification". Annals of Statistics 36 (5): 2135–2152. DOI:10.1214/07-AOS537. 
  7. ^ Cover TM, Hart PE (1967). "Nearest neighbor pattern classification". IEEE Transactions on Information Theory 13 (1): 21–27. DOI:10.1109/TIT.1967.1053964. 
  8. ^ Toussaint GT (April 2005). "Geometric proximity graphs for improving nearest neighbor methods in instance-based learning and data mining". International Journal of Computational Geometry and Applications 15 (2): 101–150. DOI:10.1142/S0218195905001622. 
