
Types of Algorithms That Can Be Used in Machine Learning

K Nearest Neighbor
Source: http://www.saedsayad.com/k_nearest_neighbors.htm
K nearest neighbor is an algorithm that stores all available cases and classifies new cases based on a similarity (distance) measure.
Algorithm
A case is classified by a majority vote of its neighbors: it is assigned to the class most common among its k nearest neighbors, where nearness is measured by a distance function. If k = 1, the case is simply assigned to the class of its single nearest neighbor. The first three distance functions below are valid only for continuous variables.

Euclidean:

\sqrt{\sum_{i=1}^{k} (x_i - y_i)^2}

Manhattan:

\sum_{i=1}^{k} |x_i - y_i|

Minkowski:

\left( \sum_{i=1}^{k} |x_i - y_i|^q \right)^{1/q}

For discrete (categorical) variables, the Hamming distance must be used instead. When the data set contains a mixture of numerical and categorical variables, the numerical variables should first be standardized to the range 0 to 1.

Hamming Distance:

D_H = \sum_{i=1}^{k} |x_i - y_i|

where each attribute contributes |x_i - y_i| = 0 if x_i = y_i and 1 if x_i \neq y_i.
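The distance functions and the majority-vote classification rule above can be sketched in Python. The function names and the toy data are illustrative assumptions, not from the source:

```python
import math
from collections import Counter

def euclidean(x, y):
    # square root of the sum of squared differences (continuous variables)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def manhattan(x, y):
    # sum of absolute differences (continuous variables)
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

def minkowski(x, y, q=3):
    # generalizes Euclidean (q=2) and Manhattan (q=1)
    return sum(abs(xi - yi) ** q for xi, yi in zip(x, y)) ** (1 / q)

def hamming(x, y):
    # counts mismatched categorical attributes: 0 per match, 1 per mismatch
    return sum(0 if xi == yi else 1 for xi, yi in zip(x, y))

def knn_classify(train, query, k=3, dist=euclidean):
    # train: list of (feature_vector, label) pairs;
    # classify by majority vote among the k nearest stored cases
    neighbors = sorted(train, key=lambda case: dist(case[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

With k = 1 this reduces to assigning the query the label of its single nearest stored case, as described above.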

SOM (Self-Organizing Maps)
Source: John A. Bullinaria, 2004; Introduction to Neural Networks, Lecture 17
A SOM, also known as a Kohonen network, has a feed-forward structure with a single computational layer of neurons arranged in rows and columns. Each neuron is fully connected to all the source units in the input layer.

A one-dimensional map has just a single row or column in the computational layer.
Algorithm
The stages of the SOM algorithm that achieves this can be summarized as follows:
1. Initialization: choose random values for the initial weight vectors w_j.
2. Sampling: draw a sample training input vector x from the input space.
3. Matching: find the winning neuron I(x) whose weight vector is closest to the input vector.
4. Updating: apply the weight update equation

\Delta w_{ij} = \eta(t) \, T_{j,I(x)}(t) \, (x_i - w_{ij})

where T_{j,I(x)}(t) is a Gaussian neighborhood function centered on the winning neuron and \eta(t) is the learning rate.
5. Continuation: keep returning to step 2 until the feature map stops changing.
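The five stages above can be sketched as a minimal one-dimensional SOM. The grid size, learning-rate schedule, and neighborhood-width schedule are illustrative assumptions, not values from the lecture:

```python
import math
import random

def train_som(data, n_neurons=10, dim=2, n_iters=500, eta0=0.5, sigma0=3.0):
    """Minimal 1-D SOM sketch: initialize, sample, match, update, continue.
    All hyperparameters are illustrative assumptions."""
    random.seed(0)
    # 1. Initialization: random values for the initial weight vectors w_j
    weights = [[random.random() for _ in range(dim)] for _ in range(n_neurons)]
    for t in range(n_iters):
        # 2. Sampling: draw a training input vector x from the input space
        x = random.choice(data)
        # 3. Matching: winning neuron I(x) with weight vector closest to x
        win = min(range(n_neurons),
                  key=lambda j: sum((x[i] - weights[j][i]) ** 2 for i in range(dim)))
        # decaying learning rate eta(t) and neighborhood width sigma(t)
        eta = eta0 * math.exp(-t / n_iters)
        sigma = sigma0 * math.exp(-2.0 * t / n_iters)
        for j in range(n_neurons):
            # Gaussian neighborhood T_{j,I(x)}(t) on the 1-D map
            T = math.exp(-((j - win) ** 2) / (2 * sigma ** 2))
            # 4. Updating: w_ij += eta(t) * T_{j,I(x)}(t) * (x_i - w_ij)
            for i in range(dim):
                weights[j][i] += eta * T * (x[i] - weights[j][i])
        # 5. Continuation: the loop returns to step 2 for the next sample
    return weights
```

Because the neighborhood shrinks over time, neighboring neurons on the map end up with similar weight vectors, which is what gives the map its topographic ordering.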
