Kohonen Self Organizing Feature Map Algorithm

Notes on the Kohonen self-organizing feature map algorithm

Let’s delve into the world of Kohonen Self-Organizing Feature Maps (also known as SOMs or Kohonen maps). These maps are a type of artificial neural network, introduced by Teuvo Kohonen in the early 1980s and inspired by biological models of neural systems from the 1970s. Here’s how they work:

1. Overview:

- A Self-Organizing Map (SOM) is an unsupervised learning approach used for clustering and dimensionality reduction.
- It maps high-dimensional data onto a lower-dimensional grid, making complex problems more interpretable.
- SOMs are particularly useful for visualizing and understanding data distributions.

2. Architecture:

- A SOM consists of two layers:
  - Input layer: represents the input features.
  - Output layer: a grid of neurons (also called nodes or units).
- Neurons in the output layer are arranged in a regular grid, usually 1D or 2D (occasionally 3D); a small sketch of this layout follows.
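
To make the layout concrete, here is a minimal sketch (assuming NumPy; the grid and feature sizes are hypothetical, chosen only for illustration) of how a 2D output grid’s weights can be stored:

import numpy as np

# Hypothetical sizes, for illustration only.
n_features = 4                 # length of each input vector (input layer)
grid_rows, grid_cols = 5, 5    # 5x5 grid of output neurons

# Each output neuron has one weight per input feature, so the whole map
# is a (grid_rows, grid_cols, n_features) array, initialized randomly.
rng = np.random.default_rng(0)
weights = rng.random((grid_rows, grid_cols, n_features))

print(weights.shape)  # (5, 5, 4)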

3. Training Process:

- Initialize the weights of the neurons randomly.
- Present an input vector to the network.
- Compute each output neuron’s distance to the input (its activation).
- Select the neuron whose weight vector is closest to the input: the “winning” neuron, also called the Best Matching Unit (BMU).
- Update the weights of the winner and its grid neighbors, pulling them toward the input; the winner’s influence on a neighbor decreases with grid distance.
- Repeat this process iteratively, typically while decaying the learning rate and neighborhood radius, until convergence. (A sketch of one such training step follows this list.)
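
The listing at the end of these notes updates only the winning neuron. A fuller sketch of one training step, including the neighborhood update described above (a Gaussian neighborhood is assumed here as one common choice, and the function name train_step is hypothetical):

import numpy as np

def train_step(weights, x, alpha, sigma):
    # weights: (rows, cols, n_features) grid of weight vectors
    # x: one input vector of length n_features
    rows, cols, _ = weights.shape

    # 1. Find the Best Matching Unit (closest weight vector to x).
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)

    # 2. Update every neuron, scaled by a Gaussian neighborhood factor
    #    that shrinks with grid distance from the BMU.
    for r in range(rows):
        for c in range(cols):
            grid_dist_sq = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
            h = np.exp(-grid_dist_sq / (2 * sigma ** 2))
            weights[r, c] += alpha * h * (x - weights[r, c])
    return weights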

4. Weight Update Rule:

The weight update for neuron \(j\) and feature \(i\) is given by:

\[ w_{ij}^{(\text{new})} = w_{ij}^{(\text{old})} + \alpha \cdot (x_i - w_{ij}^{(\text{old})}) \]

where:
- \(w_{ij}\) represents the weight of neuron \(j\) for feature \(i\),
- \(\alpha\) is the learning rate,
- \(x_i\) is the input feature value.
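
For a concrete instance, take \(\alpha = 0.5\), a weight vector \(w_j = [0.8, 0.4, 0.7, 0.3]\), and an input \(x = [1, 1, 0, 0]\) (the second initial weight vector and the first training sample from the example below). Applying the rule component-wise:

\[ w_j^{(\text{new})} = [0.8 + 0.5(1 - 0.8),\ 0.4 + 0.5(1 - 0.4),\ 0.7 + 0.5(0 - 0.7),\ 0.3 + 0.5(0 - 0.3)] = [0.9,\ 0.7,\ 0.35,\ 0.15] \]

With \(\alpha = 0.5\), each weight moves exactly halfway toward the corresponding input component.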

5. Applications:

- Clustering: SOMs group similar data points together.
- Visualization: they help visualize high-dimensional data in a lower-dimensional space.
- Data Exploration: useful for exploratory data analysis.
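
In practice, libraries such as MiniSom take care of the grid, the neighborhood function, and the decay schedules. A minimal sketch, assuming the minisom package is installed and using randomly generated stand-in data:

from minisom import MiniSom
import numpy as np

# Stand-in data: 100 samples with 4 features each.
data = np.random.rand(100, 4)

# A 6x6 output grid for 4-dimensional inputs.
som = MiniSom(6, 6, 4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, 500)       # 500 randomly drawn training iterations

print(som.winner(data[0]))        # grid coordinates of the BMU for one sample
print(som.distance_map().shape)   # U-matrix (6, 6), often plotted for visualization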

Here’s a simple Python example demonstrating the training of a SOM on a small dataset (a simplified, two-neuron, winner-take-all variant with no neighborhood update):

import math

class SOM:
    def winner(self, weights, sample):
        # Squared Euclidean distance from the sample to each of the two
        # weight vectors; the closer neuron wins the competition.
        D0, D1 = 0, 0
        for i in range(len(sample)):
            D0 += math.pow(sample[i] - weights[0][i], 2)
            D1 += math.pow(sample[i] - weights[1][i], 2)
        return 0 if D0 < D1 else 1

    def update(self, weights, sample, J, alpha):
        # Pull the winning neuron's weights toward the sample:
        # w_ij(new) = w_ij(old) + alpha * (x_i - w_ij(old))
        for i in range(len(weights[0])):
            weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        return weights

def main():
    # Four binary training vectors with four features each.
    T = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
    m = len(T)

    # Initial weight vectors of the two output neurons (clusters).
    weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]

    ob = SOM()
    epochs = 3
    alpha = 0.5
    for _ in range(epochs):
        for j in range(m):
            sample = T[j]
            J = ob.winner(weights, sample)                   # find the winner
            weights = ob.update(weights, sample, J, alpha)   # move it toward the sample

    # Classify a test sample by its nearest weight vector.
    s = [0, 0, 0, 1]
    J = ob.winner(weights, s)
    print("Test Sample s belongs to Cluster:", J)

if __name__ == "__main__":
    main()

Feel free to explore SOMs further; they’re a powerful tool for understanding complex data patterns! 😊
