Lecture 2
24/02/2025
Learning
• The property that is of primary significance for a neural
network is the ability of the network to learn from its
environment and to improve its performance through
learning. The improvement in performance takes place over
time in accordance with some prescribed measure.
• A neural network learns about its environment through an
interactive process of adjustments applied to its synaptic
weights and bias levels.
(Reference: Alpaydın, E., Introduction to Machine Learning, 2010)
• Another example is a robot that is placed in a maze.
• The robot can move in one of the four compass
directions and should make a sequence of movements
to reach the exit.
• While the robot is inside the maze there is no feedback; it tries many
moves, and only when it reaches the exit does it receive a reward.
• Here there is no opponent, but we can have a preference for shorter
trajectories, so in effect we play against time.
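The maze is the classic delayed-reward setting. As a minimal sketch, tabular Q-learning is one standard way to solve such problems (not necessarily the method the lecture has in mind); the grid size, reward, and hyperparameters below are illustrative assumptions:

```python
import random

ROWS, COLS = 4, 4
EXIT = (3, 3)                                  # hypothetical exit cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # the four compass moves

alpha, gamma, epsilon = 0.5, 0.9, 0.2          # assumed hyperparameters
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(4)}

def step(state, a):
    """Move if the target cell is inside the grid; reward arrives only at the exit."""
    nr, nc = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    nxt = (nr, nc) if 0 <= nr < ROWS and 0 <= nc < COLS else state
    return nxt, (1.0 if nxt == EXIT else 0.0)

for episode in range(500):
    s = (0, 0)
    for _ in range(200):                       # cap episode length
        if s == EXIT:
            break
        # epsilon-greedy: explore occasionally, otherwise act greedily
        a = (random.randrange(4) if random.random() < epsilon
             else max(range(4), key=lambda x: Q[(s, x)]))
        s2, r = step(s, a)
        target = r + gamma * max(Q[(s2, x)] for x in range(4))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
```

Because future reward is discounted (gamma < 1), shorter trajectories earn higher values, which is exactly the "playing against time" preference described above.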
Similarly:
• K-means clustering
• K-nearest neighbours
• Kohonen's SOM
• Singular value decomposition
• Voronoi diagrams
For example, the final step of Kohonen's SOM training algorithm:
5. Continuation: continue with step 2 until no noticeable changes in the
feature map are observed.
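A compact sketch of the SOM training loop this continuation step belongs to; the data, map size, and decay schedules below are illustrative assumptions rather than the lecture's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 2))          # toy input patterns
grid = 10                            # 10x10 feature map
W = rng.random((grid, grid, 2))      # one weight vector per map unit

sigma0, eta0, T = 3.0, 0.5, 2000     # assumed decay parameters
for t in range(T):
    x = data[rng.integers(len(data))]             # step 2: sample an input
    d = np.linalg.norm(W - x, axis=2)             # step 3: find the winning unit
    wi, wj = np.unravel_index(d.argmin(), d.shape)
    sigma = sigma0 * np.exp(-t / T)               # shrinking neighbourhood width
    eta = eta0 * np.exp(-t / T)                   # decaying learning rate
    ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * sigma ** 2))
    W += eta * h[..., None] * (x - W)             # step 4: update winner + neighbours
    # step 5 (continuation): loop back to step 2 until the map stabilizes
```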
VORONOI DIAGRAMS
• A Voronoi diagram is a partition of a plane into regions close to each of a
given set of objects. In the simplest case, these objects are just finitely
many points in the plane (called seeds, sites, or generators). For each seed
there is a corresponding region consisting of all points of the plane closer
to that seed than to any other. These regions are called Voronoi cells.
• The Voronoi diagram is named after Georgy Voronoy, and is also called
a Voronoi tessellation, a Voronoi decomposition, a Voronoi partition, or
a Dirichlet tessellation (after Peter Gustav Lejeune Dirichlet). Voronoi cells
are also known as Thiessen polygons. Voronoi diagrams have practical and
theoretical applications in many fields, mainly in science and technology,
but also in visual art.
(Reference: Wikipedia)
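A brute-force sketch of the definition above: label each point of a sampled plane with its nearest seed (the Euclidean metric, the number of seeds, and the grid resolution are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
seeds = rng.random((5, 2))                       # 5 generator points in the unit square
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Each plane point belongs to the cell of its closest seed.
dists = np.linalg.norm(pts[:, None, :] - seeds[None, :, :], axis=2)
cells = dists.argmin(axis=1).reshape(xs.shape)   # Voronoi cell index per pixel
```

For the exact diagram (cell boundaries rather than a raster labelling), SciPy's scipy.spatial.Voronoi computes the tessellation directly.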
Euclidean distance: $d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$
Manhattan distance: $d(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n} |x_i - y_i|$
(Reference: Wikipedia)
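A quick check of the two metrics on a small pair of vectors (the values are illustrative, not from the slides):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

euclidean = np.sqrt(np.sum((x - y) ** 2))   # sqrt(9 + 4 + 0) ~= 3.606
manhattan = np.sum(np.abs(x - y))           # 3 + 2 + 0 = 5
```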
Learning Algorithms
$n$: discrete time
$d(n)$: desired response / target output
$y(n)$: actual output signal
$e(n) = d(n) - y(n)$: error signal
$\eta$: learning rate
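In this notation, the standard error-correction (delta) rule reads as follows; the symbols follow Haykin's convention and are an assumption about the exact formula intended here:

$$e(n) = d(n) - y(n), \qquad \Delta w_j(n) = \eta\, e(n)\, x_j(n), \qquad w_j(n+1) = w_j(n) + \Delta w_j(n)$$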
Cases 2-4: the remaining combinations of the desired response $d(n)$ and the
actual output $y(n)$, each fixing the sign of the error $e(n)$ and hence the
direction of the weight update.
Hebbian Learning
Hebb was a neuropsychologist.
Hebb's Postulate of Learning, from The Organization of Behavior (1949):
"When an axon of cell A is near enough to excite a cell B and repeatedly
or persistently takes part in firing it, some growth process or metabolic
change takes place in one or both cells such that A's efficiency, as one of
the cells firing B, is increased."
(There is physiological evidence of Hebbian learning in the hippocampus
region of the brain.)
Learning Rule:
Hebb's Hypothesis: $\Delta w_{kj}(n) = \eta\, y_k(n)\, x_j(n)$
This is also called the activity-product rule: the change in the weight
is proportional to the product of the input (presynaptic signal) and the
output (postsynaptic signal).
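A minimal sketch of the activity-product rule for a single linear neuron; the data, learning rate, and initialization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # presynaptic signals x_j(n)
w = rng.normal(size=3) * 0.1         # small random start (w = 0 would never move)
eta = 0.01

for x in X:
    y = w @ x                        # postsynaptic signal y(n)
    w += eta * y * x                 # Hebb: dw_j = eta * y * x_j
# note: pure Hebbian growth is unbounded; the weights only ever increase in norm
```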
Covariance Hypothesis (Sejnowski, 1977):
$\Delta w_{kj} = \eta\,(x_j - \bar{x})(y_k - \bar{y})$, where $\bar{x}$ and
$\bar{y}$ are the time-averaged presynaptic and postsynaptic signals.
Convergence to a nontrivial state, which is reached when $x_j = \bar{x}$
or $y_k = \bar{y}$.
Synaptic weight is enhanced if there are sufficient levels of presynaptic
and postsynaptic activities, that is, the conditions $x_j > \bar{x}$ and
$y_k > \bar{y}$ are both satisfied.
Synaptic weight is depressed if there is either $x_j > \bar{x}$ and
$y_k < \bar{y}$, or $x_j < \bar{x}$ and $y_k > \bar{y}$.
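The same sketch adapted to the covariance rule; using running averages to estimate $\bar{x}$ and $\bar{y}$ is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w = rng.normal(size=3) * 0.1
eta, x_bar, y_bar = 0.01, np.zeros(3), 0.0

for x in X:
    y = w @ x
    x_bar = 0.99 * x_bar + 0.01 * x          # running average of presynaptic signal
    y_bar = 0.99 * y_bar + 0.01 * y          # running average of postsynaptic signal
    w += eta * (x - x_bar) * (y - y_bar)     # covariance update: can be negative
```

Unlike the pure Hebbian rule, the update is negative (synaptic depression) whenever the two activities sit on opposite sides of their averages.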
Competitive Learning
The output neurons of a neural network compete among
themselves to become active (fired).
Only a single output neuron is active at any one time.
It is well suited to discovering statistically salient features that can be
used to classify a set of input patterns.
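A minimal winner-take-all sketch; the unit-norm weights, toy data, and learning rate are assumptions. Only the winning neuron's weights move:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # normalize the input patterns

K, eta = 4, 0.05                                # 4 competing output neurons
W = rng.normal(size=(K, 2))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # normalize the weight vectors

for x in X:
    k = np.argmax(W @ x)                 # competition: a single neuron fires
    W[k] += eta * (x - W[k])             # move only the winner toward the input
    W[k] /= np.linalg.norm(W[k])         # keep the winner's weights unit-norm
```

Each weight vector drifts toward the centre of the cluster of inputs it keeps winning, so the neurons end up coding distinct statistical features of the data.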