ANN Unit 1

ARTIFICIAL NEURAL NETWORKS

Chaithra K N
Assistant Professor
ECE Department
NMIT
COURSE OUTCOMES
1. Explain the basic models of ANN.
2. Demonstrate the basic learning schemes for
supervised and unsupervised learning.
3. Illustrate a few algorithms in single-/multi-layer
feed-forward ANNs.
4. Implement the basics of CNN using Python.
5. Implement object detection and recognition in
images and video using CNN and YOLO.
Introduction
&
Learning
Processes
SYLLABUS
UNIT 1
 Introduction: What is a neural network? Human brain;
models of a neuron; neural networks as directed graphs;
feedback; network architectures; knowledge representation.
 Learning Processes: Error-correction learning; memory-based
learning; Hebbian learning; competitive learning;
Boltzmann learning; credit-assignment problem; learning with
and without a teacher.
 Textbook #1: 1.1-1.7, 2.1-2.9


Introduction
Neural Network
 Neural Networks are computational models that mimic the
complex functions of the human brain. The neural
networks consist of interconnected nodes or neurons that
process and learn from data, enabling tasks such as pattern
recognition and decision making in machine learning.
Neural Networks in the Brain
 The human brain processes information fundamentally
differently from conventional digital computers.

Key Features:
 Complexity and Parallelism: The brain is highly
complex, nonlinear, and operates in a parallel manner.
 Speed: Neurons are organized to perform tasks much
faster than computers, with visual recognition tasks
typically taking 100–200 milliseconds.
 Plasticity and Learning: The brain's wiring is shaped by
experience through plasticity, making learning a central
aspect of neural networks.
Neural Networks as an Adaptive Machine
A neural network is akin to an adaptive machine, designed with
a structure like the brain.
1. Acquisition of Knowledge: Just as the brain learns from its
environment, neural networks acquire knowledge through a
learning process. This can involve various techniques like
supervised learning, unsupervised learning, or reinforcement
learning.
2. Storage of Knowledge: The acquired knowledge is stored in
the form of interneuron connection strengths, which are termed
synaptic weights in neural networks. These weights represent
the learned relationships between different inputs and outputs.
3. Learning Algorithm: Neural networks employ learning
algorithms to adjust the synaptic weights or even the network's
topology itself. This process allows them to adapt to new
information and improve their performance over time.
Neural Network
 Properties and Capabilities of Neural Network:
Nonlinearity: Utilizes nonlinear components and
distributed nonlinearity for complex processing.
Input-Output Mapping: Achieved through supervised
learning and nonparametric statistical inference, allowing
for model-free estimation.
Adaptivity: Can retain or adapt knowledge, handling
nonstationary environments while managing the
stability-plasticity dilemma.
Evidential Response: Provides decisions along with
confidence levels, offering a more comprehensive output.
Contextual Information: Processes contextual
information naturally, with every neuron potentially
influencing others.
Fault Tolerance: Exhibits graceful degradation in
performance under adverse conditions.
Neural Network
 Properties and Capabilities of Neural Network:
VLSI Implementability: Suitable for implementation using
simple components in VLSI technology.
Uniformity of Analysis and Design: Shares common
components (neurons), theories, and learning algorithms,
facilitating seamless integration based on modularity.
Neurobiological Analogy: Inspired by neurobiology, neural
networks are used in both research and practical applications,
with neurobiology also leveraging neural networks for
insights and tools.
Human Brain
 The key points regarding neural networks and their comparison
to biological systems:
1. Neural Network Architecture: The flow of information in a
neural network mirrors the biological process from stimulus to
response, involving receptors, a neural network, and effectors.
2. Operational Speed: Neurons operate relatively slowly, at about
10^-3 seconds per operation, in contrast with the rapid operations
of modern CPUs at about 10^-9 seconds per operation.
3. Scale of Neurons and Connections: The human brain
contains approximately 10^10 (or possibly 10^11) neurons and
an estimated 6 x 10^13 connections, highlighting the immense
complexity of neural networks.
4. Energy Efficiency: Despite their complexity, biological brains
are highly energy-efficient, requiring only 10^-16 joules per
operation compared to the significantly higher energy
consumption of modern computers at 10^-6 joules.
The Neuron and the Synapse
The neuron and the synapse are fundamental components
of neural networks:
1.Synapse: The meeting point between two neurons.
2.Presynaptic Neuron: The neuron that sends signals
across the synapse.
3.Postsynaptic Neuron: The neuron that receives signals
from the presynaptic neuron.
4.Neurotransmitters: Molecules that traverse the synapse
and have either a positive, negative, or modulatory effect
on the activation of the postsynaptic neuron.
5.Dendrite: A branch of the neuron that receives input
signals.
6.Axon: A branch of the neuron that sends out output
signals, typically in the form of spikes or action potentials
that trigger the release of neurotransmitters at axon
terminals.
Structural Organization of the Brain
Molecules and Synapses: Transmit signals in neural
microcircuits.
Neural Microcircuits: Form dendritic trees and
neurons.
Dendritic Trees and Neurons: Basic units forming
local circuits.
Local Circuits: Handle specific functions in localized
brain regions.
Interregional Circuits: Include pathways, columns,
and topographic maps.
Pathways, Columns, and Maps: Connect and
organize brain regions based on function and location.
Central Nervous System Functioning: Hierarchical
integration of these components allows for complex
information processing, behavior coordination, and
cognitive functions.
Models of a Neuron
 Neuron is an information processing unit – fundamental to
the operation of a neural network.
 The block diagram shows the model of a neuron, which forms
the basis for designing artificial neural networks.

 Activation function – limits the amplitude of the output of a
neuron. Also referred to as a squashing function, because it
squashes (limits) the permissible amplitude range of the output
signal to some finite value.
Models of a Neuron
 The normalized amplitude range of the output of a neuron is
given by [0, 1] or [-1,1].
 The bias bk has the effect of increasing or lowering the net
input of the activation function depending on its value.
 Mathematically, the linear combiner output is
uk = Σj wkj xj (summing over the inputs j = 1, …, m),
 and the neuron output is
yk = φ(uk + bk).
 The use of bk has the effect of applying an affine
transformation to the output uk.
 Let vk = uk + bk, called the induced local field.
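The neuron model above (linear combiner plus bias, followed by an activation function) can be sketched in Python. This is a minimal illustration; the function names are ours, not from the text:

```python
import math

def neuron_output(x, w, b, phi=lambda v: 1.0 / (1.0 + math.exp(-v))):
    """Neuron model: uk = sum_j wkj*xj, vk = uk + bk, yk = phi(vk)."""
    u = sum(wj * xj for wj, xj in zip(w, x))  # linear combiner output uk
    v = u + b                                 # induced local field vk
    return phi(v)

def neuron_output_bias_as_input(x, w, b, phi=lambda v: 1.0 / (1.0 + math.exp(-v))):
    """Equivalent formulation: bias as a weight wk0 = bk on a fixed input x0 = +1."""
    return neuron_output([1.0] + list(x), [b] + list(w), 0.0, phi)
```

Both formulations give the same output, which is exactly the point of folding the bias into an extra synaptic weight on a fixed +1 input.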
Models of a Neuron
 The relationship between the induced local field (or activation
potential) vk and the linear combiner output uk is modified by the
bias.
 As a result of the affine transformation, the graph of vk versus uk
no longer passes through the origin.
 bk is an external parameter of
Artificial neuron.

 The effect of bias is accounted for


i. Adding a new input signal fixed at +1
ii. Adding a new synaptic weight
equal to the bias bk
Models of a Neuron
 Types of activation function:
 The activation function φ(v) defines the output of a neuron
in terms of the induced local field v.
1. Threshold Function: Also known as the Heaviside function:
φ(v) = 1 if v ≥ 0, and φ(v) = 0 if v < 0.
 The output of neuron k after employing the threshold function is
yk = 1 if vk ≥ 0, and yk = 0 if vk < 0.
Models of a Neuron
2. Sigmoid Function: Graph is “S”-shaped, the most
common form of activation function used in the construction
of neural networks.
 It is defined as a strictly increasing function that exhibits a
balance between linear and nonlinear behavior.
 Ex: the logistic function, given by
φ(v) = 1 / (1 + exp(−av)),
where a is the slope parameter of the sigmoid function.


 By varying the parameter a, sigmoid functions of
different slopes are obtained.
 The slope at the origin equals a/4.
 As the slope parameter approaches infinity, the sigmoid
function becomes simply a threshold function.
 The sigmoid function is differentiable, whereas the
threshold function is not.
Models of a Neuron
3. Signum function: The activation function ranges from -1
to 1; it is an odd function of the induced local field:
φ(v) = +1 if v > 0; 0 if v = 0; −1 if v < 0.

4. Hyperbolic tangent function:
φ(v) = tanh(v), a smooth odd function with range (−1, 1).
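The four activation functions above can be sketched as a minimal illustration:

```python
import math

def threshold(v):
    """Heaviside/threshold function: 1 if v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def logistic(v, a=1.0):
    """Logistic sigmoid with slope parameter a; the slope at the origin is a/4."""
    return 1.0 / (1.0 + math.exp(-a * v))

def signum(v):
    """Odd function of v with values in {-1, 0, +1}."""
    return (v > 0) - (v < 0)

def tanh_act(v):
    """Hyperbolic tangent: smooth, odd, with range (-1, 1)."""
    return math.tanh(v)
```

For a large slope parameter the logistic function approaches the threshold function, e.g. logistic(0.01, a=1000) is already very close to 1.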
Models of a Neuron
Stochastic Model of a Neuron:
 The neural model described so far is deterministic, i.e., its
input–output behavior is precisely defined for all inputs.
 For some applications of neural networks, it is desirable to
base the analysis on a stochastic neural model.
 The activation function is given a probabilistic
interpretation. A neuron is permitted to reside in only one
of two states: +1 or -1. The decision for a neuron to fire
(i.e., switch its state from “off” to “on”) is probabilistic.
 Let x denote the state of the neuron and P(v) denote the
probability of firing, where v is the induced local field of
the neuron.

 A standard choice for P(v) is the sigmoid-shaped function
P(v) = 1 / (1 + exp(−v/T)),
where T is a pseudotemperature used to control the noise level,
and hence the uncertainty in firing.
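The stochastic neuron can be sketched as follows, with P(v) = 1/(1 + exp(−v/T)) and T acting as a pseudotemperature (names are illustrative):

```python
import math
import random

def fire_probability(v, T=1.0):
    """Sigmoid-shaped probability of firing, P(v) = 1 / (1 + exp(-v / T))."""
    return 1.0 / (1.0 + math.exp(-v / T))

def stochastic_state(v, T=1.0, rng=random):
    """State x of the neuron: +1 with probability P(v), -1 with probability 1 - P(v)."""
    return 1 if rng.random() < fire_probability(v, T) else -1
```

As T → 0 the firing decision becomes effectively deterministic and the model reduces to the deterministic neuron.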
Neural Networks as Directed graphs
 The block diagram provides a functional description of the
various elements that constitute the model of an artificial
neuron.
 Directed graphs simplify the model by using the idea of
signal-flow graphs without sacrificing any of the functional
details of the model.
 A signal-flow graph is a network of directed links
(branches) that are interconnected at certain points called
nodes.
 A typical node j has an associated node signal xj . A typical
directed link originates at node j and terminates on node k;
it has an associated transfer function, or transmittance, that
specifies the manner in which the signal yk at node k
depends on the signal xj at node j.
Neural Networks as Directed graphs
 Nodes and Links: Nodes represent various components
such as neurons or layers, while links represent connections
between them. Synaptic links convey information between
neurons, while activation links represent the activation of
neurons.
 Incoming Edges: These edges represent the summation of
input signals from connected nodes or neurons.
 Outgoing Edges: Outgoing edges typically represent the
replication or transmission of processed signals to other
nodes or layers in the network.
 Simplify the neural network's structure by abstracting away
the internal neuronal functions. This higher-level
representation focuses on the overall connectivity and
flow of information between components rather than
the specific internal operations within each neuron.
Neural Networks as Directed graphs
 The flow of signals in the various parts of the graph is
dictated by three basic rules:
Rule 1. A signal flows along a link only in the direction
defined by the arrow on the link.
Types of links:
• Synaptic links, whose behavior is governed by a linear
input–output relation. Specifically, the node signal xj is
multiplied by the synaptic weight wkj to produce the node
signal yk = wkj xj.
• Activation links, whose behavior is governed in general by a
nonlinear input–output relation yk = φ(xj), where φ(·) is the
nonlinear activation function.
Neural Networks as Directed graphs
Rule 2. A node signal equals the algebraic sum of all signals
entering the pertinent node via the incoming links.
 This is illustrated for the case of synaptic convergence, or
fan-in.

Rule 3. The signal at a node is transmitted to each outgoing


link originating from that node, with the transmission being
entirely independent of the transfer functions of the outgoing
links.
 This is illustrated for the case of synaptic divergence, or
fan-out.
Neural Networks as Directed graphs
 Following is the mathematical definition of a neural
network:
 A neural network is a directed graph consisting of nodes
with interconnecting synaptic and activation links and is
characterized by four properties:
1. Each neuron is represented by a set of linear synaptic
links, an externally applied bias, and a possibly nonlinear
activation link. The bias is represented by a synaptic link
connected to an input fixed at +1.
2. The synaptic links of a neuron weight their respective
input signals.
3. The weighted sum of the input signals defines the induced
local field of the neuron in question.
4. The activation link squashes the induced local field of the
neuron to produce an output.
Neural Networks as Directed graphs
 A complete directed graph describes not only the signal flow
from neuron to neuron, but also the signal flow inside each
neuron.
 Reduced form: Restricted to signal flow from neuron to
neuron, by omitting the details of signal flow inside the
individual neurons - partially complete.
 It is characterized as follows:
1. Source nodes supply input signals to the graph.
2. Each neuron is represented by a single node called a
computation node.
3. The communication links interconnecting the source and
computation nodes of the graph carry no weight; they merely
provide directions of signal flow in the graph.
 A partially complete directed graph - Architectural graph,
describing the layout of the neural network.
Neural Networks as Directed graphs
 The computation node representing the neuron is shaded
and the source node is shown as a small square.

 There are three graphical


representations of a neural network:
• Block diagram, providing a functional description of the
network
• Architectural graph, describing the network layout;
• Signal-flow graph, providing a complete description of
signal flow in the network.
Feedback
 Feedback exists in a dynamic system whenever the output of an
element in the system influences in part the input applied to
that particular element, thereby giving rise to one or more
closed paths for the transmission of signals around the system.
 Figure shows the signal-flow graph of a single loop feedback
system, where the input signal xj(n), internal signal xj’(n), and
output signal yk(n) are functions of the discrete-time variable
n.

 The system is assumed to be linear, consisting of a forward


path and a feedback path that are characterized by the
“operators” A and B, respectively.
 The output of the forward channel determines in part its own
output through the feedback channel.
Feedback
 The input–output relationship is given by
yk(n) = A[xj'(n)], with xj'(n) = xj(n) + B[yk(n)],
which yields yk(n) = (A / (1 − AB)) [xj(n)].
 A/(1 − AB) is the closed-loop operator of the system, and
AB is the open-loop operator. In general, the open-loop
operator is noncommutative (BA ≠ AB).
 Consider the single-loop feedback system where A is a
fixed weight w and B is a unit-delay operator z^-1, whose
output is delayed with respect to the input by one time unit.
 The closed-loop operator of the system is then given by
w / (1 − w z^-1) = w Σl w^l z^-l (summing over l ≥ 0).
Feedback
 The dynamic behavior of a feedback system represented by
the signal-flow graph is controlled by the weight w.
 There are two specific cases:
 1. |w| < 1, for which the output signal yk(n) is
exponentially convergent; i.e., the system is stable.
 2. |w| ≥ 1, for which the output signal yk(n) is divergent;
i.e., the system is unstable.
 If |w| = 1 the divergence is linear, and if |w| > 1 the
divergence is exponential.
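The two cases can be checked numerically. With A = w and B = z^-1, the loop obeys the recurrence yk(n) = w(xj(n) + yk(n−1)); a minimal simulation (illustrative, constant input assumed):

```python
def simulate_feedback(w, x, n_steps):
    """Single-loop feedback system with forward weight w and unit delay z^-1.
    Recurrence: y(n) = w * (x + y(n-1)), with y(-1) = 0 and constant input x."""
    y, out = 0.0, []
    for _ in range(n_steps):
        y = w * (x + y)
        out.append(y)
    return out
```

For w = 0.5 the output converges exponentially (here to x·w/(1−w) = 1); for w = 1 it grows linearly (1, 2, 3, …); for w = 1.5 it diverges exponentially.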
NETWORK ARCHITECTURES
 There are three fundamentally different classes of network
architectures:
1. Single-Layer Feedforward Networks
 In a layered neural network, the neurons are
organized in the form of layers.
 In the simplest form of a layered network,
input layer of source nodes projects directly
onto an output layer of neurons (computation nodes).
 This network is strictly of a feedforward type.
 Consider the case of four nodes in both the input and output
layers.
 “Single-layer” refers to the output layer of computation
nodes (neurons).
 The input layer of source nodes is not counted, because no
computation is performed there.
NETWORK ARCHITECTURES
(ii) Multilayer Feedforward Networks
 The presence of one or more hidden layers, whose
computation nodes are correspondingly called hidden neurons
or hidden units; the term “hidden” refers to the fact that this
part of the neural network is not seen directly from either the
input or output of the network.
 The function of hidden neurons is to intervene between the
external input and the network output.
 By adding one or more hidden layers,
the network is enabled to extract
higher-order statistics from its input.
 The network is referred to as a 10–4–2
network because it has 10 source nodes,
4 hidden neurons, &
2 output neurons.
NETWORK ARCHITECTURES
 The source nodes in the input layer of the network supply
respective elements of the activation pattern (input vector),
which constitute the input signals applied to the neurons
(computation nodes) in the second layer (i.e., the first
hidden layer).
 The output signals of the second layer are used as inputs to
the third layer, and so on.
 The neurons in each layer of the network have as their
inputs the output signals of the preceding layer only.
 The set of output signals of the neurons in the output (final)
layer of the network constitutes the overall response of the
network to the activation pattern supplied by the source
nodes in the input (first) layer.
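The layer-by-layer signal flow described above can be sketched for the 10–4–2 example. The tanh activations and random weights are our illustrative choices, not from the text:

```python
import math
import random

def layer(x, W, b):
    """One layer of neurons: each neuron computes a weighted sum of the
    previous layer's outputs plus a bias, passed through tanh."""
    return [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
            for wi, bi in zip(W, b)]

def forward(x, weights, biases):
    """Feedforward pass: each layer sees only the preceding layer's outputs."""
    for W, b in zip(weights, biases):
        x = layer(x, W, b)
    return x

# A 10-4-2 network: 10 source nodes, 4 hidden neurons, 2 output neurons.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(4)]
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b1, b2 = [0.0] * 4, [0.0] * 2
y = forward([0.1] * 10, [W1, W2], [b1, b2])  # overall response: 2 output signals
```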
NETWORK ARCHITECTURES
 In general, a feedforward network with m source nodes, h1
neurons in the first hidden layer, h2 neurons in the second
hidden layer, and q neurons in the output layer is referred to
as an m–h1–h2–q network.
 Fully connected means that every node in each layer of the
network is connected to every other node in the adjacent
forward layer.
 If some of the communication links (synaptic connections)
are missing from the network, the network is partially
connected.
NETWORK ARCHITECTURES
(iii) Recurrent Networks
 It has at least one feedback loop.
 Ex: Consider a recurrent network consists of a single layer
of neurons with each neuron feeding its output signal back
to the inputs of all the other neurons.
 There are no self-feedback loops
in the network; self-feedback refers
to a situation where the output of a
neuron is fed back into its own input.
NETWORK ARCHITECTURES
 Another class of recurrent networks has hidden neurons.
 The feedback connections originate from the hidden
neurons as well as from the output neurons.
 The presence of feedback loops has a profound impact on the
learning capability of the network and on its performance.
 Moreover, the feedback loops
involve the use of particular
branches composed of unit-time
delay elements (denoted by z-1 ),
which result in a nonlinear dynamic
behavior, assuming that the neural
network contains nonlinear units.
KNOWLEDGE REPRESENTATION
 In defining a neural network, “Knowledge" is crucial, and
it is described as stored information or models used by
individuals or machines to interpret, predict, and
appropriately respond to the outside world.

Key Characteristics of Knowledge Representation


 Explicit Information: What information is explicitly
stated.
 Physical Encoding: How information is physically
encoded for later use.
 Knowledge representation is inherently goal-directed. In
real-world applications of intelligent machines, effective
solutions depend on good knowledge representation.
Neural networks face a particular design challenge, because the
possible forms of representation, from the inputs through the
internal network parameters, are highly diverse.
KNOWLEDGE REPRESENTATION
Learning and Maintaining a Model
 A neural network's primary task is to learn and maintain a
model of the world (environment) in which it operates,
ensuring the model aligns with reality to achieve the
specified goals of the application. Knowledge of the world
consists of two types of information:
 Observations, which are inherently noisy due to sensor
imperfections, provide the information pool for training
neural networks. Examples derived from these observations
can be labeled or unlabeled:
 Labeled Examples: Each input signal is paired with a
desired response (target output). They are expensive to
collect as they require a "teacher."
 Unlabeled Examples: Consist only of input signals and are
usually abundant.
KNOWLEDGE REPRESENTATION
Training and Testing
 A set of input-output pairs, called training data, is used to
train the neural network. For instance, in a handwritten-
digit recognition problem:
 Training: The network learns from a variety of labeled
examples (images of digits).
 Testing: The network's performance is assessed with new
data by comparing its output to the actual digit identity.

Design Process
 Designing a neural network involves:
 Selecting an Appropriate Architecture: For example, an
input layer equal to the number of pixels in an image and
an output layer with neurons for each digit.
KNOWLEDGE REPRESENTATION
 Training the Network: Using a subset of examples and a
suitable algorithm.
 Testing the Network: With unseen data to assess
generalization, or how well the network performs on new
examples.

Comparison with Classical Pattern Classifiers


 Unlike classical information-processing systems that rely
on pre-formulated mathematical models, neural networks
learn directly from real-life data, allowing the data to
define the model and perform the information-processing
function.
KNOWLEDGE REPRESENTATION
Positive and Negative Examples
 Training data can include both positive and negative
examples to improve performance. For instance, in sonar
detection, positive examples may include the target (e.g., a
submarine), and negative examples may include non-
targets (e.g., marine life) to reduce false alarms.
 Knowledge Representation
 In a neural network, knowledge of the environment is
represented by the values of free parameters (synaptic
weights and biases). This representation is key to the
network's performance and defines its very design.
KNOWLEDGE REPRESENTATION
Building Prior Information into Neural Network Design
Incorporating prior information into the design of a neural
network involves developing specialized structures.
Commonly used techniques are:
1. Restricting Network Architecture: This is achieved using
local connections known as receptive fields. A receptive field
is a region of the input field that influences the output of a
neuron.
2. Constraining Synaptic Weights: Implemented through
weight-sharing, where the same set of synaptic weights is used
for multiple neurons.
 Benefits of These Techniques
 Reduced Free Parameters: Both techniques help in
significantly reducing the number of free parameters in the
network, making the design more efficient.
KNOWLEDGE REPRESENTATION
 Ex: Partially Connected Feedforward
Network
 Receptive Fields: In a network, the
top six source nodes could form the
receptive field for hidden neuron 1.
This mapping helps in understanding
the neuron's behavior.
 Weight-Sharing: By using the same
set of weights for neurons in the
hidden layer, the induced local field
of hidden neuron j can be expressed
as:
vj = Σi wi x(i+j−1), summing over i = 1, …, m.
 This equation represents a
convolution sum, leading to the term
Convolutional Network.
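The weight-sharing convolution sum can be sketched as a minimal 1-D illustration (the function name is ours):

```python
def shared_weight_fields(x, w):
    """Weight sharing: every hidden neuron j applies the SAME weight set w to a
    sliding receptive field of the input, giving the convolution sum
    v_j = sum_i w_i * x_{i+j-1}."""
    m = len(w)
    return [sum(w[i] * x[i + j] for i in range(m))
            for j in range(len(x) - m + 1)]
```

With w = [1, 0] the output simply copies the first element of each receptive field, which makes the sliding-window behavior easy to see.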
KNOWLEDGE REPRESENTATION
To Build Invariances into Neural Network Design
 To develop systems capable of recognizing objects, radar
targets, or speech regardless of transformations like
rotation, frequency shift, or variations in speech, neural
networks must achieve invariance to these
transformations. Three techniques can be used:
1. Invariance by Structure:
 Method: Design the network so that transformed
versions of the same input produce the same output.
 Example: Enforcing synaptic weights to be the same
for pixels equidistant from the center of an image to
achieve rotational invariance.
 Drawback: Can result in a prohibitively large number
of synaptic connections.
KNOWLEDGE REPRESENTATION
2. Invariance by Training:
 Method: Train the network with various transformed
examples of the same object.
 Advantage: Utilizes the network’s natural pattern
classification ability.
 Drawbacks: May not generalize to new objects and can
be computationally demanding.
3. Invariant Feature Space:
 Method: Extract features that are invariant to
transformations and use them for classification.
 Advantages:
Reduces the number of features to realistic levels.
Relaxes network design requirements.
Ensures invariance for all objects with respect to
known transformations.
Learning
Processes
Learning Processes
Introduction
 A neural network learns from its environment and improves
its performance through learning.
 Learning proceeds through iterative adjustment of synaptic
weights.
 Learning: Definition by Mendel and McClaren: Learning
is a process by which the free parameters of a neural
network are adapted through a process of stimulation by
the environment in which the network is embedded. The
type of learning is determined by the manner in which the
parameter changes take place.
Learning Processes
 A prescribed set of well-defined rules for the solution of
the learning problem is called a learning algorithm.
 Five basic learning rules includes:
1. Error correction learning
2. Hebbian learning
3. Memory-based learning
4. Competitive learning
5. Boltzmann learning
 Learning paradigms:
1. Credit - assignment problem
2. Learning with and without a teacher : Supervised
learning, Reinforcement Learning & Unsupervised
learning
Error-Correction Learning
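In error-correction learning the weight adjustment is proportional to the error signal e(n) = d(n) − y(n) between the desired and actual responses (the delta rule, Δwj = η·e·xj). A minimal sketch assuming a linear neuron; names are illustrative:

```python
def error_correction_step(w, x, d, eta=0.1):
    """One step of the delta rule for a linear neuron:
    y = w . x;  e = d - y;  w_j <- w_j + eta * e * x_j."""
    y = sum(wj * xj for wj, xj in zip(w, x))
    e = d - y
    return [wj + eta * e * xj for wj, xj in zip(w, x)], e
```

Repeated application drives the error toward zero for a fixed input-target pair.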
Hebbian Learning
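Hebb's postulate strengthens a synapse when presynaptic and postsynaptic activity coincide; in its simplest activity-product form the update is Δwj = η·y·xj. A minimal sketch (illustrative):

```python
def hebbian_step(w, x, eta=0.01, phi=lambda v: v):
    """One step of Hebb's activity-product rule: Δw_j = eta * y * x_j,
    where y = phi(w . x) is the postsynaptic response."""
    y = phi(sum(wj * xj for wj, xj in zip(w, x)))
    return [wj + eta * y * xj for wj, xj in zip(w, x)]
```

Note that with correlated pre- and postsynaptic activity the weights grow without bound, which is why practical variants add normalization or a forgetting term.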
Memory-Based Learning
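In memory-based learning, all (or most) past experiences are stored explicitly, and a new input is classified by its nearest stored neighbor(s). A minimal nearest-neighbor sketch (names illustrative):

```python
def nearest_neighbor_classify(x, training_set):
    """Memory-based classification: return the label of the stored example
    whose input vector is closest to x in squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(training_set, key=lambda ex: dist2(x, ex[0]))
    return label
```

k-nearest-neighbor variants vote over the k closest stored examples instead of using a single neighbor.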
Competitive Learning
 Output neurons compete with each other for a chance to become
active.
 Well suited to discovering statistically salient features, which may
be used for classification.
 Three basic elements:
i. A set of neurons of the same type with different weight sets, so
that they respond differently to a given set of inputs.
ii. A limit imposed on the strength of each neuron.
iii. A competition mechanism to choose one winner: the
winner-takes-all neuron.
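A common realization of the competition mechanism is the winner-takes-all update, where only the winning neuron's weight vector moves toward the input, Δw = η(x − w). A minimal sketch (illustrative):

```python
def competitive_step(weights, x, eta=0.1):
    """Winner-takes-all competitive learning: the neuron whose weight vector is
    closest to x wins, and only its weights move toward the input."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    k = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    weights[k] = [wi + eta * (xi - wi) for wi, xi in zip(weights[k], x)]
    return k  # index of the winning neuron
```

Repeated over many inputs, each neuron's weight vector drifts toward the center of one cluster of the input data.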
Boltzmann Learning
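In Boltzmann learning the weight change is proportional to the difference between the pair-wise neuron correlations measured with the visible neurons clamped (ρ+) and free-running (ρ−): Δwkj = η(ρ+kj − ρ−kj). A sketch of just the weight update, assuming the correlations have already been estimated (e.g., by Gibbs sampling, which is not shown here):

```python
def boltzmann_update(w, rho_clamped, rho_free, eta=0.01):
    """Boltzmann learning rule: w_kj <- w_kj + eta * (rho+_kj - rho-_kj),
    where rho+ / rho- are correlations <x_k x_j> in the clamped / free phases."""
    return [[w[k][j] + eta * (rho_clamped[k][j] - rho_free[k][j])
             for j in range(len(w[k]))]
            for k in range(len(w))]
```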
Learning with a Teacher
 Also referred to as Supervised Learning.
 Consider the teacher as having knowledge of the environment,
with that knowledge being represented by a set of input–output
examples.
 The environment is unknown to the neural network.
 Suppose the teacher and the neural network are both exposed to
a training vector drawn from the same environment.
 By virtue of built-in
knowledge, the teacher is able
to provide the neural network
with a desired response for
that training vector.
 The network parameters are
adjusted under the combined
influence of the training vector
and the error signal.
Learning with a Teacher
 Adjustment is carried out iteratively in a step-by-step fashion
with the aim of eventually making the neural network emulate
the teacher; the emulation is presumed to be optimum in some
statistical sense.
 The knowledge of the environment available to the teacher is
transferred to the neural network through training and stored in
the form of “fixed” synaptic weights, representing long-term
memory.
 When this condition is reached, we may dispense with the teacher
and let the neural network deal with the environment completely
by itself.
 Supervised learning as described here is the basis of
error-correction learning.
 Supervised-learning process constitutes a closed-loop feedback
system, but the unknown environment is outside the loop.
Learning with a Teacher
 As a performance measure for the system, consider mean square
error, or the sum of squared errors over the training sample,
defined as a function of synaptic weights of the system.
 The true error surface is averaged over all possible input–output
examples.
 Any given operation of the system under the teacher’s
supervision is represented as a point on the error surface.
 The operating point has to move down successively toward a
minimum point of the error surface; the minimum point may be
a local minimum or a global minimum.
 The gradient of the error surface at any point is a vector that
points in the direction of steepest descent.
 Given an algorithm designed to minimize the cost function, an
adequate set of input–output examples, and enough time to do
the training, a supervised learning system is able to approximate
an unknown input–output mapping reasonably well.
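The steepest-descent picture above can be made concrete with a small example: fitting a linear model y = w·x + b by moving the operating point down the gradient of the sum of squared errors. The data and parameter values are hypothetical:

```python
def gradient_descent_mse(samples, eta=0.1, epochs=200):
    """Steepest descent on the sum of squared errors E = sum (d - (w*x + b))^2.
    Each step moves the operating point in the direction of -gradient."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = sum(-2.0 * (d - (w * x + b)) * x for x, d in samples)
        gb = sum(-2.0 * (d - (w * x + b)) for x, d in samples)
        w -= eta * gw
        b -= eta * gb
    return w, b
```

On data generated from y = 2x + 1, the operating point descends to the global minimum of this (convex) error surface, w ≈ 2 and b ≈ 1.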
Learning without a Teacher
 There is no teacher to oversee the learning process.
 There are no labeled examples of the function to be learned by
the network.
 There are two subcategories :
1. Reinforcement Learning
2. Unsupervised Learning
Learning without a Teacher
1. Reinforcement Learning:
 The learning of an input–output mapping is performed through
continued interaction with the environment.
 Figure shows reinforcement-learning system built around a
critic that converts a primary reinforcement signal received
from the environment into a higher quality reinforcement signal
called the heuristic reinforcement signal.
Learning without a Teacher
1. Reinforcement Learning:
 The goal is to minimize a cost-to-go function, defined as the
expectation of the cumulative cost of actions taken over a
sequence of steps instead of simply the immediate cost.
 Delayed-reinforcement learning is difficult to perform:
• There is no teacher to provide a desired response at each step of
the learning process.
• The delay incurred in the generation of the primary reinforcement
signal implies that the learning machine must solve a temporal
credit assignment problem.
 It provides the basis for the learning system to interact with its
environment, thereby developing the ability to learn to perform
a prescribed task solely on the basis of the outcomes of its
experience that result from the interaction.
Learning without a Teacher
2. Unsupervised Learning / Self-organized learning:
 There is no external teacher or critic to oversee the learning
process.
 Provision is made for a task-independent measure of the quality of
the representation that the network is required to learn, and the
free parameters of the network are optimized with respect to that
measure.
 For a specific task-independent measure, once the network has
become tuned to the statistical regularities of the input data, the
network develops the ability to form internal representations for
encoding features of the input and thereby to create new classes
automatically.
 To perform unsupervised learning, a competitive-learning rule is
used.
Learning without a Teacher
2. Unsupervised Learning / Self-organized learning:
Ex: Consider a neural network that consists of two layers - an input
layer and a competitive layer.
 The input layer receives the available data.
 The competitive layer consists of neurons that compete with
each other (in accordance with a learning rule) for the
“opportunity” to respond to features contained in the input data.
