Uni2 NNDL

The document provides information about supervised learning networks and shallow neural networks. It discusses perceptron networks, including their theory, architecture using diagrams, and training algorithms for single and multiple output classes. It also describes the training process using flowcharts and pseudocode. The key aspects covered are the perceptron learning rule, units in a perceptron network including sensory, response and output units, weight updating, and training algorithms for perceptrons with single and multiple outputs.


Unit-2

SUPERVISED LEARNING
NETWORK
BY
Dr Deepa S
• SYLLABUS
• Shallow neural networks - Perceptron Networks - Theory - Perceptron Learning Rule - Architecture - Flowchart for Training Process - Perceptron Training Algorithm for Single and Multiple Output Classes.
• Back Propagation Network - Theory - Architecture - Flowchart for Training Process - Training Algorithm - Learning Factors for Back-Propagation Network.
• Radial Basis Function Network (RBFN) - Theory, Architecture, Flowchart and Algorithm.
Shallow neural networks
• Shallow neural networks, also known as single-layer neural networks or
perceptrons, are a type of neural network architecture that consists of a
single layer of artificial neurons.
• Unlike deep neural networks, which have multiple hidden layers between
the input and output layers, shallow neural networks have just one hidden
layer (if any) and directly connect the input to the output layer.
• Shallow neural networks were among the earliest neural network models
developed, with the perceptron, proposed by Frank Rosenblatt in 1957,
being one of the first artificial neural networks. However, their limited
depth makes them less capable of learning complex representations
compared to deep neural networks.
• Perceptron networks: Perceptron networks come under single-layer feed-forward networks and are also called simple perceptrons.
• Original Perceptron network
• A perceptron network with its three units is shown in the figure below:
• 1. Sensory Unit:
• The sensory unit (also known as the input unit) is responsible for receiving the input data and
passing it into the network. In the context of the perceptron model, the input data is typically
represented as a feature vector, where each element of the vector corresponds to a specific input
feature.
• For example, in a simple binary classification problem, the sensory unit may receive a
two-dimensional input vector (x1, x2) representing two features of the input data.
• 2. Response Unit:
• The response unit (also known as the hidden unit) is the processing unit within the perceptron
network. It takes the input from the sensory unit and applies a weighted sum of the inputs,
followed by an activation function.
• Mathematically, the response unit computes the net input (z) as the weighted sum of the input
features (x) and their associated weights (w), along with an optional bias term (b):
• z = (w1 * x1) + (w2 * x2) + ... + (wn * xn) + b
• After calculating the net input, the response unit applies an activation function (often a nonlinear
function) to the net input to produce an output signal.
Original Perceptron network
• 3. Output Unit:
• The output unit receives the output signal from the response unit and generates the final
output of the perceptron network. In binary classification tasks, the output unit typically
applies a threshold function to convert the continuous output from the response unit into
a binary output (0 or 1).
• For example, a common threshold function used in binary classification is the step
function, which can be defined as follows:
• output = 1, if z >= 0
• output = 0, if z < 0
• The output of the output unit represents the prediction or classification made by the
perceptron network based on the input data and the learned weights.
• It's important to note that the terms "sensory unit," "response unit," and "output unit"
are more commonly used in the historical context of the original perceptron model and
are less frequently used in modern deep learning terminologies.
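The three units described above amount to a single forward pass: a weighted sum (net input) followed by a step threshold. A minimal sketch in Python, where the function name, the example weights, and the bias are illustrative choices of mine, not from the original slides:

```python
# Illustrative perceptron forward pass: net input z = w.x + b, then a binary step.
def perceptron_output(x, w, b, theta=0.0):
    # Weighted sum of inputs plus bias (the response unit's net input)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    # Step activation in the output unit: 1 if z >= theta, else 0
    return 1 if z >= theta else 0

# Example weights chosen so the unit behaves like a logical AND on {0, 1} inputs
print(perceptron_output([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron_output([1, 0], [0.5, 0.5], -0.7))  # 0
```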
Original Perceptron network
• Learning rule
• In the case of the perceptron learning rule, the learning signal is the difference between the desired and actual response of a neuron.
• The perceptron learning rule is explained as follows:
• Consider a finite number "n" of input training vectors, with their associated target values x(n) and t(n), where "n" ranges from 1 to N.
• The target is either +1 or -1.
• The output "y" is obtained on the basis of the net input calculated and the activation function applied over the net input.

• The weight updation in case of perceptron learning is as shown:
• w(new) = w(old) + α t x, if y ≠ t
• w(new) = w(old), if y = t
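The rule, update the weights and bias only when the calculated output differs from the target, can be sketched in Python as follows. This is an illustration under my own naming choices (`update_weights`, its signature, and the sample values are not from the slides):

```python
# Sketch of the perceptron weight update: w(new) = w(old) + alpha * t * x when y != t.
def update_weights(w, b, x, t, y, alpha=1.0):
    if y != t:  # update only on a misclassified example
        w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
        b = b + alpha * t
    return w, b  # unchanged when the output already matches the target

# A single misclassified example (target +1, actual -1) shifts w toward t * x
w, b = update_weights([0.0, 0.0], 0.0, [1, -1], 1, -1)
print(w, b)  # [1.0, -1.0] 1.0
```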


Architecture
• A simple perceptron network
architecture is shown in Figure
• In Figure, there are n input
neurons, 1 output neuron and a
bias.
• The input-layer and output layer
neurons are connected through a
directed communication link, which
is associated with weights.
• The goal of the perceptron net is to
classify the input pattern as a
member or not a member of a
particular class.
Training algorithm/process
Flowchart for Training Process

Flowchart for Training Process
• The flowchart presents the flow of the training
process.
• As depicted in the flowchart, first the basic initialization
required for the training process is performed.
• The training loop then continues as each training
input pair is presented to the network.
• The training (weight updation) is done on the basis of the comparison
between the calculated and the desired output.
• The loop is terminated when there is no change in the weights.
• What is the learning algorithm?
• It is an adaptive method that self-organizes a network of computing
units to implement the required behavior. Some of these algorithms
do this by presenting the network with examples of the
required input-output mapping.
• The perceptron algorithm, in its most basic form, is used for
binary classification of data. The perceptron takes its name from the basic
unit of a neuron, which goes by the same name.
Perceptron Training Algorithm for Single
Output Classes
• The perceptron algorithm can be used for either binary or bipolar
input vectors, having bipolar targets, threshold being fixed and
variable bias.
• In the algorithm below, initially the inputs are assigned. Then the net
input is calculated. The output of the network is obtained by applying
the activation function over the calculated net input.
• On performing comparison over the calculated and the desired
output, the weight updation process is carried out. The entire
network is trained based on the mentioned stopping criterion.
Perceptron Training Algorithm for Single
Output Classes
• The algorithm of a perceptron network is as follows:
• Step 0: Initialize the weights and the bias. Also initialize the learning
rate α (0 < α ≤ 1). For simplicity, α is set to 1.
• Step 1: Perform Steps 2-6 until the final stopping condition is false.
• Step 2: Perform Steps 3-5 for each training pair indicated by s:t.
• Step 3: The input layer containing input units is applied with identity
activation functions:
• xi = si
• Step 4: Calculate the output of the network. To do so, first obtain the net
input:
• yin = b + (x1 * w1) + (x2 * w2) + ... + (xn * wn)
• where "n" is the number of input neurons in the input layer. Then apply the activation over
the net input calculated to obtain the output:
• y = 1 if yin > θ; y = 0 if -θ ≤ yin ≤ θ; y = -1 if yin < -θ
• Step 5: Weight and bias adjustment: Compare the value of the actual (calculated) output
and the desired (target) output.
• If y ≠ t: wi(new) = wi(old) + α t xi and b(new) = b(old) + α t
• If y = t: wi(new) = wi(old) and b(new) = b(old)
• Step 6: Train the network until there is no weight change. This is the stopping condition
for the network. If this condition is not met, then start again from Step 2.
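The steps above can be sketched as a short Python program. This is a hedged illustration, not the author's code: the bipolar AND training data, the threshold value theta = 0.2, and all identifiers are my own assumptions.

```python
# Sketch of the single-output perceptron training algorithm (Steps 0-6),
# with bipolar targets and a fixed threshold theta.
def activation(z, theta=0.2):
    if z > theta:
        return 1
    if z < -theta:
        return -1
    return 0

def train_perceptron(samples, n, alpha=1.0, theta=0.2, max_epochs=100):
    w = [0.0] * n                        # Step 0: initialize weights
    b = 0.0                              # and bias
    for _ in range(max_epochs):          # Step 1: repeat until stopping condition
        changed = False
        for x, t in samples:             # Step 2: each training pair s:t
            # Step 3: identity activation on the inputs (xi = si)
            z = b + sum(wi * xi for wi, xi in zip(w, x))  # Step 4: net input
            y = activation(z, theta)
            if y != t:                   # Step 5: update only on mismatch
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                  # Step 6: stop when no weight change
            break
    return w, b

# Bipolar AND function as example training data (an assumption for illustration)
data = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_perceptron(data, n=2)
```

Running this on the bipolar AND data converges in a few epochs; the learned weights then classify all four training pairs correctly.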
Single output class
Perceptron Training Algorithm for Multiple
Output Classes
• Step 0: Initialize the weights, biases and learning rate suitably.
• Step 1: Check for the stopping condition; if it is false, perform Steps 2-6.
• Step 2: Perform Steps 3-5 for each bipolar or binary training vector
pair s:t.
• Step 3: Set the activation (identity) of each input unit, i = 1 to n:
• xi = si
• Step 4: Calculate the output response of each output unit, j = 1 to m. First
the net input is calculated as:
• yinj = bj + (x1 * w1j) + (x2 * w2j) + ... + (xn * wnj)
• Then activations are applied over the net input to calculate the output
response:
• yj = 1 if yinj > θ; yj = 0 if -θ ≤ yinj ≤ θ; yj = -1 if yinj < -θ
• Step 5: Make adjustments in the weights and bias for j = 1 to m and i = 1 to n:
• If yj ≠ tj: wij(new) = wij(old) + α tj xi and bj(new) = bj(old) + α tj
• If yj = tj: no change in the weights and bias.
• Step 6: Test for the stopping condition, i.e., if there is no change in the
weights, then stop the training process; else start again from Step 2.
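For multiple output classes, each output unit j gets its own weight column and bias, and the same update is applied per unit. A hedged sketch under my own naming and data choices (the two-sample training set and theta = 0.2 are illustrative assumptions):

```python
# Sketch of multi-output perceptron training (Steps 0-6): w[i][j] connects
# input unit i to output unit j; targets are bipolar vectors of length m.
def activation(z, theta=0.2):
    return 1 if z > theta else (-1 if z < -theta else 0)

def train_multi(samples, n, m, alpha=1.0, theta=0.2, max_epochs=100):
    w = [[0.0] * m for _ in range(n)]    # Step 0: weights w[i][j]
    b = [0.0] * m                        # one bias per output unit
    for _ in range(max_epochs):          # Step 1
        changed = False
        for x, t in samples:             # Steps 2-3: present pair s:t
            for j in range(m):           # Step 4: net input per output unit
                z = b[j] + sum(x[i] * w[i][j] for i in range(n))
                y = activation(z, theta)
                if y != t[j]:            # Step 5: per-unit weight update
                    for i in range(n):
                        w[i][j] += alpha * t[j] * x[i]
                    b[j] += alpha * t[j]
                    changed = True
        if not changed:                  # Step 6: stop when weights settle
            break
    return w, b

# Two bipolar patterns with opposite two-unit targets (illustrative data)
samples = [([1, 1], [1, -1]), ([-1, -1], [-1, 1])]
w, b = train_multi(samples, n=2, m=2)
```

Each output column is effectively an independent single-output perceptron trained on its own target component.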
• Perceptron Network Testing Algorithm
• The testing algorithm is as follows:
• Step 0: The initial weights to be used here are taken from the training
algorithm.
• Step 1: For each input vector x to be classified, perform Steps 2-3.
• Step 2: Set the activations of the input units.
• Step 3: Obtain the response of the output unit:
• yin = b + (x1 * w1) + (x2 * w2) + ... + (xn * wn)
• y = 1 if yin > θ; y = 0 if -θ ≤ yin ≤ θ; y = -1 if yin < -θ
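The testing (recall) phase reuses the trained weights with the same activation rule and no further updates. A minimal sketch; the weight values shown are assumed to come from a prior training run on the bipolar AND function, and all names are my own:

```python
# Sketch of the perceptron testing algorithm: classify input vectors
# using weights taken from training (Step 0), no weight updates here.
def activation(z, theta=0.2):
    return 1 if z > theta else (-1 if z < -theta else 0)

def classify(x, w, b, theta=0.2):
    # Steps 2-3: set input activations, compute the net input, apply the activation
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return activation(z, theta)

# Weights assumed from a prior training run on the bipolar AND function
w, b = [1.0, 1.0], -1.0
print(classify([1, 1], w, b))    # 1
print(classify([1, -1], w, b))   # -1
```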