
Module 5

CLASSIFICATION MODELS
Content
➢ Biological neuron and Artificial neuron
➢ McCulloch-Pitts Model
➢ Activation Function, various types of Activation Functions and types
of Neural Network Architectures
➢ Prerequisites for Training of Neural Networks. Linearly Separable and
Linearly Non-Separable Systems with examples
➢ Concepts of Supervised Learning, Unsupervised Learning, and
Reinforcement Learning.
➢ Brief survey of applications of Neural Networks.

Course outcome:
Comprehend the concepts of biological neurons and artificial neurons
Artificial Neural Networks
➢ What is a natural neural network?
➢ The human brain is a highly complex, non-linear, parallel computer.
➢ Neurons are the structural constituents of the brain.

➢ The human brain contains billions of cells and trillions of interconnections.


Comparison of Brains and Traditional Computers

| | Brain | Traditional computer |
|---|---|---|
| Scale | 200 billion neurons, 32 trillion synapses | ~1 billion bytes of RAM, trillions of bytes on disk |
| Element size | 10^-6 m | 10^-9 m |
| Energy use | 25 W | 30-90 W (CPU) |
| Processing speed | ~100 Hz | ~10^9 Hz |
| Processing style | Parallel, distributed | Serial, centralized |
| Fault tolerant | Yes | Generally not |
| Learns | Yes | Some |
| Intelligent/conscious | Usually | Generally no |
Biological Neural Network
Biological Neuron:

➢ Dendrites: a tree-like network of nerve fibers that carry electrical signals to the soma (cell body).
➢ Cell body / Soma: sums the incoming signals. When sufficient input is received, the cell fires; that is, it transmits a signal over its axon to other cells. It also contains the nucleus.
➢ Axon: a single long fiber that carries the electrical signal from the cell body to other neurons.
➢ Synapse: the point of contact between the axon of one cell and the dendrite of another; its strength affects the input to the cell.
An Artificial Neuron

It may also have a bias.

An artificial neuron with bias:

y_in = x1·w1 + x2·w2 + ... + xm·wm + bk

i.e., y_in = Σ (i = 1 to m) xi·wi + bk

y = f(y_in)

Output = function of the calculated net input.

For example, consider the following output function:

f(y_in) = 1 if y_in ≥ θ
        = 0 if y_in < θ

where θ is a threshold parameter.

An artificial neuron:
- computes the weighted sum of its input (called its net input)
- adds its bias
- passes this value through an activation function
We say that the neuron "fires" (i.e., becomes active) if its output is above zero.
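As a concrete illustration, here is a minimal Python sketch of such a neuron (the function and variable names are illustrative, not from any particular library):

```python
# Minimal sketch of an artificial neuron with bias and a binary
# threshold activation, as described above.

def artificial_neuron(inputs, weights, bias, theta):
    """Weighted sum of inputs plus bias, passed through a step function."""
    y_in = sum(x * w for x, w in zip(inputs, weights)) + bias  # net input
    return 1 if y_in >= theta else 0                           # f(y_in)

# Example: two inputs, unit weights, zero bias, threshold 1 (behaves like OR)
print(artificial_neuron([1, 0], [1, 1], bias=0, theta=1))  # -> 1
```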
| Artificial Neural Network (ANN) | Biological Neural Network (BNN) |
|---|---|
| Processing speed is fast compared to a BNN. | Slow in processing information. |
| Allocation of storage to a new process is strictly irreplaceable, as the old location is reserved for the previous process. | Allocation of storage to a new process is easy; it is added simply by adjusting the interconnection strengths. |
| Processes operate in sequential mode. | Processes can operate in massively parallel fashion. |
| If any information in memory gets corrupted, it cannot be retrieved. | Information is distributed throughout the network in sub-nodes, so even if it gets corrupted it can be retrieved. |
| Activities are continuously monitored by a control unit. | There is no control unit to monitor the information being processed in the network. |
| Not fault tolerant. | Fault tolerant. |


Definition of ANN
"A data processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons) in an architecture inspired by the structure of the cerebral cortex of the brain."

ANNs have been developed as generalizations of mathematical models of neural biology, based on the assumptions that:
1. Information processing occurs at many simple elements called neurons.
2. Signals are passed between neurons over connection links.
3. Each connection link has an associated weight, which, in a typical neural net, multiplies the signal transmitted.
4. Each neuron applies an activation function to its net input to determine its output signal.
McCulloch-Pitts Neuron (M-P Neuron)
McCulloch (a neuroscientist) and Pitts (a logician) proposed a highly simplified computational model of the neuron (1943).
Characteristics:
1. Neuron activation is binary: a neuron either fires or does not fire.
2. For a neuron to fire, the weighted sum of its inputs must be greater than or equal to a predefined threshold.
3. If one or more inputs are inhibitory, the neuron will not fire.
4. A signal takes a fixed one time step to pass through a link.
5. Neither the structure nor the weights change over time.
6. The M-P neuron has no particular training algorithm; an analysis has to be performed manually to determine the values of the weights and the threshold.
M-P Neuron architecture

The M-P neuron has both excitatory and inhibitory connections: a connection is excitatory with weight w (w > 0) or inhibitory with weight −p (p > 0).
• Inhibitory: impulses try to stop the firing of the receiving cell.
• Excitatory: impulses cause the firing of the receiving cell.
• Inputs x1 to xn possess excitatory weighted connections.
• Inputs xn+1 to xn+m possess inhibitory weighted connections.
• Since the firing of the output neuron is based on the threshold, the activation function is defined as
  f(y_in) = 1 if y_in ≥ θ
          = 0 if y_in < θ
• For inhibition to be absolute, the threshold θ of the activation function should satisfy θ > n·w − p.
• The output will fire if it receives k or more excitatory inputs but no inhibitory inputs, i.e., k·w ≥ θ > (k − 1)·w.
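These rules are easy to express in code. Below is a minimal Python sketch of an M-P neuron (the function name and argument layout are my own illustration, assuming absolute inhibition):

```python
# Sketch of an M-P neuron: binary inputs split into excitatory and
# inhibitory groups; any active inhibitory input vetoes firing outright.

def mp_neuron(excitatory, inhibitory, w=1, theta=1):
    """excitatory/inhibitory are lists of 0/1 inputs; w is the common
    excitatory weight, theta the firing threshold."""
    if any(inhibitory):          # absolute inhibition
        return 0
    y_in = w * sum(excitatory)   # net input from excitatory connections
    return 1 if y_in >= theta else 0

# AND NOT (x1 AND NOT x2): x1 excitatory, x2 inhibitory, theta = 1
print(mp_neuron(excitatory=[1], inhibitory=[0]))  # -> 1
print(mp_neuron(excitatory=[1], inhibitory=[1]))  # -> 0
```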
Example 1:
For the network shown in Figure 1, calculate the net input to the output neuron.

Given data:
[x1, x2, x3] = [0.3, 0.5, 0.6] and [w1, w2, w3] = [0.2, 0.1, −0.3]

The net input is calculated as
y_in = x1·w1 + x2·w2 + x3·w3 = 0.3 × 0.2 + 0.5 × 0.1 + 0.6 × (−0.3) = 0.06 + 0.05 − 0.18 = −0.07
Example 2: Calculate the net input for the network shown in the figure, with a bias included in the network.
Example 3: Implement the AND function using a McCulloch-Pitts neuron (take binary data).

SOLUTION: The truth table of the AND gate is

| x1 | x2 | y |
|----|----|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

In a McCulloch-Pitts neuron only analysis is performed, so assume the weights to be w1 = w2 = 1. With these assumed weights, the net input y_in = x1·w1 + x2·w2 is calculated for the four inputs: 0, 1, 1 and 2.

For the AND function, the output is high only if both inputs are high, and for that input the net input is 2. Hence the threshold is set based on this value: the neuron fires if the net input is greater than or equal to 2, else it does not fire. So the threshold is set to 2 (θ = 2).
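A quick check of this analysis (a small Python loop over the four input pairs):

```python
# Verify the AND analysis: w1 = w2 = 1, theta = 2.
for x1 in (0, 1):
    for x2 in (0, 1):
        y_in = x1 * 1 + x2 * 1                      # net input
        print(x1, x2, "->", 1 if y_in >= 2 else 0)  # fires only for (1, 1)
```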
Example 4: Implement the OR function using a McCulloch-Pitts neuron (take binary data).

SOLUTION: The truth table of the OR gate is

| x1 | x2 | y |
|----|----|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

In a McCulloch-Pitts neuron only analysis is performed, so assume the weights to be w1 = w2 = 1. With these assumed weights, the net input is calculated for the four inputs: 0, 1, 1 and 2.

For the OR function, the output is high if at least one input is high; for these inputs the net input is 1 or 2. Hence the threshold is set accordingly: the neuron fires if the net input is greater than or equal to 1, else it does not fire. So the threshold is set to 1 (θ = 1).
Example 5: Implement the AND NOT function using a McCulloch-Pitts neuron (take binary data).

SOLUTION: In the AND NOT function, the output is true only if the first input is true and the second input is false, i.e., only when x1 = 1 and x2 = 0. The truth table of the AND NOT gate is

| x1 | x2 | y |
|----|----|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

The weights have to be decided only after analysis. First assume both weights excitatory, w1 = w2 = 1; the net inputs are then 0, 1, 1 and 2, and it is not possible to fire the neuron for input (1, 0) only.
Next assume w1 = 1 (excitatory) and w2 = −1 (inhibitory); the net inputs become 0, −1, 1 and 0, and now it is possible to fire the neuron for input (1, 0) only by fixing a threshold of 1 (θ = 1).
Example 6: Implement the XOR function using McCulloch-Pitts neurons (consider binary data).

Solution: The truth table for the XOR function is

| x1 | x2 | y |
|----|----|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

In this case, the output is "ON" only for an odd number of 1's; for the rest it is "OFF". The XOR function cannot be represented by a simple, single logic function; it is represented as

y = z1 OR z2, where z1 = x1 AND NOT x2 and z2 = x2 AND NOT x1.

A single-layer net is not sufficient to represent it; an intermediate layer is necessary.

First function, z1 = x1 AND NOT x2. Its truth table is

| x1 | x2 | z1 |
|----|----|----|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

Assuming both weights excitatory (w11 = w21 = 1), it is not possible to obtain the required activation with these weights. With w11 = 1 and w21 = −1 it is possible to get the desired output with a threshold of 1.

Second function, z2 = x2 AND NOT x1. Its truth table is

| x1 | x2 | z2 |
|----|----|----|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 0 |
| 1 | 1 | 0 |

Similarly, with w12 = w22 = 1 it is not possible to obtain the required activation; with w12 = −1 and w22 = 1 it is possible to get the desired output with a threshold of 1.

Third function: y = z1 OR z2. The truth table is

| x1 | x2 | z1 | z2 | y |
|----|----|----|----|---|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 | 1 |
| 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 | 0 |

Here the net input is calculated as y_in = z1·v1 + z2·v2. Assume both weights excitatory, i.e., v1 = v2 = 1, and set the threshold θ = 1.

Thus, the weights obtained for the XOR function are:
w11 = w22 = 1 (excitatory)
w12 = w21 = −1 (inhibitory)
v1 = v2 = 1 (excitatory)
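Putting the derived weights together, a short Python sketch of the two-layer M-P realization of XOR:

```python
# Two-layer M-P network for XOR with the weights derived above:
# z1 = x1 AND NOT x2, z2 = x2 AND NOT x1, y = z1 OR z2 (all thresholds 1).

def step(y_in, theta=1):
    return 1 if y_in >= theta else 0

def xor_mp(x1, x2):
    z1 = step(1 * x1 + (-1) * x2)   # w11 = 1, w21 = -1
    z2 = step((-1) * x1 + 1 * x2)   # w12 = -1, w22 = 1
    return step(1 * z1 + 1 * z2)    # v1 = v2 = 1

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_mp(a, b))  # prints 0, 1, 1, 0
```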
Example 7: Implement the OR function using a McCulloch-Pitts neuron (take bipolar data).

SOLUTION: The truth table of the bipolar OR gate is

| x1 | x2 | y |
|----|----|---|
| −1 | −1 | −1 |
| −1 | 1 | 1 |
| 1 | −1 | 1 |
| 1 | 1 | 1 |

In a McCulloch-Pitts neuron only analysis is performed, so assume the weights to be w1 = w2 = 1. The net input y_in = x1·w1 + x2·w2 for each input pair is:
(−1, −1): y_in = (−1)(1) + (−1)(1) = −2
(−1, 1): y_in = (−1)(1) + (1)(1) = 0
(1, −1): y_in = (1)(1) + (−1)(1) = 0
(1, 1): y_in = (1)(1) + (1)(1) = 2

With these weights it is possible to get the desired output: the output is 1 if the net input is greater than or equal to 0, so the threshold is 0 (θ = 0):
y = 1 if y_in ≥ 0
  = −1 if y_in < 0
Linear Separability
• Separation of the input space into regions is based on whether the network response is positive or negative.
• The line of separation is called the linearly separable line (the decision boundary).
• Examples:
  – The AND function and the OR function are linearly separable.
  – The XOR function is linearly non-separable.

OR function with binary data:
output y = 1 if y_in ≥ 1
         = 0 if y_in < 1
y_in = x1·w1 + x2·w2 with w1 = w2 = 1, so y_in = x1 + x2.
Here y = 1 if x1 + x2 ≥ 1, else y = 0.

[Plot: the points (0,0), (0,1), (1,0), (1,1) in the x1-x2 plane; the line x1 + x2 = 1 separates (0,0), where y = 0, from the other three points, where y = 1.]
AND function:
output y = 1 if y_in ≥ 2
         = 0 if y_in < 2
y_in = x1·w1 + x2·w2 with w1 = w2 = 1, so y_in = x1 + x2.
Here y = 1 if x1 + x2 ≥ 2, else y = 0.

[Plot: the line x1 + x2 = 2 separates (1,1), where y = 1, from the points (0,0), (0,1) and (1,0), where y = 0.]
XOR function:
The output is 1 if the input is (0,1) or (1,0), and 0 if the input is (0,0) or (1,1).
This is a linearly non-separable function: no single straight line can separate the points (0,1) and (1,0) from the points (0,0) and (1,1).

[Plot: the four corner points in the x1-x2 plane; no single line separates the two classes.]
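The non-separability can also be demonstrated by brute force. The sketch below (my own illustration, not from the slides) scans a grid of candidate weights and thresholds and finds no single threshold unit that reproduces XOR:

```python
# No line w1*x1 + w2*x2 = theta separates the XOR classes: an exhaustive
# search over a grid of candidate (w1, w2, theta) values finds nothing.
import itertools

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
vals = [i / 4 for i in range(-8, 9)]   # candidates from -2.0 to 2.0

found = any(
    all((1 if w1 * x1 + w2 * x2 >= theta else 0) == y
        for (x1, x2), y in xor.items())
    for w1, w2, theta in itertools.product(vals, repeat=3)
)
print("separating threshold unit found:", found)  # -> False
```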
Example 8: Implement the 3-input OR function using a McCulloch-Pitts neuron (take binary data).

SOLUTION:
Let us assume the weights to be w1 = w2 = w3 = 1. The truth table is

| x1 | x2 | x3 | y |
|----|----|----|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 |
| 1 | 1 | 1 | 1 |

y = 1 if y_in ≥ 1, where y_in = x1·w1 + x2·w2 + x3·w3 = x1 + x2 + x3.

The 3-input OR function is linearly separable in three dimensions: the plane x1 + x2 + x3 = 1 separates the corner (0,0,0), where y = 0, from the other seven corners of the unit cube, where y = 1.

[Plot: the unit cube with corners (0,0,0) to (1,1,1) on axes X1, X2, X3; the plane x1 + x2 + x3 = 1 separates the origin from the remaining corners.]
Characterization of Neural Network
➢ Activation Function
– Function to compute output signal from input signal
➢ Architecture
– a pattern of connections between neurons
• Single Layer Feed-forward
• Multilayer Feed-forward
• Recurrent
➢ Strategy / Learning Algorithm
– a method of determining the connection weights
• Supervised
• Unsupervised
• Reinforcement
Learning
➢ There are two broad kinds of learning in ANNs:
i) Parameter learning – updates the connection weights in a neural net.
ii) Structure learning – focuses on changes in the network structure.
➢ Apart from these, learning in an ANN is classified into three categories:
i) supervised learning
ii) unsupervised learning
iii) reinforcement learning
A clustering exercise (a preview of unsupervised learning): sixteen everyday phrases, written in three unfamiliar scripts, are laid out in a 4×4 grid (numbered 1-16, row by row):

| 1. 你好吗 | 2. ਤੁਸੀ ਕਿਵੇਂ ਹੋ | 3. Қалайсыз | 4. ਤੁਸੀਂ ਕੀ ਕਰਦੇ ਹੋ |
| 5. ਤੁਹਾਡਾ ਨਾਮ ਕੀ ਹੈ | 6. сенің атың кім | 7. 你叫什么名字 | 8. сен не істейсің |
| 9. 你做什么 | 10. ਕੀ ਤੁਸੀਂ ਸਕੂਲ ਜਾਂਦੇ ਹੋ | 11. сен мектепке | 12. 你去学校吗 |
| 13. 祝你有美好的一天 | 14. ਤੁਹਾਡਾ ਦਿਨ ਚੰਗਾ ਬੀਤੇ | 15. күніңіз жақсы өтсін | 16. қайырлы таң |

Without knowing any labels, the phrases can be grouped purely by the visual similarity of the script:
Group 1: 1, 7, 9, 12, 13
Group 2: 2, 4, 5, 10, 14
Group 3: 3, 6, 8, 11, 15, 16

The groups turn out to correspond to languages: Group 1 is Chinese, Group 2 is Punjabi, and Group 3 is Kazakh. New phrases (e.g. 你有笔吗, ਕੀ ਤੁਹਾਡੇ ਕੋਲ ਕਲਮ ਹੈ, кешкі жұлдыз) can then be assigned to the Chinese, Punjabi, and Kazakh clusters respectively. This is what unsupervised learning does: it discovers clusters from similarities in the data, without a teacher.
Supervised learning

• Learning with the help of a teacher.
• Example: the learning process of a small child.
  – The child does not yet know how to read or write.
  – Each and every action is supervised by a teacher.
• In an ANN, each input vector requires a corresponding target vector, which represents the desired output.
• The input vector together with the target vector is called a training pair.
• The input vector produces an actual output vector.
• The actual output vector is compared with the desired output vector.
• If there is a difference, an error signal is generated by the network.
• This error signal is used to adjust the weights until the actual output matches the desired output; a minimal sketch of such a loop follows.
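The following Python sketch illustrates the idea with a perceptron-style update rule (an assumed illustration of supervised weight adjustment; note the M-P neuron itself has no training algorithm):

```python
# Supervised learning sketch: adjust weights from (input, target) pairs
# until the actual output matches the desired output (here, OR).

training_pairs = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1                                       # learning rate

for epoch in range(20):
    for (x1, x2), target in training_pairs:
        y = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
        error = target - y                     # the "error signal"
        w[0] += lr * error * x1                # weight adjustment
        w[1] += lr * error * x2
        b += lr * error

print(w, b)  # weights and bias that realize OR
```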
Training data:

| Feature | Sample 1 | Sample 2 | Sample 3 | Sample 4 |
|---|---|---|---|---|
| Colour | Red (250) | Blue (10) | Yellow (150) | Green (100) |
| No. of edges | 4 | 0 | 3 | 5 |

Test input: Colour = 240, Edges = 4

Result: square
Unsupervised learning

• Learning is performed without the help of a teacher.
• In an ANN, during the training process the network receives input patterns and organizes them to form clusters.
• No feedback is applied from the environment to inform the network what the output should be or whether it is correct.
• The network itself discovers patterns, regularities, features or categories from the input data, and the relations of the input data to the output.
• Exact clusters are formed by discovering similarities and dissimilarities, which is why this is also called self-organizing; a minimal sketch follows.
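As a small illustration (my own sketch, using winner-take-all competitive learning, one simple self-organizing scheme), cluster centres drift toward the inputs most similar to them, with no target outputs involved:

```python
# Unsupervised (self-organizing) sketch: winner-take-all competitive
# learning moves each cluster centre toward the inputs it wins.
import random

data = [(0.1, 0.2), (0.15, 0.1), (0.9, 0.8), (0.85, 0.95)]  # two clusters
centres = [list(p) for p in random.sample(data, 2)]          # initial centres
lr = 0.5

for _ in range(50):
    x = random.choice(data)
    # the "winner" is the centre nearest to the input
    i = min(range(len(centres)),
            key=lambda k: (centres[k][0] - x[0]) ** 2
                          + (centres[k][1] - x[1]) ** 2)
    centres[i][0] += lr * (x[0] - centres[i][0])   # move winner toward input
    centres[i][1] += lr * (x[1] - centres[i][1])

print(centres)  # one centre near each natural cluster
```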
Reinforcement learning
• Similar to supervised learning.
• Information is supplied as to whether the network outputs are good or bad, but no actual desired values are given.
• Learning proceeds from interaction with an environment, in order to achieve some long-term goal related to the state of the environment.
• The goal is defined by a reward signal, which must be maximized.
• The agent must be able to partially or fully sense the environment state and take actions that influence it.
• The state is typically described with a feature vector.
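A tiny sketch of the reward-driven idea (an assumed example using an epsilon-greedy two-armed bandit, not from the slides): the learner is told only how good an action was, never which action was correct.

```python
# Reinforcement sketch: pick actions, observe only a scalar reward, and
# update value estimates to maximize the reward over time.
import random

reward_prob = {"A": 0.2, "B": 0.8}   # hidden environment
values = {"A": 0.0, "B": 0.0}        # agent's running estimates
counts = {"A": 0, "B": 0}

for _ in range(1000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.choice(list(values))
    else:                                      # otherwise exploit
        action = max(values, key=values.get)
    reward = 1 if random.random() < reward_prob[action] else 0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the estimate for "B" should approach 0.8
```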
Activation Functions
➢ An activation function decides whether a neuron should be activated or not.
➢ Its role is to derive the output from the set of input values fed to a node.
➢ The neuron calculates the weighted sum of its inputs, adds the bias, and the activation function decides whether to fire that particular neuron or not.
➢ If we do not apply an activation function, the output signal would simply be a linear function, i.e., a polynomial of degree one.
➢ A neural network without activation functions would simply be a linear regression model, which has limited power and would not be able to learn and model complicated kinds of data such as images, videos, or audio.
1. Identity function: a linear function, defined as
   f(x) = x for all x.

2. Binary step function: defined as
   f(x) = 1 if x ≥ θ
        = 0 if x < θ
   where θ represents the threshold value. It is used in single-layer networks to convert the net input to a binary output (0 or 1).

3. Bipolar step function: defined as
   f(x) = +1 if x ≥ θ
        = −1 if x < θ
   where θ represents the threshold value. It is used in single-layer networks to convert the net input to a bipolar output (+1 or −1).

4. Sigmoid function: used in back-propagation networks. Two types:
   a) Binary sigmoid function (also termed the logistic or unipolar sigmoid function), defined as
      f(x) = 1 / (1 + e^(−λx))
      where λ represents the steepness parameter. The range of this function is 0 to 1.
      Derivative of the function: f′(x) = λ·f(x)·[1 − f(x)]
   b) Bipolar sigmoid function, defined as
      f(x) = (1 − e^(−λx)) / (1 + e^(−λx)) = tanh(λx/2)
      Its range is −1 to +1.
      Derivative of the function: f′(x) = (λ/2)·[1 + f(x)]·[1 − f(x)]

5. Ramp function: defined as
   f(x) = 1 if x > 1
        = x if 0 ≤ x ≤ 1
        = 0 if x < 0
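For reference, the functions above translate directly into Python (a plain sketch; lambda_ is the steepness parameter and theta the threshold):

```python
# The activation functions listed above.
import math

def identity(x):
    return x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1

def binary_sigmoid(x, lambda_=1.0):
    return 1.0 / (1.0 + math.exp(-lambda_ * x))   # range (0, 1)

def bipolar_sigmoid(x, lambda_=1.0):
    return math.tanh(lambda_ * x / 2.0)           # range (-1, 1)

def ramp(x):
    return 1.0 if x > 1 else (x if x >= 0 else 0.0)
```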
Types of Neural Network Architectures

1. Single layer feed forward network


2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network
Single-layer Feed-forward Network / A simple perceptron network

Input layer → Output layer

➢ In this type of network we have only two layers, the input layer and the output layer, but the input layer does not count because no computation is performed in it.
➢ Inputs and outputs are linked to each other.
➢ The inputs are connected to the processing nodes with various weights.
➢ The output layer is formed when the different weights are applied to the input nodes and the cumulative effect per node is taken.
➢ This is called a single-layer feed-forward network: single-layer because the output layer alone performs the computation, and feed-forward (acyclic) in nature.
Multi-layer Feed-forward Network

[Figure: input layer, hidden layers and output layer connected by weight matrices U, V, W.]

➢ This network is formed by the interconnection of several layers.
➢ The input layer receives input and buffers the input signal.
➢ The output layer generates the output.
➢ A layer between the input and output layers is called a hidden layer; it is internal to the network.
➢ There can be one to several hidden layers in a network.
➢ More hidden layers mean more network complexity, but more capable output.
➢ The existence of one or more hidden layers makes the network computationally stronger; a minimal forward pass is sketched below.
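To make the data flow concrete, here is a minimal forward pass through one hidden layer (an illustrative sketch with arbitrary random weights, not a trained network; U and V name the weight matrices by analogy with the figure):

```python
# Forward pass of a small multilayer feed-forward network.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(3)             # input layer: 3 features
U = rng.random((3, 4))        # weights: input -> hidden (4 units)
V = rng.random((4, 2))        # weights: hidden -> output (2 units)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = sigmoid(x @ U)            # hidden-layer activations
y = sigmoid(h @ V)            # output-layer activations
print(y)
```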
Advantages of Feed Forward Neural Networks
1. Less complex, easy to design & maintain
2. Fast and speedy [One-way propagation]
3. Highly responsive to noisy data
Disadvantages of Feed Forward Neural Networks:
1. Cannot be used for deep learning [due to absence of dense
layers and back propagation]
Single node with its own feedback

➢ When outputs can be directed back as inputs to nodes in the same layer or a preceding layer, the result is a feedback network.
➢ Recurrent networks are feedback networks with a closed loop. The figure above shows a single neuron with feedback to itself.
Single-layer recurrent network
➢ This is a single-layer network with a feedback connection, in which a processing element's output can be directed back to itself, to other processing elements, or to both.
➢ A recurrent neural network (RNN) is a class of artificial neural network in which the connections between nodes form a directed graph along a sequence.
➢ This allows it to exhibit dynamic temporal behavior for a time sequence.
➢ Unlike feed-forward neural networks, RNNs can use their internal state (memory) to process sequences of inputs, as the sketch below illustrates.
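A minimal sketch of that internal state (illustrative shapes and random weights; W_rec plays the role of the feedback connection):

```python
# Recurrent state update: the hidden state h carries memory of earlier
# inputs, so the same weights process a whole sequence step by step.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.random((2, 3))     # weights: input -> hidden
W_rec = rng.random((3, 3))    # weights: hidden -> hidden (feedback loop)

h = np.zeros(3)               # initial state
sequence = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
for x_t in sequence:
    h = np.tanh(x_t @ W_in + h @ W_rec)   # state depends on all past inputs
print(h)
```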
Multi-layer recurrent network
➢ In this type of network, a processing element's output can be directed to processing elements in the same layer and in the preceding layer, forming a multilayer recurrent network.
➢ They perform the same task for every element of the sequence, with the output depending on the previous computations.
➢ Inputs are not needed at each time step.
➢ The main feature of a multilayer recurrent network is its hidden state, which captures information about the sequence.
Advantages of Recurrent Neural Networks
1. They can model sequential data, where each sample can be assumed to depend on the previous ones.
Disadvantages of Recurrent Neural Networks
1. Vanishing and exploding gradient problems.
2. Training recurrent neural nets can be a difficult task.
3. It is difficult to process long sequences when using ReLU as the activation function.
Applications of ANN:
➢ Air traffic control could be automated with the location, altitude,
direction and speed of each radar blip taken as input to the
network. The output would be the air traffic controller's instruction
in response to each blip.
➢ Animal behavior, predator/prey relationships and population cycles
may be suitable for analysis by neural networks.
➢ Appraisal and valuation of property, buildings, automobiles,
machinery, etc. should be an easy task for a neural network.
➢ Betting on horse races, stock markets, sporting events, etc. could be
based on neural network predictions.
➢ Weather prediction may be possible. Inputs would include weather
reports from surrounding areas. Output (s) would be the future
weather in specific areas based on the input information.
➢ Criminal sentencing could be predicted using a large sample of
crime details as input and the resulting sentences as output.
➢ Complex physical and chemical processes that may involve the interaction of numerous (possibly unknown) mathematical formulas could be modeled heuristically using a neural network.
➢ Data mining, cleaning and validation could be achieved by
determining which records suspiciously diverge from the pattern
of their peers.
➢ Direct mail advertisers could use neural network analysis of
their databases to decide which customers should be
targeted, and avoid wasting money on unlikely targets.
➢ Echo patterns from sonar, radar, seismic and magnetic
instruments could be used to predict their targets.
➢ Econometric modeling based on neural networks should be
more realistic than older models based on classical statistics.
➢ Employee hiring could be optimized if the neural networks
were able to predict which job applicant would show the best
job performance.
➢ Expert consultants could package their intuitive expertise into a
neural network to automate their services.
➢ Fraud detection regarding credit cards, insurance or taxes could
be automated using a neural network analysis of past incidents.
➢ Handwriting and typewriting could be recognized by imposing a grid over the writing; each square of the grid then becomes an input to the neural network. This is called optical character recognition (OCR).
➢ Lake water levels could be predicted based upon precipitation
patterns and river/dam flows.
➢ Machinery control could be automated by capturing the actions of
experienced machine operators into a neural network.
➢ Medical diagnosis is an ideal application for neural networks.
➢ Medical research relies heavily on classical statistics to analyze
research data. Perhaps a neural network should be included in
the researcher's tool kit.
➢ Music composition has been tried using neural networks. The
network is trained to recognize patterns in the pitch and tempo of
certain music, and then the network writes its own music.
➢ Photos and fingerprints could be recognized by imposing a fine grid over the image. Each square of the grid becomes an input to the neural network.
➢ Voice recognition systems are another application of neural networks.

Architecture diagrams:

[Diagram: Single-layer feed-forward network — input nodes X1 … Xn fully connected to output nodes Y1 … Yn through weights w11 … wnn.]

[Diagram: Multi-layer feed-forward network — input layer X1 … Xn, hidden layers R1 … Rn and S1 … Sn, output layer Y1 … Yn.]

[Diagram: Multi-layer feed-forward network — inputs X1 … Xn, hidden units Z1 … Zn, and a single output unit Y.]

[Diagram: Single-layer recurrent network — input nodes X1 … Xn connected to output nodes Y1 … Yn, with feedback connections.]

[Diagram: Multi-layer recurrent network — input layer X1 … Xn, hidden layer R1 … Rn, output layer Y1 … Yn, with recurrent connections.]
