Introduction To Artificial Neural Networks
Neural Networks
Perceptron Learning Algorithm
• First neural network learning model, developed in the 1960s
• Simple and limited (single-layer models)
• Basic concepts are similar for multi-layer models, so this is a good learning tool
• Still used in current applications (modems, etc.)
CS 472 - Perceptron 23
Perceptron Node – Threshold Logic Unit
[Diagram: inputs x1 … xn with weights w1 … wn feed a single threshold node θ, which outputs z]

$$z = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \geq \theta \\ 0 & \text{if } \sum_{i=1}^{n} x_i w_i < \theta \end{cases}$$
CS 472 - Perceptron 24
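The threshold logic unit above can be sketched in a few lines of Python. This is a minimal illustration of the slide's definition, not part of the original lecture:

```python
def tlu(x, w, theta):
    """Threshold logic unit: output 1 if the weighted sum of the
    inputs x under weights w reaches the threshold theta, else 0."""
    net = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if net >= theta else 0
```

For example, with weights [.4, −.2] and θ = .1, the input [.8, .3] gives a net of .26, which meets the threshold, so z = 1.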
Perceptron Node – Threshold Logic Unit
[Diagram: inputs x1 … xn with weights w1 … wn feed a single threshold node θ, which outputs z]

$$z = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \geq \theta \\ 0 & \text{if } \sum_{i=1}^{n} x_i w_i < \theta \end{cases}$$

• Learn weights such that an objective function is maximized.
• What objective function should we use?
• What learning algorithm should we use?
Perceptron Learning Algorithm
[Diagram: perceptron with weights w1 = .4, w2 = −.2, threshold θ = .1, and output z]

Training set:

x1   x2   t
.8   .3   1
.4   .1   0

$$z = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \geq \theta \\ 0 & \text{if } \sum_{i=1}^{n} x_i w_i < \theta \end{cases}$$
First Training Instance
[Diagram: first training instance x1 = .8, x2 = .3 applied to weights .4 and −.2; output z = 1]

x1   x2   t
.8   .3   1
.4   .1   0

$$z = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \geq \theta \\ 0 & \text{if } \sum_{i=1}^{n} x_i w_i < \theta \end{cases}$$

Net = .8(.4) + .3(−.2) = .26 ≥ θ = .1, so z = 1. Since t = 1, the output is correct and no weight change is needed.
Second Training Instance
[Diagram: second training instance x1 = .4, x2 = .1 applied to weights .4 and −.2; output z = 1]

x1   x2   t
.8   .3   1
.4   .1   0

$$z = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} x_i w_i \geq \theta \\ 0 & \text{if } \sum_{i=1}^{n} x_i w_i < \theta \end{cases}$$

$$\Delta w_i = (t - z) \cdot c \cdot x_i$$

Net = .4(.4) + .1(−.2) = .14 ≥ θ = .1, so z = 1. Since t = 0, the output is wrong and the weights are updated by Δwᵢ = (t − z)·c·xᵢ.
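The two training steps above can be sketched as follows. This is an illustrative reading of the slides, assuming the initial weights [.4, −.2], threshold θ = .1, and a learning rate c = 1 (the slides do not state c explicitly):

```python
def tlu(x, w, theta):
    """Threshold logic unit: 1 if the weighted sum reaches theta, else 0."""
    net = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if net >= theta else 0

def perceptron_update(x, t, w, theta, c=1.0):
    """Apply the perceptron rule delta_w_i = (t - z) * c * x_i."""
    z = tlu(x, w, theta)
    return [wi + (t - z) * c * xi for wi, xi in zip(w, x)]

w, theta = [0.4, -0.2], 0.1
w = perceptron_update([0.8, 0.3], 1, w, theta)  # z = 1 = t: no change
w = perceptron_update([0.4, 0.1], 0, w, theta)  # z = 1, t = 0: weights corrected
```

After the second instance the weights move to approximately [0.0, −0.3], reducing the chance of firing on that input.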
Three-layer networks
[Diagram: three-layer network — inputs x1 … xn feed one or more hidden layers, which feed the output layer]
Properties of architecture
Each neuron i computes

$$y_i = f\left( \sum_{j=1}^{m} w_{ij} \, x_j + b_i \right)$$

where f is the activation function, w_{ij} the weight from input j to neuron i, and b_i the bias.
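The per-neuron computation yᵢ = f(Σⱼ wᵢⱼ xⱼ + bᵢ) can be sketched for a whole layer in plain Python; the sigmoid activation here is a common choice, not something the slide fixes:

```python
import math

def layer_forward(x, W, b):
    """Compute y_i = f(sum_j W[i][j] * x[j] + b[i]) for every neuron i,
    using a sigmoid activation f(net) = 1 / (1 + e^(-net))."""
    def f(net):
        return 1.0 / (1.0 + math.exp(-net))
    return [f(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]
```

With all-zero weights and biases every net input is 0, so each output is f(0) = 0.5.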
Multi-layered neural network
• Input layer: the ANN's first layer. Its primary role is to receive the input data.
• Hidden layer: processes the input data; it sits between the input and output layers. The complexity of the problem determines how many hidden layers are used and how many neurons each contains. Every neuron in a hidden layer receives input from the previous layer, transforms it nonlinearly, and passes the result to the next layer.
• Output layer: the ANN's last layer; it generates the output from the input data. The specific kind of problem being addressed determines how many neurons the output layer uses.
Multi-layered neural network
• Forward propagation
• begins once the ANN receives the input data.
• The input data is multiplied by the weights of the neurons in the first hidden layer, and the result is passed through that layer's activation function.
• The process repeats toward the output layer, with the output of each hidden layer used as the input to the next hidden layer.
Multi-layered neural network
• Backpropagation
• The network's output is compared with the desired output.
• The difference between the two gives the error.
• Backpropagation then propagates this error back through the network.
• During backpropagation, the weights of the connections among the neurons are adjusted in order to reduce the error.
• Optimizers such as stochastic gradient descent, mini-batch gradient descent, and adaptive learning-rate methods use the backpropagated error to update the weights and to improve the network's convergence speed and accuracy.