Artificial Neural Networks - 2

The document discusses the Perceptron model, which combines the McCulloch-Pitts neuron concept and Hebbian learning rule to create an artificial neuron that learns from errors. It outlines the learning process, including weight updating and the Perceptron Convergence Theorem, ensuring a solution is found in finite time if it exists. Additionally, it provides examples of training a 3-input perceptron and generating training epochs from given data.


Lecture 4.

• Perceptron Model

Dr. Mainak Biswas




Perceptron
• The Perceptron model is essentially a combination of the McCulloch-Pitts neuron (the basic structure of an artificial neuron with a threshold function) and the Hebbian learning rule
• “Neurons that fire together, wire together.” — Hebb’s Law
– Perceptrons are trained using a variant of this rule, one that considers the network’s errors and strengthens the connections that help minimize those errors
– Train the Perceptron one instance at a time
– For each instance, the Perceptron makes a prediction, and the weights are adjusted according to the error



Perceptron Rule Learning
∆wᵢ = c (t − z) xᵢ
where wᵢ is the weight from input i to the perceptron node, c is the learning rate, t is the target for the current instance, z is the current output, and xᵢ is the i-th input
• Least perturbation principle
– Only change the weights if there is an error
– Use a small learning rate c rather than changing the weights enough to make the current pattern correct in one step
• Create a perceptron node with n inputs
• Iteratively apply each pattern from the training set and apply the perceptron rule (a code sketch follows this list)
• Each pass through the training set is an epoch
• Continue training until the total training-set error ceases to improve
• Perceptron Convergence Theorem: the rule is guaranteed to find a solution in finite time if a solution exists
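Putting the rule and the stopping criterion together, here is a minimal training-loop sketch in Python (illustrative only, not from the slides; the function names, the θ = 0 threshold matching the worked example below, and the epoch cap are my assumptions):

```python
def predict(weights, x):
    # net = sum_i w_i * x_i; output 1 only if net exceeds the threshold 0
    net = sum(w * xi for w, xi in zip(weights, x))
    return 1 if net > 0 else 0

def perceptron_train(patterns, targets, c=1.0, max_epochs=100):
    # patterns are input vectors that already include the constant bias input
    weights = [0.0] * len(patterns[0])
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, t in zip(patterns, targets):      # one instance at a time
            z = predict(weights, x)
            if z != t:                           # least perturbation: update only on error
                errors += 1
                weights = [w + c * (t - z) * xi for w, xi in zip(weights, x)]
        if errors == 0:                          # an error-free epoch: a solution was found
            return weights, epoch
    return weights, max_epochs                   # no convergence within the epoch cap
```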
Perceptron Rule Summary
• Summation
g(x₁, x₂, …, xₙ) = g(x) = Σᵢ₌₁ⁿ wᵢ xᵢ
• Threshold
y = f(g(x)) = 1 if g(x) ≥ θ, 0 if g(x) < θ
• Weight updating
wᵢ = wᵢ + c (t − f(g(x))) xᵢ
Perceptron Rule Example
• Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
• Assume a learning rate c of 1 and initial weights all 0:

∆wᵢ = c (t − z) xᵢ

• Training set (X0 is the bias input, fixed at 1):

  X3  X2  X1  X0 (bias)  Y
  0   0   1   1          0
  1   1   1   1          1
  1   0   1   1          1
  0   1   1   1          0

• First pattern, X = (0 0 1 1) with W = (0 0 0 0): every product xᵢwᵢ is 0, so net = 0 and OUTPUT = 0

  Pattern    Target   Weight Vector   Net   Output   ∆W
  0 0 1 1    0        0 0 0 0         0     0        0 0 0 0
Example
• Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
• Assume a learning rate c of 1 and initial weights all 0: ∆wᵢ = c(t − z)xᵢ
• Training set (add the bias input X0 = 1 to every pattern):

  X3  X2  X1  Y            X3  X2  X1  X0  Y
  0   0   1   0    Add     0   0   1   1   0
  1   1   1   1    bias    1   1   1   1   1
  1   0   1   1     →      1   0   1   1   1
  0   1   1   0            0   1   1   1   0

  Pattern    Target   Weight Vector   Net   Output   ∆W
  0 0 1 1    0        0 0 0 0         0     0        0 0 0 0
  1 1 1 1    1        0 0 0 0         0     0        1 1 1 1

• Second pattern: X = (1 1 1 1), W = (0 0 0 0), so net = X·W = 0 and OUTPUT = 0; the ERROR is t − z = +1, giving ∆W = (1 1 1 1)
Example
• Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
• Assume a learning rate c of 1 and initial weights all 0: ∆wᵢ = c(t − z)xᵢ
• Training set

  X3  X2  X1  X0  Y
  0   0   1   1   0
  1   1   1   1   1
  1   0   1   1   1
  0   1   1   1   0

  Pattern    Target   Weight Vector   Net   Output   ∆W
  0 0 1 1    0        0 0 0 0         0     0        0 0 0 0
  1 1 1 1    1        0 0 0 0         0     0        1 1 1 1
  1 0 1 1    1        1 1 1 1         3     1        0 0 0 0
Training
• Assume a 3-input perceptron plus bias (it outputs 1 if net > 0, else 0)
• Assume a learning rate c of 1 and initial weights all 0: ∆wᵢ = c(t − z)xᵢ
• Training set

  X3  X2  X1  X0  Y
  0   0   1   1   0
  1   1   1   1   1
  1   0   1   1   1
  0   1   1   1   0

  Pattern    Target   Weight Vector   Net   Output   ∆W
  EPOCH 1
  0 0 1 1    0        0 0 0 0         0     0        0  0  0  0
  1 1 1 1    1        0 0 0 0         0     0        1  1  1  1
  1 0 1 1    1        1 1 1 1         3     1        0  0  0  0
  0 1 1 1    0        1 1 1 1         3     1        0 -1 -1 -1
  EPOCH 2
  0 0 1 1    0        1 0 0 0         0     0        0  0  0  0
  1 1 1 1    1        1 0 0 0         1     1        0  0  0  0
  1 0 1 1    1        1 0 0 0         1     1        0  0  0  0
  0 1 1 1    0        1 0 0 0         0     0        0  0  0  0

• Epoch 2 makes no errors, so training stops with the final weight vector W = (1 0 0 0)
• Check: for the pattern 0 1 1 1 with W = (1 0 0 0), net = 0, so the output is 0
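The two-epoch trace above can be reproduced mechanically. Below is a short Python sketch (my own illustration, not from the slides; it hard-codes the θ = 0 threshold and c = 1 used in this example, and the printed trace format is mine):

```python
# Worked example: 3 inputs plus bias X0 = 1, learning rate c = 1, weights start at 0.
patterns = [(0, 0, 1, 1), (1, 1, 1, 1), (1, 0, 1, 1), (0, 1, 1, 1)]
targets  = [0, 1, 1, 0]

weights = [0.0, 0.0, 0.0, 0.0]
for epoch in (1, 2):
    print(f"EPOCH {epoch}")
    for x, t in zip(patterns, targets):
        net = sum(w * xi for w, xi in zip(weights, x))
        z = 1 if net > 0 else 0                  # threshold at 0, as in the slides
        dw = [(t - z) * xi for xi in x]          # c = 1
        print(x, t, weights, net, z, dw)
        weights = [w + d for w, d in zip(weights, dw)]

print("final weights:", weights)                 # -> [1.0, 0.0, 0.0, 0.0]
# Output for the check pattern (0 1 1 1): net = 0, so output = 0
print(predict := 1 if sum(w * xi for w, xi in zip(weights, (0, 1, 1, 1))) > 0 else 0)
```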
Assignment
• Generate the training epochs using the perceptron model from the training data:

  0 1 1 0  ->  1
  1 0 1 1  ->  0
  1 0 0 1  ->  1
  0 1 0 0  ->  0
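A sketch for checking your hand-generated epochs (assumptions on my part: each row above is read as a full 4-component input vector, since the original formatting is ambiguous about a bias column, with θ = 0 and c = 1 as in the lecture example):

```python
patterns = [(0, 1, 1, 0), (1, 0, 1, 1), (1, 0, 0, 1), (0, 1, 0, 0)]
targets  = [1, 0, 1, 0]

weights, c = [0.0] * 4, 1.0
for epoch in range(1, 11):                  # cap the run: the perceptron converges
    print(f"EPOCH {epoch}")                 # only if the data are linearly separable
    errors = 0
    for x, t in zip(patterns, targets):
        net = sum(w * xi for w, xi in zip(weights, x))
        z = 1 if net > 0 else 0
        dw = [c * (t - z) * xi for xi in x]
        print(x, t, weights, net, z, dw)
        weights = [w + d for w, d in zip(weights, dw)]
        errors += int(z != t)
    if errors == 0:                          # stop after an error-free epoch
        break
```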
