
Neural Networks

Lecture (1)

Dr. Mona Nagy ElBedwehy


Information Flow in Nervous System
➢ Our brain uses an extremely large interconnected network of neurons for information processing and to model the world around us. Simply put, a neuron collects inputs from other neurons using dendrites. The neuron sums all the inputs, and if the resulting value is greater than a threshold, it fires. The fired signal is then sent to other connected neurons through the axon.



Biological Networks
1. The majority of neurons encode their outputs or activations as
a series of brief electrical pulses.
2. Dendrites are the receptive zones that receive activation from
other neurons.
3. The cell body (soma) of the neuron processes the incoming
activations and converts them into output activations.



Biological Networks
4. Axons are transmission lines that send activation to other
neurons.
5. Synapses allow weighted transmission of signals (using
neurotransmitters) between axons and dendrites to build up
large neural networks.



Biological vs Artificial Neural Networks



What is Artificial Neural Network?
➢ Artificial Neural Networks (ANNs) have the same basic components as biological neurons.
➢ ANNs are constructed and implemented to model the human brain.
➢ ANNs possess a large number of processing elements, called nodes or neurons, which operate in parallel.
➢ A neuron is the basic processing unit in a neural network.
➢ Neurons are connected to one another by connection links.
➢ Each link is associated with a weight, which carries information about the input signal.



What is Artificial Neural Network?
➢ Example: you are trying to decide whether to go to a concert.

Is the artist good? Is the weather good?

What weight should each of these factors carry?



What is Artificial Neural Network?
➢ An artificial neuron is a mathematical function based on a model of biological neurons: each neuron takes inputs, weighs them separately, sums them up, and passes this sum through a nonlinear function to produce an output.

➢ A neuron is a node that processes all fan-in from other nodes and generates an output according to a transfer function called the activation function.
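As a concrete illustration, here is a minimal Python sketch of such a neuron. It is not from the lecture: the sigmoid is just one possible choice of nonlinear function, and the weights in the example are invented.

    import math

    def artificial_neuron(inputs, weights, bias=0.0):
        # Weigh each input separately, sum them up, then pass the sum
        # through a nonlinear function (here: a sigmoid).
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-s))  # squashes the sum into (0, 1)

    # Hypothetical concert decision from the earlier slide: inputs are
    # "artist is good" and "weather is good", with hand-picked weights.
    print(artificial_neuron([1, 0], [0.9, 0.4]))  # ~0.71: leaning toward going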



What is a Neural Network?
➢ Imagine that you want to learn how to translate text from English into Spanish. You learn a language by memorizing words and phrases, their meanings, and the contexts in which they are used. Based on this experience, you can translate new texts that you have never seen before.
➢ Another case is the classification of cats and dogs: just as a person learns to distinguish them from examples seen in life, an ANN can learn to distinguish them from such examples.



What is a Neural Network?

➢ The ANN does something similar.
➢ It learns from examples, which can be texts, images, sounds, or any data that we want it to process.
➢ Just as a person learns a language, the ANN tries to identify patterns in this data.



What is a Neural Network?
➢ An ANN uses these patterns to perform tasks such as classification (determining which category an object belongs to) or regression (predicting a numerical value, such as the price of a house).
➢ This process of training a neural network using examples is called supervised learning.



The McCulloch–Pitts Neuron Model
➢ Researchers Warren McCulloch and Walter Pitts published their first concept of a simplified brain cell in 1943.
➢ This was called the McCulloch-Pitts (MCP) neuron.
➢ They described such a nerve cell as a simple logic gate with binary outputs.
➢ Multiple signals arrive at the dendrites and are integrated in the cell body; if the accumulated signal exceeds a certain threshold, an output signal is generated and passed on by the axon.



The McCulloch–Pitts Neuron Model

[Figure: schematic of the McCulloch–Pitts neuron]
The McCulloch–Pitts Neuron Model
• The MCP neuron is the first mathematical model of the biological neuron.
• The neuron has two possible states: 1 (active) or 0 (not active).
• Weights may be positive or negative.
• The threshold plays a major role in the MCP neuron: if the weighted sum of the inputs is greater than or equal to the threshold, the neuron fires.
• It is mostly used to implement logic functions.
• The activation function is binary (the neuron either fires or does not fire).



The McCulloch–Pitts Neuron Model
➢ Activation function for the MCP neuron:

f(y_in) = 0 if y_in < θ
          1 if y_in ≥ θ

where θ ≥ n·w − p (n is the number of inputs, w the excitatory weight, and p the inhibitory weight).
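A minimal Python sketch of this rule (an illustration, not part of the original slides):

    def mcp_neuron(inputs, weights, theta):
        # McCulloch-Pitts neuron: fire (1) iff the weighted sum reaches theta.
        y_in = sum(x * w for x, w in zip(inputs, weights))
        return 1 if y_in >= theta else 0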



The McCulloch–Pitts Neuron Model
Example (1) Implement the AND logic function using a McCulloch-Pitts neuron.



The McCulloch–Pitts Neuron Model
➢ Consider the truth table for the AND logic function:

x1   x2  |  y
1    1   |  1
1    0   |  0
0    1   |  0
0    0   |  0

➢ The MCP neuron has no particular training algorithm; only analysis is performed.
➢ Hence, assume the weights w1 = w2 = 1.



The McCulloch–Pitts Neuron Model
(1, 1): y_in = x1w1 + x2w2 = 1×1 + 1×1 = 2
(1, 0): y_in = x1w1 + x2w2 = 1×1 + 0×1 = 1
(0, 1): y_in = x1w1 + x2w2 = 0×1 + 1×1 = 1
(0, 0): y_in = x1w1 + x2w2 = 0×1 + 0×1 = 0

➢ The threshold value is set equal to 2.
➢ This can also be obtained from θ ≥ n·w − p: here n = 2, w = 1 and p = 0 (no inhibitory weights), so θ ≥ 2×1 − 0 = 2.



The McCulloch–Pitts Neuron Model
➢ Thus, the output of the neuron can be written as:

y = f(y_in) = 0 if y_in < 2
              1 if y_in ≥ 2
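Reusing the mcp_neuron sketch from above, this analysis can be checked directly (weights w1 = w2 = 1, θ = 2):

    for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
        print((x1, x2), "->", mcp_neuron([x1, x2], [1, 1], theta=2))
    # Only (1, 1) reaches y_in = 2, so only it produces output 1,
    # matching the AND truth table.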



The McCulloch–Pitts Neuron Model
Example (2) Implement the XOR logic function using McCulloch-Pitts neurons.



The McCulloch–Pitts Neuron Model
➢ Consider the truth table for the XOR logic function:

x1   x2  |  y
1    1   |  0
1    0   |  1
0    1   |  1
0    0   |  0

➢ The XOR function cannot be represented by a single simple logic function; it is represented as
y = x1·x̄2 + x̄1·x2    (x̄ denotes NOT x)
Suppose z1 = x1·x̄2 and z2 = x̄1·x2. Then
y = z1 + z2 = z1 OR z2
➢ A single layer is not sufficient to represent the XOR function, so an intermediate layer is necessary.



The McCulloch–Pitts Neuron Model
First function: z1 = x1·x̄2, with the truth table:

x1   x2  |  z1
1    1   |  0
1    0   |  1
0    1   |  0
0    0   |  0

Assume the weights w11 = w21 = 1.
(1, 1): y_in = x1w11 + x2w21 = 1×1 + 1×1 = 2
(1, 0): y_in = x1w11 + x2w21 = 1×1 + 0×1 = 1
(0, 1): y_in = x1w11 + x2w21 = 0×1 + 1×1 = 1
(0, 0): y_in = x1w11 + x2w21 = 0×1 + 0×1 = 0

The inputs (1, 0) and (0, 1) give the same net input but different targets, so no threshold works: it is not possible to obtain function z1 using these weights.



The McCulloch–Pitts Neuron Model
Now assume the weights w11 = 1, w21 = −1.
(1, 1): y_in = x1w11 + x2w21 = 1×1 + 1×(−1) = 0
(1, 0): y_in = x1w11 + x2w21 = 1×1 + 0×(−1) = 1
(0, 1): y_in = x1w11 + x2w21 = 0×1 + 1×(−1) = −1
(0, 0): y_in = x1w11 + x2w21 = 0×1 + 0×(−1) = 0

➢ The threshold value is set equal to 1.
➢ This can also be obtained from θ ≥ n·w − p: here n = 2, w = 1 and p = 1, so θ ≥ 2×1 − 1 = 1.



The McCulloch–Pitts Neuron Model
Second function: z2 = x̄1·x2, with the truth table:

x1   x2  |  z2
1    1   |  0
1    0   |  0
0    1   |  1
0    0   |  0

Assume the weights w12 = w22 = 1.
(1, 1): y_in = x1w12 + x2w22 = 1×1 + 1×1 = 2
(1, 0): y_in = x1w12 + x2w22 = 1×1 + 0×1 = 1
(0, 1): y_in = x1w12 + x2w22 = 0×1 + 1×1 = 1
(0, 0): y_in = x1w12 + x2w22 = 0×1 + 0×1 = 0

Again (1, 0) and (0, 1) give the same net input but different targets, so it is not possible to obtain function z2 using these weights.



The McCulloch–Pitts Neuron Model
Now assume the weights w12 = −1, w22 = 1.
(1, 1): y_in = x1w12 + x2w22 = 1×(−1) + 1×1 = 0
(1, 0): y_in = x1w12 + x2w22 = 1×(−1) + 0×1 = −1
(0, 1): y_in = x1w12 + x2w22 = 0×(−1) + 1×1 = 1
(0, 0): y_in = x1w12 + x2w22 = 0×(−1) + 0×1 = 0

➢ The threshold value is set equal to 1.
➢ This can also be obtained from θ ≥ n·w − p: here n = 2, w = 1 and p = 1, so θ ≥ 2×1 − 1 = 1.



The McCulloch–Pitts Neuron Model
➢ Third function: y = z1 OR z2, with y_in = z1w1 + z2w2.
➢ Assume the weights w1 = w2 = 1.

x1   x2  |  z1   z2  |  y
1    1   |  0    0   |  0
1    0   |  1    0   |  1
0    1   |  0    1   |  1
0    0   |  0    0   |  0

(1, 1): y_in = z1w1 + z2w2 = 0×1 + 0×1 = 0
(1, 0): y_in = z1w1 + z2w2 = 1×1 + 0×1 = 1
(0, 1): y_in = z1w1 + z2w2 = 0×1 + 1×1 = 1
(0, 0): y_in = z1w1 + z2w2 = 0×1 + 0×1 = 0

y = f(y_in) = 0 if y_in < 1
              1 if y_in ≥ 1
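Putting the three units together gives a two-layer MCP network for XOR. The following sketch (again reusing the mcp_neuron function defined earlier) reproduces the analysis above:

    def xor_net(x1, x2):
        z1 = mcp_neuron([x1, x2], [1, -1], theta=1)   # z1 = x1 AND NOT x2
        z2 = mcp_neuron([x1, x2], [-1, 1], theta=1)   # z2 = NOT x1 AND x2
        return mcp_neuron([z1, z2], [1, 1], theta=1)  # y  = z1 OR z2

    for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
        print((x1, x2), "->", xor_net(x1, x2))  # prints 0, 1, 1, 0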
The McCulloch–Pitts Neuron Model

[Figure: the resulting two-layer McCulloch–Pitts network for XOR]


Learning Rule
➢ An artificial neural network's learning rule (or learning process) is a method, mathematical logic, or algorithm that improves the network's performance and/or training time.
➢ Usually, this rule is applied repeatedly over the network.
➢ Learning is done by updating the weights and bias levels of a network while the network is simulated in a specific data environment.
➢ A learning rule may take the existing conditions (weights and biases) of the network and compare the expected result with the actual result to produce new, improved values for the weights and bias.



Hebbian Learning Rule
➢ The Hebbian Learning Rule, also known as the Hebb Learning Rule, was proposed by Donald O. Hebb in 1949.
➢ It is one of the first and simplest learning rules for neural networks.
➢ It is used for pattern classification.
➢ The Hebb net is a single-layer neural network: it has one input layer and one output layer.
➢ The input layer can have many units, say n; the output layer has only one unit.
➢ The Hebbian rule works by updating the weights between neurons in the network for each training sample.
Hebbian Learning Rule
Hebbian Learning Rule Algorithm
1. Set all weights to zero, wi = 0 for i = 1 to n, and set the bias to zero.
2. For each training pair (s, t), repeat steps 3-5.
3. Set the activations of the input units to the input vector: xi = si for i = 1 to n.
4. Set the output neuron to the desired output value: y = t.
5. Update the weights and bias by applying the Hebb rule for i = 1 to n:
wi(new) = wi(old) + xi·y
b(new) = b(old) + y
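The algorithm translates almost line for line into Python. This is a minimal sketch (the name hebb_train is mine, not the lecture's); it is reused in the examples below:

    def hebb_train(samples, n):
        # Step 1: all weights and the bias start at zero.
        w = [0] * n
        b = 0
        for x, t in samples:      # Steps 2-4: present each pair (x, t); y = t.
            y = t
            for i in range(n):    # Step 5: Hebb updates.
                w[i] += x[i] * y
            b += y
        return w, b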



Hebbian Learning Example
Example (3) Design a Hebb net to implement the logical AND function:
A. Using bipolar inputs and output.
B. Using binary inputs and output.



Hebbian Learning Example
Answer (A)
✓ The training data for the AND function (bipolar):

x1    x2    bias  |  y
1     1     1     |  1
1     −1    1     |  −1
−1    1     1     |  −1
−1    −1    1     |  −1

✓ Set all weights to zero, wi = 0, and the bias to zero.
✓ Apply the Hebb rule for the first input:
w1(new) = w1(old) + x1y = 0 + 1×1 = 1
w2(new) = w2(old) + x2y = 0 + 1×1 = 1
b(new) = b(old) + y = 0 + 1 = 1



Hebbian Learning Example
✓ Apply the Hebb rule for the second input:
w1(new) = w1(old) + x1y = 1 + 1×(−1) = 0
w2(new) = w2(old) + x2y = 1 + (−1)×(−1) = 2
b(new) = b(old) + y = 1 + (−1) = 0

✓ Apply the Hebb rule for the third input:
w1(new) = w1(old) + x1y = 0 + (−1)×(−1) = 1
w2(new) = w2(old) + x2y = 2 + 1×(−1) = 1
b(new) = b(old) + y = 0 + (−1) = −1



Hebbian Learning Example
✓ Apply the Hebb rule for the fourth input:
w1(new) = w1(old) + x1y = 1 + (−1)×(−1) = 2
w2(new) = w2(old) + x2y = 1 + (−1)×(−1) = 2
b(new) = b(old) + y = −1 + (−1) = −2

✓ So, the final weights and bias are (w1, w2, b) = (2, 2, −2).
✓ Testing the network:
For x1 = −1, x2 = −1: Y = (−1)(2) + (−1)(2) + (1)(−2) = −6
For x1 = −1, x2 = 1:  Y = (−1)(2) + (1)(2) + (1)(−2) = −2



Hebbian Learning Example
For x1 = 1, x2 = −1:  Y = (1)(2) + (−1)(2) + (1)(−2) = −2
For x1 = 1, x2 = 1:   Y = (1)(2) + (1)(2) + (1)(−2) = 2

✓ The sign of Y matches the target in every case, so the results are all compatible with the original table.

✓ Decision Boundary
The boundary is w1x1 + w2x2 + b = 0. With w1 = w2 = 2 and b = −2:
2x1 + 2x2 − 2 = 0
x1 + x2 − 1 = 0
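Using the hebb_train sketch from the algorithm section, the whole of Answer (A) can be reproduced and tested in a few lines:

    samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
    w, b = hebb_train(samples, n=2)
    print(w, b)  # [2, 2] -2: the final weights and bias from the slides

    for x, t in samples:
        y_in = w[0] * x[0] + w[1] * x[1] + b
        print(x, y_in, t)  # the sign of y_in matches the bipolar target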



Hebbian Learning Example

[Figure: plot of the decision boundary x1 + x2 − 1 = 0 for the bipolar AND net]


Hebbian Learning Example
Answer (B)
✓ The Hebb rule applied (unsuccessfully) to AND using binary activations:

Inputs           Target   Weight Changes     Weights
x1   x2   bias   y        Δw1  Δw2  Δb       w1   w2   b
1    1    1      1        1    1    1        1    1    1
1    0    1      0        0    0    0        1    1    1
0    1    1      0        0    0    0        1    1    1
0    0    1      0        0    0    0        1    1    1

✓ Clearly, if the target is 0, no learning occurs (Δwi = xi·y = 0), so the weights never move beyond those learned from the first row.
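The same hebb_train sketch shows the failure with binary data: whenever the target y is 0, every update xi·y is 0, so nothing changes after the first sample.

    binary_samples = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
    w, b = hebb_train(binary_samples, n=2)
    print(w, b)  # [1, 1] 1: only the first sample ever updates the weights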


Hebbian Learning Example
Example (4) Apply a Hebb net to classify the letters I and O. Each pattern is either the letter 'I' or the letter 'O', represented by a 3×3 pixel matrix.



Hebbian Learning Example

[Figure: 3×3 pixel patterns for the letters I and O]


Hebbian Learning Example
Answer
✓ Set all weights to zero, wi = 0, and the bias to zero.
✓ The bipolar training data (1 = pixel on, −1 = pixel off):

Pattern  x1  x2  x3  x4  x5  x6  x7  x8  x9  b  |  y
I        1   1   1   −1  1   −1  1   1   1   1  |  1
O        1   1   1   1   −1  1   1   1   1   1  |  −1



Hebbian Learning Example
The weight changes are Δwi = xi·y:

Pattern  Δw1  Δw2  Δw3  Δw4  Δw5  Δw6  Δw7  Δw8  Δw9  Δb
I        1    1    1    −1   1    −1   1    1    1    1
O        −1   −1   −1   −1   1    −1   −1   −1   −1   −1



Hebbian Learning Example
Accumulating wi(new) = wi(old) + Δwi after each pattern gives:

After    w1  w2  w3  w4  w5  w6  w7  w8  w9  b
I        1   1   1   −1  1   −1  1   1   1   1
O        0   0   0   −2  2   −2  0   0   0   0
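The same hebb_train sketch reproduces this table from the two bipolar pixel vectors (flattened row by row from the 3×3 grids):

    I = [1, 1, 1, -1, 1, -1, 1, 1, 1]   # letter 'I', target +1
    O = [1, 1, 1, 1, -1, 1, 1, 1, 1]    # letter 'O', target -1

    w, b = hebb_train([(I, 1), (O, -1)], n=9)
    print(w, b)  # [0, 0, 0, -2, 2, -2, 0, 0, 0] 0
    # Only the pixels where the two letters differ (x4, x5, x6) keep
    # nonzero weights; the shared pixels cancel out.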



Hebbian Learning Example
Example (5) Apply a Hebb net to classify the letters M and L. Each pattern is either the letter 'M' or the letter 'L', represented by a 5×5 pixel matrix.



Hebbian Learning Example

[Figure: 5×5 pixel patterns for the letters M and L]


Hebbian Learning Example
Answer
✓ Set all weights to zero, wi = 0, and the bias to zero.
✓ The bipolar training vectors x1 … x25 (read row by row from the 5×5 grids):

M: 1 −1 −1 −1 1   1 1 −1 1 1   1 −1 1 −1 1   1 −1 −1 −1 1   1 −1 −1 −1 1
L: 1 −1 −1 −1 −1   1 −1 −1 −1 −1   1 −1 −1 −1 −1   1 −1 −1 −1 −1   1 1 1 1 1



Hebbian Learning Example
The weight changes are Δwi = xi·y, and b(new) = b(old) + y.

After presenting M (y = 1), Δw = M itself, so:
w: 1 −1 −1 −1 1   1 1 −1 1 1   1 −1 1 −1 1   1 −1 −1 −1 1   1 −1 −1 −1 1,   b = 1

After presenting L (y = −1), Δw = −L, so:
w: 0 0 0 0 2   0 2 0 2 2   0 0 2 0 2   0 0 0 0 2   0 −2 −2 −2 0,   b = 0



Hebbian Learning
Hebbian learning can be summarized by the following observations:
▪ If two neighboring neurons operate in the same phase at the same time, the weight between them should increase.
▪ For neurons operating in opposite phases, the weight between them should decrease.
▪ If there is no signal correlation, the weight does not change.
▪ The sign of the weight between two nodes depends on the signs of the activations of those nodes: when both are positive or both are negative, the result is a strong positive weight.



Hebbian Learning
▪ If the activation of one node is positive and that of the other is negative, a strong negative weight results.

