M. Raihan
Email: [email protected]
Neural Networks
We can put lots of McCulloch & Pitts neurons together and connect them up in any way we like.
In fact, assemblies of such neurons are capable of universal computation:
they can perform any computation that a normal computer can.
We just have to solve for all the weights $w_{ij}$.
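For instance, a single threshold neuron already computes simple logic gates. A minimal sketch in Python; the weights and threshold shown are illustrative choices, not from the slides:

```python
# A McCulloch & Pitts threshold neuron: output 1 if the weighted sum of
# the binary inputs reaches the threshold, else 0. Weights and threshold
# below are illustrative.

def mp_neuron(inputs, weights, threshold):
    h = sum(x * w for x, w in zip(inputs, weights))
    return 1 if h >= threshold else 0

# With weights (1, 1) and threshold 2 this neuron computes Boolean AND;
# lowering the threshold to 1 turns it into Boolean OR.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), (1, 1), 2))
```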
Neural Networks
The Perceptron Network
Training Neurons
Adapting the weights is learning.
How does the network know it is right?
How do we adapt the weights to make the network right more often?
We need two things: a training set with target outputs, and a learning rule.
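For concreteness, a minimal sketch of what such a training set looks like, using the Boolean OR function (an illustrative choice, not from the slides):

```python
# A training set pairs each input pattern with the output the network
# should produce. These targets encode Boolean OR (illustrative choice).
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]   # desired output for each input pattern
```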
A Simple Perceptron
It's a single-unit network.
Change each weight by an amount proportional to the difference between the desired output and the actual output.
Updating the Weights
$w_{ij} \leftarrow w_{ij} + \Delta w_{ij}$
We want to change the values of the weights.
Aim: minimize the error at the output.
If $E = t - y$, we want $E$ to be 0.
Use: $\Delta w_{ij} = \eta\,(t_j - y_j)\,x_i$
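A minimal sketch of one application of this rule in Python; the learning rate, weights, input, target, and threshold values are all illustrative:

```python
# One application of the rule delta_w_i = eta * (t - y) * x_i for a
# single neuron. All the concrete numbers here are illustrative.

eta = 0.25                 # learning rate (see next slide)
theta = 0.0                # firing threshold
x = [1, 0]                 # input pattern
w = [-0.2, 0.4]            # current weights
t = 1                      # desired (target) output

h = sum(xi * wi for xi, wi in zip(x, w))
y = 1 if h >= theta else 0                             # actual output: here 0
w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]  # w <- w + eta*(t-y)*x
print(w)                   # [-0.2 + 0.25, 0.4] -> [0.05, 0.4]
```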
The Learning Rate
η controls how much to change the weights by.
We could leave it out altogether, i.e., set η = 1:
The weights then change a lot whenever the answer is wrong.
This makes the network unstable.
Small η:
The weights need to see the inputs more often before they change significantly.
The network takes longer to learn.
But the network is more stable.
Bias Input
What happens when all the inputs to a neuron are zero?
The weighted sum $h = \sum_{i=1}^{m} x_i w_i$ is then zero, so it doesn't matter what the weights are.
The only way we can control whether the neuron fires or not is through the threshold.
That's why the threshold should be adjustable.
Changing the threshold requires an extra parameter that we need to write code for.
Instead, we can add an extra input weight to the neuron, with the value of the input to that weight always held fixed.
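A minimal sketch of this trick; fixing the extra input at -1 is a common convention (the slides only say the input is held fixed), and the concrete weights are illustrative:

```python
# Append a fixed bias input of -1 to every pattern. The old threshold
# becomes an ordinary learnable weight (the last entry of w), and the
# neuron now fires when the weighted sum reaches zero.

def fire(x, w):
    x = list(x) + [-1]          # augment the pattern with the bias input
    h = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if h >= 0 else 0

# w has one extra entry: the last weight plays the role of the threshold.
print(fire((1, 1), [0.5, 0.5, 0.8]))   # h = 0.5 + 0.5 - 0.8 >= 0 -> fires
```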
Biases Replace Thresholds
Training a Perceptron
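Putting the pieces together, a minimal sketch of the full training procedure, combining the update rule and the bias input from the previous slides. The dataset (Boolean OR, as before), learning rate, initialization, and epoch count are all illustrative choices:

```python
import random

def train_perceptron(inputs, targets, eta=0.25, epochs=20):
    """Train a single perceptron with the rule w <- w + eta*(t - y)*x,
    augmenting every input with a fixed bias input of -1 as above."""
    random.seed(0)                                   # reproducible start
    n = len(inputs[0]) + 1                           # +1 for the bias weight
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            x = list(x) + [-1]
            y = 1 if sum(xi * wi for xi, wi in zip(x, w)) >= 0 else 0
            w = [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]
    return w

# Learn Boolean OR from the training set defined earlier.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]
w = train_perceptron(inputs, targets)
print([1 if sum(xi * wi for xi, wi in zip(list(x) + [-1], w)) >= 0 else 0
       for x in inputs])   # should match targets: [0, 1, 1, 1]
```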
Linear Separability
More Than One Input Neuron
The weights for each neuron separately describe a straight line.
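Concretely, a neuron with two inputs fires when $x_1 w_1 + x_2 w_2 \ge \theta$, so the boundary between firing and not firing is the straight line $x_2 = -(w_1/w_2)\,x_1 + \theta/w_2$ (for $w_2 \neq 0$).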
Perceptron Limitations
A single-layer perceptron can only learn linearly separable problems.
The Boolean AND function is linearly separable, whereas the Boolean XOR function (and the parity problem in general) is not.
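A quick way to see this, continuing the train_perceptron sketch above (the epoch count is illustrative): the perceptron learns AND but never fits XOR, no matter how long it trains.

```python
# Reuses train_perceptron from the training-loop sketch above.
def predict(x, w):
    x = list(x) + [-1]
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) >= 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for name, targets in [("AND", [0, 0, 0, 1]), ("XOR", [0, 1, 1, 0])]:
    w = train_perceptron(inputs, targets, epochs=100)
    print(name, [predict(x, w) for x in inputs], "targets:", targets)
# AND matches its targets; XOR never does, since no line separates its classes.
```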
Linear Separability
What Can Perceptrons Represent?
Thank You