Neural Networks
SHAILESH S
Perceptron
➢First neural network learning model, from the 1960s
➢Basic concepts are similar for multi-layer models so this is a good learning tool
Decision boundary: w1·x1 + w2·x2 − 2.5 = 0

w1  w2  (x1, x2)  F
1   1   (0, 1)    −1.5
1   1   (1, 1)    −0.5
2   2   (0, 0)    −2.5
2   2   (1, 0)    −0.5

With w1 = w2 = 1 the boundary is x1 + x2 − 2.5 = 0, and F < 0 even at (1, 1), so the AND gate is misclassified. After updating to w1 = w2 = 2 the boundary becomes 2·x1 + 2·x2 − 2.5 = 0: the three 0-target inputs still give F < 0, while (1, 1) gives F = +1.5, so AND is realized.
Learning AND gate
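A minimal sketch of how the perceptron learning rule could produce the weight change shown above (the ±1 targets, the fixed threshold of 2.5, and the learning rate of 0.5 are assumptions chosen to reproduce the slide's numbers):

```python
# Perceptron learning rule on the AND gate, reproducing the slide's example.
# Assumptions: targets in {-1, +1}, fixed threshold 2.5, learning rate 0.5.
import numpy as np

w = np.array([1.0, 1.0])   # initial weights (w1, w2)
theta = 2.5                # threshold (kept fixed in this example)
eta = 0.5                  # learning rate

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([-1, -1, -1, 1])  # AND targets

for _ in range(10):  # a few passes over the data
    for x, target in zip(X, t):
        o = 1 if w @ x - theta >= 0 else -1  # thresholded output
        w = w + eta * (target - o) * x       # update only on mistakes

print(w)  # -> [2. 2.], i.e. the boundary 2*x1 + 2*x2 - 2.5 = 0
```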
Implementing AND gate
Tools
• Python
• Sklearn
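A minimal sklearn sketch of the implementation (the hyperparameters here are one reasonable choice, not prescribed by the slides):

```python
# Train scikit-learn's Perceptron on the AND truth table.
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND gate outputs

clf = Perceptron(max_iter=100, tol=None, random_state=0)
clf.fit(X, y)

print(clf.predict(X))              # expected: [0 0 0 1]
print(clf.coef_, clf.intercept_)   # learned weights and bias
```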
Try It For … ?
Multi-Layer Perceptron (MLP)
Solution to the XOR problem
[Figure: hidden unit O1 computing step(−3·x − 2·y + 4); shading indicates the region where O1 = 1 (+ve region)]

x  y  O1
0  0  1
0  1  1
1  0  1
1  1  0
Solution to the XOR problem
[Figure: hidden unit O2 computing step(1·x + 1·y − 0.5); shading indicates the region where O2 = 1 (+ve region)]

x  y  O2
0  0  0
0  1  1
1  0  1
1  1  1
Solution to the XOR problem: the complete network
[Figure: output z = step(1·O1 + 1·O2 − 1.5), with hidden units O1 = step(−3·x − 2·y + 4) and O2 = step(1·x + 1·y − 0.5)]

O1  O2  z
0   0   0
0   1   0
1   0   0
1   1   1
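A small sketch verifying the hand-built network above (weights and thresholds as read off the reconstructed figure):

```python
# Check that z = AND(O1, O2), with O1 = NAND(x, y) and O2 = OR(x, y),
# computes XOR on all four inputs.
import numpy as np

def step(s):
    return (s >= 0).astype(int)  # threshold activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
x1, x2 = X[:, 0], X[:, 1]

o1 = step(-3 * x1 - 2 * x2 + 4)  # hidden unit O1 (NAND)
o2 = step(x1 + x2 - 0.5)         # hidden unit O2 (OR)
z = step(o1 + o2 - 1.5)          # output unit (AND of O1, O2)

print(z)  # -> [0 1 1 0], the XOR truth table
```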
Three layer networks
▪ No connections within a layer
▪ No direct connections between input and output layers
▪ Fully connected between layers
▪ Often more than 3 layers
▪ Number of output units need not equal number of input units
▪ Number of hidden units per layer can be more or less than input or output units
What does each of these layers do?
▪ 1st layer draws linear boundaries
▪ 2nd layer combines the boundaries
▪ 3rd layer can generate arbitrarily complex boundaries
Backpropagation learning algorithm ‘BP’
Trains networks computing mappings $\mathbb{R}^n \to [-1, 1]$
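A brief sketch of the update BP performs (standard gradient descent on an error function; the symbols $E$ for the error and $\eta$ for the learning rate are assumptions, as the slide does not name them):

$$\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}$$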
L2 = weight decay
• Regularization term that penalizes big weights
• Weight decay value determines how dominant regularization is during gradient computation
• Big weight decay coefficient → big penalty for big weights
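As a sketch of the penalized objective these bullets describe (the notation is an assumption; the slides give no formula), the training error $E$ becomes

$$E_{\text{reg}}(\mathbf{w}) = E(\mathbf{w}) + \frac{\lambda}{2} \sum_i w_i^2$$

where a bigger decay coefficient $\lambda$ means a bigger penalty for big weights.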
Early-stopping
• Use validation error to decide when to stop training
• Stop when the monitored quantity has not improved for n consecutive epochs
• n is called patience
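With the deck's sklearn tooling, one way to get this behavior is MLPClassifier's built-in early stopping, where n_iter_no_change plays the role of the patience n (the other hyperparameters here are illustrative):

```python
# Early stopping in scikit-learn: hold out part of the training data as a
# validation set and stop when its score stops improving.
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(
    hidden_layer_sizes=(8,),
    early_stopping=True,       # monitor held-out validation score
    validation_fraction=0.1,   # fraction of training data used for validation
    n_iter_no_change=10,       # patience: stop after 10 epochs w/o improvement
    max_iter=1000,
)
```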
Implementation of XOR Gate MLP
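A minimal sketch of an MLP learning XOR with sklearn (the layer size, activation, solver, and seed are assumptions; a different random_state or more hidden units may be needed if training stalls in a poor local minimum):

```python
# Train a small MLP on the XOR truth table.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR gate outputs

clf = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    solver="lbfgs", random_state=1, max_iter=2000)
clf.fit(X, y)

print(clf.predict(X))  # expected: [0 1 1 0]
```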
Deep Neural Networks
Thank You
ANY QUERIES?