AlexNet
Deep Learning
300 (3)
Dr. Lekshmi R. R.
Asst. Prof.
Department of Electrical & Electronics Engineering
Amrita School of Engineering
AlexNet
Neural Network
• AlexNet – by Alex Krizhevsky and team
• Solves many overfitting-related problems
• Looks similar to LeNet
• There are 60 million parameters
• Architecture includes:
– 5 Convolution layers
– 3 Fully connected layer
• includes output layer
• More filters per layer
– Hence called a deep neural network
• Includes dropout
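The layer sizes listed above can be tallied to check the parameter-count claim. A minimal sketch, assuming the single-tower variant of AlexNet (the original paper split the network across two GPUs, and its count is usually quoted as roughly 60 million); the layer figures below are the standard single-GPU ones:

```python
# (kernel size, in channels, out channels) for the 5 convolution layers
conv_layers = [
    (11, 3, 96),     # conv1: 11x11, stride 4
    (5, 96, 256),    # conv2: 5x5, padding 2
    (3, 256, 384),   # conv3: 3x3, padding 1
    (3, 384, 384),   # conv4: 3x3, padding 1
    (3, 384, 256),   # conv5: 3x3, padding 1
]
# (inputs, neurons) for the 3 fully connected layers
fc_layers = [
    (6 * 6 * 256, 4096),  # fc6: flattened 6x6x256 input
    (4096, 4096),         # fc7
    (4096, 1000),         # fc8: 1000-class output
]

# weights (k*k*cin*cout) plus one bias per output channel/neuron
conv_params = sum(k * k * cin + 1 for k, cin, cout in conv_layers for _ in [0] for k, cin, cout in [(k, cin, cout)]) if False else \
    sum(k * k * cin * cout + cout for k, cin, cout in conv_layers)
fc_params = sum(nin * nout + nout for nin, nout in fc_layers)
total = conv_params + fc_params
print(f"total parameters: {total:,}")  # about 62 million for this variant
```

Most of the parameters sit in the first fully connected layer, which is one reason the network needs dropout.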
Architecture
Convolution layer
• Input: 227x227x3
• Filter size: 11x11
• Stride: 4
• Filters: 96
(n + 2p − f)/s + 1 = (227 + 2×0 − 11)/4 + 1 = 55
• Output: 55x55x96
Max pooling
• Input: 55x55x96
• Filter size: 3x3
• Stride: 2
(n + 2p − f)/s + 1 = (55 + 2×0 − 3)/2 + 1 = 27
• Output: 27x27x96
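The size formula used on these slides can be written as a small helper (the function name is mine):

```python
def output_size(n: int, f: int, p: int = 0, s: int = 1) -> int:
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# The 3x3, stride-2 max pooling applied to the 55x55 conv1 output:
print(output_size(55, f=3, p=0, s=2))  # -> 27
```

The same helper reproduces every size computed in this deck, e.g. `output_size(227, f=11, s=4)` gives 55 for the first convolution.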
Convolution layer
• Input: 27x27x96
• Filter size: 5x5
• Padding: 2
• Stride: 1
(27 + 2×2 − 5)/1 + 1 = 27
• Output: 27x27x256
Max pooling
• Input: 27x27x256
• Size: 3x3
• Stride: 2
(n + 2p − f)/s + 1 = (27 + 2×0 − 3)/2 + 1 = 13
• Output: 13x13x256
Convolution layer
• Input: 13x13x256
• Filter size: 3x3, Padding: 1, Stride: 1, Filters: 384
(13 + 2×1 − 3)/1 + 1 = 13
• Output: 13x13x384
Convolution layer
• Input: 13x13x384
• Filter size: 3x3, Padding: 1, Stride: 1, Filters: 384
(13 + 2×1 − 3)/1 + 1 = 13
• Output: 13x13x384
Convolution layer
• Input: 13x13x384
• Filter size: 3x3, Padding: 1, Stride: 1, Filters: 256
(13 + 2×1 − 3)/1 + 1 = 13
• Output: 13x13x256
Max pooling
• Input: 13x13x256
• Size: 3x3
• Stride: 2
(n + 2p − f)/s + 1 = (13 + 2×0 − 3)/2 + 1 = 6
• Output: 6x6x256
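The whole chain of spatial sizes above can be traced with the same formula. A minimal sketch, using the layer hyperparameters given on these slides:

```python
def out_size(n, f, p, s):
    # floor((n + 2p - f) / s) + 1
    return (n + 2 * p - f) // s + 1

# (name, filter, padding, stride) for each layer, in order
layers = [
    ("conv1", 11, 0, 4),
    ("pool1", 3, 0, 2),
    ("conv2", 5, 2, 1),
    ("pool2", 3, 0, 2),
    ("conv3", 3, 1, 1),
    ("conv4", 3, 1, 1),
    ("conv5", 3, 1, 1),
    ("pool3", 3, 0, 2),
]

n = 227  # input image is 227x227x3
sizes = []
for name, f, p, s in layers:
    n = out_size(n, f, p, s)
    sizes.append(n)
print(sizes)  # [55, 27, 27, 13, 13, 13, 13, 6]
```

The final 6x6 map with 256 channels is what gets flattened for the fully connected layers.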
Fully connected
• Input: 6x6x256 (flattened to 9216)
• Neurons: 4096
Fully connected
• Input: 4096
• Neurons: 4096
Fully connected (Output)
• Input: 4096
• Neurons: 1000
• Activation: Softmax
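The output layer turns its 1000 class scores into probabilities with softmax. A minimal, numerically stable sketch (the toy 4-class input is illustrative; AlexNet's output layer has 1000 such scores):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1, -1.0])
print(sum(probs))  # probabilities sum to 1; largest logit gets largest probability
```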
Activity
Image: 227x227x3 | Operation: Conv 11x11, 96 kernels, stride 4 | Output: 55x55x96
Input: 13x13x384 | Operation: Conv 3x3, 256 kernels, padding 1 | Output: 13x13x256
Input: 13x13x256 | Operation: Max pool 3x3, stride 2 | Output: 6x6x256
Thank you