Deep Residual Learning For Image Recognition
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Motivation
• Common convention for image recognition was using CNNs (VGG, AlexNet)
• Depth of the network is crucially important
• Unable to scale networks beyond 20 layers
• Obstacles to network scaling: gradient degradation and optimization issues
Approach
• Types of Shortcuts: identity shortcuts (parameter-free) and projection shortcuts (1×1 convolutions, used when dimensions change)
• Function of Shortcuts: layers learn a residual function F(x) and the block outputs F(x) + x, so identity mappings become easy to represent and very deep networks remain trainable
• Bottleneck Architecture: a 1×1 convolution reduces dimensions, a 3×3 convolution operates on the reduced representation, and a 1×1 convolution restores dimensions, keeping the deeper models (ResNet-50/101/152) computationally tractable
• Setup: evaluated on ImageNet classification with plain and residual networks of matched depth
• Results: residual networks improve with depth where plain networks degrade; a 152-layer ResNet won the ILSVRC 2015 classification task
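The shortcut idea above can be sketched in a few lines of NumPy. This is a hypothetical dense-layer version for illustration; the paper's actual blocks use convolutions and batch normalization:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Residual block: y = relu(F(x) + x).

    F(x) is two weight layers; the identity shortcut adds the
    input x unchanged, so the layers only learn the residual.
    (Minimal dense-layer sketch, not the paper's conv blocks.)
    """
    f = relu(x @ w1) @ w2   # residual function F(x)
    return relu(f + x)      # identity shortcut, then ReLU

# If the residual weights are zero, F(x) = 0 and the block
# reduces to the identity for non-negative inputs -- which is
# why stacking extra residual blocks does not degrade accuracy:
# an unneeded block can simply pass its input through.
x = np.array([1.0, 2.0, 3.0])
w_zero = np.zeros((3, 3))
y = residual_block(x, w_zero, w_zero)
print(np.allclose(y, x))  # True
```

The key design choice is that the shortcut is parameter-free: it adds no computation, and gradients flow through it directly to earlier layers.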
Experimental Results
• Setup: ResNets tested up to 1,202 layers on CIFAR-10
• Findings: ResNets trained successfully without degradation, even at 1,202 layers
GoogLeNet (comparison)
• 22 layers
• Uses Inception modules, with parallel groups of filters learning different features
• Designed for parallel computing
Conclusion and Future Work
Summary