GoogLeNet

GoogLeNet is a 22-layer deep convolutional neural network (counting layers with parameters) that won the 2014 ImageNet challenge. It uses inception modules that apply 1×1, 3×3, and 5×5 convolutions in parallel to the input features and concatenate the results, which lets the network capture multi-scale information more efficiently than a standard CNN. The inception module also uses 1×1 convolutions for dimensionality reduction to cut computational cost. Including pooling layers, GoogLeNet has 27 layers in total and contains 9 inception modules.


GoogLeNet

• Winner of the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
• 22 layers (counting only layers with parameters)
• 27 layers (including pooling layers)
• 9 inception modules
Inception module (figure)

Inception module with dimensionality reduction (figure)
Inception module
• Until now, a layer applied a single operation: a 3×3, 5×5, or 7×7 convolution, or a pooling operation.
• The inception module instead computes 1×1, 3×3, and 5×5 convolutions in parallel on the given input.
• Small kernels capture fine feature detail, while larger kernels capture coarser, larger-scale structure.
Applying various kernels on the given input image

• Computing 1×1, 3×3, and 5×5 convolutions on the same input yields outputs of different spatial sizes.
• By choosing appropriate padding for each kernel, all outputs can be made the same size, so they can be concatenated (see the examples below).
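For reference, the spatial output size of a convolution follows the standard convolution-arithmetic formula (general background, not specific to these slides):

output = floor((input + 2·padding − kernel) / stride) + 1

With stride 1, a 3×3 kernel with padding 1 gives (5 + 2 − 3) + 1 = 5, and a 5×5 kernel with padding 2 gives (5 + 4 − 5) + 1 = 5, so both preserve a 5×5 input, as the next examples show.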
• Example (without padding)
• 5×5×3 (F: 1×1, depth 3) → 5×5
• 5×5×3 (F: 3×3, depth 3) → 3×3
• 5×5×3 (F: 5×5, depth 3) → 1×1
• Example (with padding)
• 5×5×3 (F: 1×1, depth 3) → 5×5
• 5×5×3 (F: 3×3, depth 3, p=1) → 5×5
• 5×5×3 (F: 5×5, depth 3, p=2) → 5×5
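A minimal sketch verifying these shapes, assuming PyTorch as the framework (PyTorch is not used in the original slides; any framework with 2-D convolutions behaves the same way):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 5, 5)  # one 5×5×3 input, NCHW layout

# 1×1, 3×3 (p=1), and 5×5 (p=2) convolutions all preserve the 5×5 spatial size
conv1 = nn.Conv2d(3, 3, kernel_size=1)
conv3 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
conv5 = nn.Conv2d(3, 3, kernel_size=5, padding=2)

print(conv1(x).shape)  # torch.Size([1, 3, 5, 5])
print(conv3(x).shape)  # torch.Size([1, 3, 5, 5])
print(conv5(x).shape)  # torch.Size([1, 3, 5, 5])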
Importance of 1×1 convolution

• Direct 5×5 convolution from a 14×14×480 input to a 14×14×48 output: each output element needs 5×5×480 operations,
  i.e. total operations = (14×14×48)×(5×5×480) ≈ 112.9M.

• With a 1×1 reduction to 16 channels followed by the 5×5 convolution:
  total operations = (14×14×16)×(1×1×480) + (14×14×48)×(5×5×16) ≈ 5.3M.

Source: https://medium.com/coinmonks/paper-review-of-googlenet-inception-v1-winner-of-ilsvlc-2014-image-classification-c2b3565a64e7
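The operation counts can be reproduced with a few lines of Python (counting one multiplication per kernel element per output element and ignoring biases, as the slide does):

# Direct 5×5 convolution: 14×14×480 input -> 14×14×48 output
direct = (14 * 14 * 48) * (5 * 5 * 480)
print(f"direct 5x5: {direct / 1e6:.1f}M")     # 112.9M

# 1×1 reduction to 16 channels, then 5×5 convolution to 48 channels
reduced = (14 * 14 * 16) * (1 * 1 * 480) + (14 * 14 * 48) * (5 * 5 * 16)
print(f"1x1 then 5x5: {reduced / 1e6:.1f}M")  # 5.3M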
Average Pooling

• GoogLeNet ends with average pooling over the final 7×7×1024 feature map instead of a stack of fully connected layers, followed by a single linear classifier (figure from [1], [2]).
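A minimal sketch of this final stage in PyTorch, assuming the 7×7×1024 feature map and 1000-class output described in [1]; the layer sizes come from the paper, the code itself is only illustrative:

import torch
import torch.nn as nn

features = torch.randn(1, 1024, 7, 7)               # final feature map: 7×7, 1024 channels
pooled = nn.AvgPool2d(kernel_size=7)(features)      # 7×7 average pool -> 1×1024×1×1
logits = nn.Linear(1024, 1000)(pooled.flatten(1))   # single linear classifier, 1000 classes
print(pooled.shape, logits.shape)                   # (1, 1024, 1, 1) and (1, 1000)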
Inception module (worked example)
• Input: 28×28×192
• 1×1 conv, 64 filters → 28×28×64
• 1×1 conv, 96 filters → 28×28×96, then 3×3 conv, 128 filters → 28×28×128
• 1×1 conv, 16 filters → 28×28×16, then 5×5 conv, 32 filters → 28×28×32
• 3×3 max pool (stride 1, padding 1) → 28×28×192, then 1×1 conv, 32 filters → 28×28×32
  (without padding, the 3×3 pool would shrink the map to 26×26; stride 1 with padding 1 preserves 28×28)

• Concatenating along depth: 28×28×(64+128+32+32) = 28×28×256
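A minimal PyTorch sketch of this module, using the filter counts listed above; the stride-1, padding-1 max pool and the 32-filter 1×1 projection after it follow the inception (3a) configuration in [1], and ReLU activations are omitted for brevity:

import torch
import torch.nn as nn

class Inception3a(nn.Module):
    def __init__(self):
        super().__init__()
        self.b1 = nn.Conv2d(192, 64, kernel_size=1)            # 1×1, 64 filters
        self.b2 = nn.Sequential(
            nn.Conv2d(192, 96, kernel_size=1),                 # 1×1, 96 (reduction)
            nn.Conv2d(96, 128, kernel_size=3, padding=1))      # 3×3, 128
        self.b3 = nn.Sequential(
            nn.Conv2d(192, 16, kernel_size=1),                 # 1×1, 16 (reduction)
            nn.Conv2d(16, 32, kernel_size=5, padding=2))       # 5×5, 32
        self.b4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),  # 3×3 max pool, size preserved
            nn.Conv2d(192, 32, kernel_size=1))                 # 1×1, 32 (projection)

    def forward(self, x):
        # concatenate the four branches along the channel dimension: 64+128+32+32 = 256
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

x = torch.randn(1, 192, 28, 28)
print(Inception3a()(x).shape)   # torch.Size([1, 256, 28, 28])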
References
1. C. Szegedy et al., "Going deeper with convolutions," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1-9, doi: 10.1109/CVPR.2015.7298594.
2. https://medium.com/coinmonks/paper-review-of-googlenet-inception-v1-winner-of-ilsvlc-2014-image-classification-c2b3565a64e7
3. Deep Learning - Part 1, NPTEL course: https://nptel.ac.in/courses/106/106/106106184/#
