Mango Classification Using Convolutional Neural Networks
In this section, we discuss the implementation of our system, which involves the following steps:
1. Data Collection
2. Training and Testing

Data collection
The data collection phase involved collecting data from the Indian market. We intend to use this dataset solely for non-commercial, academic and research purposes. The dataset, which we use to retrain GoogLeNet, consists of 5,093 images spanning 5 classes and is insufficient to train a network as complex as GoogLeNet from scratch. The GoogLeNet architecture [6], which we use here for mango classification, is a 22-layer deep neural network that was initially trained on the ImageNet dataset [7], which consists of more than one million images spanning a thousand classes. The network was designed with practicality and computational efficiency in mind, so that inference can be run on individual devices, including those with limited computational resources.
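For concreteness, the collected images can be organised on disk with one sub-directory per mango class and loaded with a standard image-loading utility. The sketch below assumes TensorFlow 2.x and a hypothetical mango_dataset/ directory; the directory name, split ratio and image size are our illustrative assumptions, not details reported in the original experiments.

```python
import tensorflow as tf

# Hypothetical layout: mango_dataset/<class_name>/<image>.jpg, one folder per class.
# 80/20 split into training and validation subsets.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "mango_dataset",
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=(299, 299),   # Inception v3 expects 299x299 inputs
    batch_size=10,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "mango_dataset",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(299, 299),
    batch_size=10,
)

# Pixel values must be scaled to [0, 1] before being fed to the Inception v3 module.
print(train_ds.class_names)  # expected: the 5 mango classes
```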
Hence, we use the weights of GoogLeNet trained on the ImageNet dataset.

We remove the final layer of the Inception v3 GoogLeNet model, shown in Fig. 2, and train a new final layer to classify the mango dataset.

Fig 2: Model architecture of the Inception V3 model [8]

Transfer learning is a technique that shortcuts much of this effort by taking a piece of a model that has already been trained on a related task and reusing it in a new model. Here, we reuse the feature extraction capabilities of a powerful image classifier trained on ImageNet and simply train a new classification layer on top.
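A minimal sketch of this retraining setup is shown below, assuming TensorFlow 2.x with tensorflow_hub and the public Inception v3 feature-vector module; the module URL, learning rate and optimizer settings are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5  # five mango classes in the collected dataset

# Inception v3 trained on ImageNet, exposed as a frozen feature extractor
# (the "bottleneck" / image-feature-vector layer described below).
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/inception_v3/feature_vector/4",
    input_shape=(299, 299, 3),
    trainable=False,  # keep the ImageNet weights fixed
)

# Only the new fully-connected + softmax layer on top is trained.
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```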
The first phase analyzes all the images on disk and calculates and caches the bottleneck values for each of them. 'Bottleneck' is an informal term for the layer just before the final output layer that actually performs the classification. (TensorFlow Hub calls this an "image feature vector".) This penultimate layer has been trained to output a set of values that is good enough for the classifier to use to distinguish between all the classes it has been asked to recognize. It therefore has to be a meaningful and compact summary of the images, since it must contain enough information for the classifier to make a good choice from a very small set of values.
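The caching step can be sketched as follows, assuming the feature_extractor defined in the previous snippet and a hypothetical bottlenecks/ cache directory; the idea is simply to compute each image's 2048-dimensional feature vector once and reuse it on every training step.

```python
import os
import numpy as np
import tensorflow as tf

CACHE_DIR = "bottlenecks"  # hypothetical cache location
os.makedirs(CACHE_DIR, exist_ok=True)

def bottleneck_path(image_path):
    # One cached .npy file per source image.
    return os.path.join(CACHE_DIR, os.path.basename(image_path) + ".npy")

def get_bottleneck(image_path):
    """Return the cached 2048-d feature vector for an image, computing it once."""
    path = bottleneck_path(image_path)
    if os.path.exists(path):
        return np.load(path)
    img = tf.io.decode_jpeg(tf.io.read_file(image_path), channels=3)
    img = tf.image.resize(img, (299, 299)) / 255.0        # Inception v3 input size, [0, 1] range
    features = feature_extractor(tf.expand_dims(img, 0))  # shape (1, 2048)
    vec = features.numpy()[0]
    np.save(path, vec)
    return vec
```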
Once the bottlenecks have been computed, the actual training of the top layer of the network begins. The training accuracy shows what percentage of the images in the current training batch were labeled with the correct class. The validation accuracy is the accuracy on a randomly selected group of images from a different set.

A new fully-connected layer with softmax activation is added and trained to identify the new classes. To train the model we run 4,000 training steps. Each step selects ten images at random from the training set, retrieves their bottleneck values from the cache, and feeds them to the final layer of the CNN to obtain predictions. These predictions are compared against the true labels to measure the deviation, and the final layer's weights are updated according to that deviation through back-propagation, as used in the gradient descent algorithm: the error is calculated at the output, propagated back, and the weights are adjusted accordingly. This loop runs until the error rate reaches a minimum.
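The training loop for the new final layer can be sketched as below. It assumes the cached bottlenecks from the previous snippet together with hypothetical image_paths and labels lists built from the collected dataset, and it uses plain stochastic gradient descent; the learning rate and logging interval are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 5
TRAIN_STEPS = 4000
BATCH_SIZE = 10

# New fully-connected + softmax layer trained on the 2048-d bottleneck vectors.
top_layer = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

# image_paths and labels are assumed to come from the collected mango dataset.
for step in range(TRAIN_STEPS):
    # Pick ten random training images and look up their cached bottlenecks.
    idx = np.random.choice(len(image_paths), size=BATCH_SIZE, replace=False)
    x = np.stack([get_bottleneck(image_paths[i]) for i in idx])
    y = np.array([labels[i] for i in idx])

    with tf.GradientTape() as tape:
        preds = top_layer(x)      # forward pass through the final layer only
        loss = loss_fn(y, preds)  # deviation from the true labels
    # Back-propagate the error and adjust only the final layer's weights.
    grads = tape.gradient(loss, top_layer.trainable_variables)
    optimizer.apply_gradients(zip(grads, top_layer.trainable_variables))

    if step % 500 == 0:
        acc = np.mean(np.argmax(preds.numpy(), axis=1) == y)
        print(f"step {step}: loss={loss.numpy():.4f} train_acc={acc:.2f}")
```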
From Fig. 4 we can see that the test image is classified as class Badami with a score of 99%, compared with the predictions for the remaining classes.
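For completeness, a single test image can be classified with the retrained model along the following lines; this is a sketch that assumes the model and train_ds from the earlier snippets and a hypothetical test image path.

```python
import numpy as np
import tensorflow as tf

# class_names is assumed to come from the dataset loader above (the 5 mango varieties).
class_names = train_ds.class_names

img = tf.io.decode_jpeg(tf.io.read_file("test_mango.jpg"), channels=3)  # hypothetical test image
img = tf.image.resize(img, (299, 299)) / 255.0       # same preprocessing as training
probs = model.predict(tf.expand_dims(img, 0))[0]     # softmax scores over the 5 classes

best = int(np.argmax(probs))
print(f"predicted class: {class_names[best]} ({probs[best]:.2%} confidence)")
```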
7. REFERENCES

[5] "Image database: Mango 'Kent'," 2014. [Online]. Available: http://www.cofilab.com/portfolio/mangoesdb/.
[6] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. "Going deeper with convolutions." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.