
Classification of Mango (Mangifera Indica L.) Fruit Varieties Using Convolutional Neural Network

Sapan Naik
Babu Madhav Institute of Information Technology, Uka Tarsadia University, Surat, India.
[email protected]

Hinal Shah
Babu Madhav Institute of Information Technology, Uka Tarsadia University, Surat, India.
[email protected]

Abstract – Automation in the classification and grading of mango (Mangifera Indica L.) is important for farmers as well as consumers for identifying mango quality. This paper addresses the issue of classifying mango fruit using a non-destructive method. Fruit classification is the prerequisite stage of fruit grading. Advances in deep learning and convolutional neural networks (CNN) have proved to be a boon for image classification and recognition tasks and can be used for fruit recognition. In this paper a pre-trained CNN is used for mango classification. Expert knowledge has been collected and a mango image dataset of 1,082 images has been created with seven different categories of mango: Kesar, Rajapuri, Totapuri, Langdo, Aafush, Dahseri and Jamadar. The CNN is tuned and trained on the mango dataset, and four modern CNN architectures are compared, namely Inception, Xception, DenseNet and MobileNet. Experiment results show that the MobileNet model is the fastest and DenseNet the slowest in terms of execution time among the four models. The Xception and DenseNet models give the highest accuracy of 91.42%. The accuracy achieved by Inception is 90% and the time required to grade a single mango is 9.78 seconds. Mango classification is also performed using a traditional feature-extraction method with a classifier, where Histogram of Oriented Gradients (HOG), Scale-Invariant Feature Transform (SIFT) and chain code methods are used as feature extractors and a multiclass Support Vector Machine (SVM) as the classifier; 80% accuracy is achieved using this method.

Index Terms—Convolutional Neural Network (CNN), Mango, Classification.

I. INTRODUCTION

Agriculture plays a crucial role in the economy of India: it contributes 16.5% of GDP by sector (2016 est.), employs approximately 50% of the labor force (2014 est.) and accounts for 10% of total exports. Agriculture is the sole financial source for 70% of the agricultural labor force and the common man [1]. For a developing country like India, post-harvest procedures are a major issue. The post-harvest phase normally contains processes such as cooling, cleaning, sorting, grading and packing. Sorting and grading are important aspects of analyzing fruits. Parameters for non-destructive fruit classification and grading include composition, defects, size, shape, strength, flavor and color [2].

Current grading systems have a few limitations: they are time-consuming, laborious, less efficient, monotonous and inconsistent, whereas automatic systems provide rapid, economic, hygienic, consistent and objective assessment. This motivated us to propose work in the field of post-harvesting, mainly in fruit grading. Classification being the initial stage of fruit grading, we have considered it here.

In this paper, mango (Mangifera Indica L.) classification is performed on seven different varieties of mango fruit, as mango is an extraordinary product that meets high quality standards and is filled with ample nutrients. There are about 1,000 varieties of mango cultivated in India, but only a small number of varieties are cultivated commercially across India or in other countries. With the largest area under mango cultivation, Gujarat is a strong mango-growing state for economic growth, with varieties ranging from Jamadar, Totapuri, Dahseri, Neelum, Langdo, Kesar, Payri and Alphonso to Rajapuri [3].

The current trend shows the popularity of deep learning and convolutional neural networks (CNN) [4]. Developments in deep learning and CNNs have led the field of computer vision, and image classification in particular, for some time now. Deep learning automatically learns the features of images and extracts global features and contextual details, which drastically reduces errors in image recognition [5]. It all started when Hinton's team won the ImageNet image classification championship, after which intense interest in deep learning was observed [6]. QuocNet, AlexNet, Inception and BN-Inception-v2 are a few of the models proposed later that exhibit superior results. A 70% improvement in results was observed when Google trained a nine-layer neural network on 10 million random images and performed classification on an ImageNet data set of 2,000 categories [7]. PASCAL-VOC, the state-of-the-art detection framework [8], consists of two stages.
Color (RGB) and Near-Infrared (NIR) images are combined using early and late fusion methods in a Faster R-CNN model to detect seven different fruits [9]; the pre-trained R-CNN takes four hours to process fully and to train a new fruit. The fruit recognition system presented in [10] uses a selective search algorithm and the fruit image's entropy for selecting the fruit region, which is given as input to a CNN, and finally a voting mechanism is used for classification. K-means feature learning is used as a pre-training process with a CNN for weed identification in [11], where 92.89% accuracy is achieved and it is concluded that fine tuning can improve results. For online prediction of food materials, a fast auto-clean CNN model is proposed in [12]; the model uses adaptive learning based on an auto-clean task and a multiclass prediction task, and gives precise and fast output. Seven classes of mixed-crop images, namely oil radish, barley, weed, stump, soil, equipment and unknown, are classified using a deep convolutional neural network in [13], where a modified version of VGG-16 is used for the implementation; 79% accuracy is achieved, which shows the potential of deep learning.

A multi-class kernel support vector machine (kSVM) with color histogram, texture and shape features is used for fruit classification in [14]. A split-and-merge algorithm is used for segmentation and principal component analysis (PCA) for dimensionality reduction. Winner-Takes-All SVM, Max-Wins-Voting SVM and Directed Acyclic Graph SVM are used as multiclass SVMs, with linear, homogeneous polynomial and Gaussian radial basis kernels. The results conclude that Max-Wins-Voting SVM with a Gaussian radial basis kernel performs best, with 88.2% accuracy, and that Directed Acyclic Graph SVM is the fastest. Crop and weed plants are discriminated without segmentation in [15], where a Random Forest classifier, Markov Random Field and interpolation methods are used; experiments performed on organic carrot give 93.8% average accuracy. 86% classification accuracy is achieved on 15 classes of 2,635 fruit images in [16], where color and texture features are used with a minimum-distance classifier and co-occurrence and statistical features are computed from the sub-bands of the wavelet transform.

This paper presents a solution for mango category classification using four modern CNN architectures and also using a traditional feature-extraction method. The paper is organized as follows: materials and methods are discussed in Section II, the results of the experiments are discussed in Section III, and finally the work is concluded with future directions.

II. MATERIAL AND METHODS

For mango classification, we assembled data on almost 100 different varieties of mango, with their features, from Navsari Agriculture University, Gujarat, and Paria Farm. Seven easily available and popular mangoes of the south Gujarat region were selected for the experiment: Kesar, Rajapuri, Totapuri, Langdo, Aafush, Dahseri and Jamadar. A mixed image dataset for mango classification has been created; details of the dataset are given in Fig. 1.

Fig. 1. Mix image dataset - details with sample images

The following sections give an overview of CNNs and how to tune, train and implement a CNN. A CNN consists of multiple levels, where each level consists of multiple training stages; the input and output of each training stage are images or sets of images known as feature maps [17].


A. Overview of CNN
The basic structure of a CNN contains four kinds of layers, namely convolution, non-linearity, pooling and fully connected. The number of each layer depends on the architecture one uses. A visualization of a CNN architecture is available at [18].

1) Convolution Layer
The first layer of a CNN is always a convolutional layer, and its input is the input image. To understand the working of the convolutional layer, suppose an image of 32 x 32 x 3 pixel values. Assume that a spotlight shines at the top left corner of the image and covers a 3 x 3 area; visualize this spotlight sliding across all areas of the input image, as shown in Fig. 2(A).

This spotlight is called a filter, neuron or kernel in the machine learning field, and the section it slides over is called the receptive field. The filter is itself an array of numbers, called weights or parameters. Here the filter's dimensions are 3 x 3 x 3 because we need to take the depth of the input image into consideration: the depth of the filter is the same as the depth of the input image. The initial position of the filter is the top left corner of the image. While the filter slides (convolves), it multiplies the values in the filter with the original pixel values of the image (element-wise multiplication), and a single number is computed by summing up all these products. The same process is repeated for every location of the image: the filter moves to the right by 1 pixel (one unit) and the computation is repeated, each move producing one new number. Once the sliding is over, we obtain a reduced two-dimensional array of numbers. This two-dimensional array produced by convolving is called an activation map or feature map [19].

What actually happens in a convolutional layer is that, instead of one filter, we use multiple filters of the same kind, where each filter is used for identifying a different feature of the image. These filters work as feature identifiers. A small example of a curve-detector filter is shown in Fig. 2(B). As shown in Fig. 2(B), if a 7 x 7 filter that represents a curve comes across the same kind of curve in a mango image, the multiplication and summation produce one large number; this indicates that the feature is present in the image. Likewise, if the curve is not present in the image, the filter produces a result that tends to 0, as shown in Fig. 2(C), so we can say that the particular feature is not present in the image.

The activation map is the output of the convolutional layer: it illustrates the parts of the image where the feature is most likely to be present. A larger number of filters gives a greater depth of the activation map, which means more information about the input volume [20]. More details about filters and their visualization can be found in [21].

Fig. 2. Operation in convolution layer (A); 3×3 filter representation (B); convolving operation when feature is available (C); convolving operation when feature is not available
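To make the slide, multiply and sum operation described above concrete, the short NumPy sketch below convolves a single-channel image with one 3 x 3 filter using a stride of 1 and no padding. It illustrates the mechanism only and is not code from the paper.

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Valid convolution of a 2-D image with a 2-D filter (no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out_h = (ih - kh) // stride + 1
    out_w = (iw - kw) // stride + 1
    feature_map = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            # Receptive field: the patch the filter currently covers.
            patch = image[y * stride:y * stride + kh, x * stride:x * stride + kw]
            # Element-wise multiplication followed by summation gives one number.
            feature_map[y, x] = np.sum(patch * kernel)
    return feature_map

image = np.random.rand(32, 32)           # toy single-channel 32 x 32 image
kernel = np.random.rand(3, 3)            # one 3 x 3 filter (its weights)
print(convolve2d(image, kernel).shape)   # (30, 30) activation map
```

For a real 32 x 32 x 3 input, the 3 x 3 x 3 filter would also sum over the depth dimension, and a convolutional layer stacks one such activation map per filter.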

2) Non-linearity Layer
Various activation functions are applied in this layer. Common activation functions are the Rectified Linear Unit (ReLU), sigmoid and tanh. ReLU is generally preferred because the training process becomes faster with it.

3) Pooling Layer
A pooling layer normally comes after the convolutional layer to decrease the spatial size: the height and width are reduced but not the depth. As the number of parameters is reduced, the computation is also reduced. One more benefit of pooling is that overfitting is avoided due to the smaller number of parameters. Max pooling is the most common form of pooling; an example is shown in Fig. 3. A filter of size m x m with the maximum operator is applied over each m x m sized part of the image. If the average is used instead of the maximum operator, it is called average pooling.

Fig. 3. Example of max pooling in CNN
The most common pooling uses a filter of size 2 x 2 and a stride of 2, which basically halves the height and width of the input. After this, in the flattening step, the feature map is converted into a single vector so that it can be given as input to the artificial neural network.
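The three operations just described (ReLU activation, 2 x 2 max pooling with stride 2, and flattening) can be sketched as follows; this is a simplified illustration under the same NumPy assumption as the previous sketch, not the paper's implementation.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: negative values become 0."""
    return np.maximum(0, x)

def max_pool_2x2(feature_map):
    """2 x 2 max pooling with stride 2: halves height and width."""
    h, w = feature_map.shape
    h, w = h - h % 2, w - w % 2                 # drop an odd trailing row/column if present
    blocks = feature_map[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

activation_map = np.random.randn(30, 30)        # e.g. output of the convolution sketch above
pooled = max_pool_2x2(relu(activation_map))     # shape (15, 15)
flattened = pooled.reshape(-1)                  # single vector for the fully connected layer
print(pooled.shape, flattened.shape)            # (15, 15) (225,)
```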
4) Fully Connected Layer
In the fully connected layer of the neural network, each neuron receives input from all neurons of the previous layer. The output of this layer is computed using a matrix multiplication followed by a bias offset: all neurons of the previous layer connect to each output neuron to generate a specific output.
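The matrix multiplication followed by a bias offset can be written out directly. The dimensions below (a 225-element input vector and 7 output classes, matching the seven mango varieties) are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

x = np.random.rand(225)             # flattened feature vector from the pooling stage
W = np.random.randn(7, 225) * 0.01  # weights: one row per output class
b = np.zeros(7)                     # bias offset

scores = W @ x + b                  # matrix multiplication followed by bias offset
probabilities = np.exp(scores) / np.exp(scores).sum()   # softmax over the 7 classes
print(probabilities.round(3))
```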
B. Tuning CNN Model
To make use of a CNN, we first need to create its model. Tuning of a CNN model is done in three phases: i) training, ii) validation and iii) testing. In the training phase, the network that performs the classification is prepared. In the validation phase, calibration is provided for the network: it corrects the classification performed in the training phase. After all corrections, the model is ready for the testing phase.

For designing a neural network, one needs to decide many things, such as the arrangement of the layers, the types of layers used, the number of neurons in each layer, and so on. Designing the architecture of a neural network is complex, and it is difficult to prepare our own architecture; standard architectures such as AlexNet, GoogleNet, Inception, ResNet and VGG are available and can be used directly for our work. In the beginning, it is preferable to make use of standard network architectures [22].

Once the architecture of the network is decided, the next important decision concerns the weights and biases (the parameters of the network). Backward propagation is used to set these parameters in the best manner. Once the parameters are finalized and training is completed, all parameters and the architecture are saved in binary files; these files are known as the model. To test a new input image, this model is loaded and used to predict the output [22].

The full dataset is not used to train the model: normally 70% to 80% of the images are used for training and the remaining images are used for validation and testing. Suppose we have 1,600 training images; we split them into small batches of size 16 or 32 (the batch size), so it takes 100 or 50 rounds (iterations), respectively, to complete one full pass over the training data, which is called an epoch. After this process is over, the model predicts outputs using the same procedure as in training, but this time the model does not learn from the new input.
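The split, batch-size and epoch bookkeeping described above amounts to simple arithmetic; the sketch below only mirrors the numbers in the paragraph and is not the authors' training script.

```python
# Hypothetical bookkeeping mirroring the paragraph above.
dataset_size = 2000                     # illustrative full dataset size
train_fraction = 0.8                    # 70-80% of images used for training
training_images = int(dataset_size * train_fraction)   # 1600 images for training
holdout_images = dataset_size - training_images        # 400 for validation and testing

for batch_size in (16, 32):
    iterations_per_epoch = training_images // batch_size
    print(batch_size, iterations_per_epoch)  # 16 -> 100 iterations, 32 -> 50 iterations per epoch
```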
C. Implementation of CNN
Using the transfer learning technique, a great deal of work is avoided. A fully trained model is taken that has been pre-trained on a set of categories such as ImageNet, and on this model the weights of the final layer are retrained for the new classes; all the other layers remain untouched and only the final layer is retrained [23]. This process is fast and does not require a graphical processing unit (GPU). Instead of training a full new network, this is a better alternative and it also gives very good results.
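A minimal sketch of this kind of retraining in Keras/TensorFlow, assuming an ImageNet-pre-trained Inception v3 backbone with a new softmax layer for the seven mango varieties; the authors used TensorFlow's Inception v3 retraining workflow, and the code below is only a hypothetical equivalent, not their script.

```python
import tensorflow as tf

NUM_CLASSES = 7  # Kesar, Rajapuri, Totapuri, Langdo, Aafush, Dahseri, Jamadar

# Pre-trained backbone with its ImageNet classification head removed.
base = tf.keras.applications.InceptionV3(weights="imagenet",
                                          include_top=False,
                                          input_shape=(299, 299, 3))
base.trainable = False  # keep the pre-trained lower layers untouched

# Only this new final layer is trained for the mango categories.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # 0.01 as in Section III
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_dataset, validation_data=val_dataset, epochs=...)   # datasets assumed
```

The other three architectures compared later (Xception, DenseNet and MobileNet) are also available under tf.keras.applications and can be substituted in the same way.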
III. EXPERIMENTS AND RESULT DISCUSSION

An algorithm for digit classification using HOG features and a multiclass SVM is proposed in [24]; additional features are found in the same reference, where the SIFT technique is used to extract features from the image. For the classification of mango, shape plays an important role, and the chain code is a good shape feature extractor [25], so we designed our own chain code for shape feature extraction. Based on this study, a basic classification method is implemented by combining features extracted using the HOG, SIFT and chain code techniques, with a multiclass SVM as the classifier.

The procedure is simple: the input image is segmented, features are extracted from the segmented image, and the extracted features are provided as input to the classifier. Due to the white background of our dataset, a simple thresholding method is used to segment the image. HOG, SIFT and chain code features are then extracted and provided as input to the multiclass SVM.
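A condensed sketch of this pipeline, assuming OpenCV (with SIFT support), scikit-image and scikit-learn. The authors do not give their exact preprocessing, chain code design or SVM settings, so every parameter and helper below is illustrative.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Map (dx, dy) steps between consecutive contour points to 8-direction Freeman chain codes.
DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def extract_features(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (256, 256))

    # Simple thresholding works because the dataset has a white background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # HOG features of the segmented fruit.
    hog_vec = hog(cv2.bitwise_and(gray, gray, mask=mask),
                  pixels_per_cell=(32, 32), cells_per_block=(2, 2))

    # SIFT descriptors, summarised by their mean to get a fixed-length vector (a simplification).
    _, desc = cv2.SIFT_create().detectAndCompute(gray, mask)
    sift_vec = desc.mean(axis=0) if desc is not None else np.zeros(128)

    # Chain-code direction histogram of the largest contour as a shape feature.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=cv2.contourArea).squeeze()
    codes = [DIRECTIONS.get(tuple(np.sign(q - p)), 0) for p, q in zip(outline, outline[1:])]
    chain_vec = np.bincount(codes, minlength=8) / max(len(codes), 1)

    return np.concatenate([hog_vec, sift_vec, chain_vec])

# train_paths, train_labels and test_paths are hypothetical lists of image files and categories.
# X_train = np.array([extract_features(p) for p in train_paths])
# clf = SVC(kernel="rbf").fit(X_train, train_labels)        # multiclass SVM
# predictions = clf.predict(np.array([extract_features(p) for p in test_paths]))
```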
We took 120 training images of Kesar, Aafush, Rajapuri, Totapuri, Jamadar and Dahseri (Langdo images were not considered, so only six categories were used in this experiment), with 20 images per category, and likewise 120 test images. A script was written in Python for the implementation. The experiments give 100% accuracy when the training images themselves are used for testing, and 80% accuracy is achieved on the test images. The time required to classify a single mango is 4.1 seconds in our experiment.

Experiments were performed on a MacBook Pro (13-inch, 2016) with a 2.9 GHz Intel Core i5 processor, 8 GB of 2133 MHz LPDDR3 memory and an Intel Iris Graphics 550 graphics card with 1523 MB of memory. For implementing the CNN we used TensorFlow. For the initial experiments, the pre-trained Inception v3 model is used: the old top layer is removed and retrained with our mango images, because none of the mango varieties are present in the original ImageNet classes. Apart from the top layer, all the lower layers have already been trained for classifying the 1,000 classes of the ImageNet dataset, and their weights and biases are used directly for the new object recognition task. This is the power of transfer learning, as discussed before [5].

An initial experiment was done to determine a proper number of training images and epoch value for our dataset. We tested the Inception v3 model by training on an individual mango category and testing on the same category, using different numbers of training images (10, 20, 30 and 40) and epoch values (1500 and 2000). Based on this, the accuracy for classifying each individual mango category is derived; Table I shows the experiment results. The learning rate is set to 0.01, the training batch size to 100, and the testing and validation percentages to 10 for training the deep CNN. Based on this initial experiment, we concluded that 60 images per category for training and 10 images per category for testing are good enough, and the epoch value selected for the final experiment is 2000.

TABLE I
ACCURACY (%) FOR DIFFERENT NUMBERS OF TRAINING IMAGES AND EPOCH VALUES ON SEVEN CATEGORIES OF MANGO

Category    Epochs    10 images    20 images    30 images    40 images
Aafush      1500      98           98           96           94
            2000      98           99           97           94
Dahseri     1500      67           68           58           50
            2000      74           75           66           50
Jamadar     1500      99           99           98           97
            2000      99           99           98           97
Kesar       1500      93           82           71           63
            2000      95           86           79           63
Langdo      1500      82           94           87           89
            2000      85           96           90           89
Totapuri    1500      94           89           86           70
            2000      95           91           90           70
Rajapuri    1500      95           97           96           91
            2000      97           98           97           91


A dataset of 1,082 images has been prepared, comprising 137 Aafush, 63 Dahseri, 146 Jamadar, 321 Kesar, 82 Langdo, 172 Rajapuri and 161 Totapuri samples. Photoshop CS5 is used for pre-processing of all image samples, and the size of all images is 512 x 512 x 3. As mentioned above, after the initial experiment we used 60 samples per category for the training sets (to increase the training set, images were randomly rotated 90 degrees clockwise and anti-clockwise). For testing, 10 images of each category (70 in total) are selected. Four CNN architectures, namely Inception v3, Xception, DenseNet and MobileNet, are tested for mango classification. For all CNN architectures the same configuration (in terms of training and testing images, epoch value, learning rate, training batch and validation percentage) is maintained. The reason for choosing only these four CNN models for the experiment is that the accuracy achieved by these models is better compared to other models [26].
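One possible way to generate the 90-degree clockwise and anti-clockwise copies mentioned above, assuming the Pillow library and hypothetical folder names (the paper does not state which tool was used for this augmentation):

```python
from pathlib import Path
from PIL import Image

def augment_with_rotations(src_dir, dst_dir):
    """Save 90-degree clockwise and anti-clockwise rotated copies of every image."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        img = Image.open(path)
        img.rotate(90, expand=True).save(dst / f"{path.stem}_ccw90.jpg")    # anti-clockwise
        img.rotate(-90, expand=True).save(dst / f"{path.stem}_cw90.jpg")    # clockwise

# augment_with_rotations("dataset/kesar", "dataset/kesar_augmented")   # hypothetical paths
```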
Table II and Table III show the experiment results. The values in Table II represent the number of correct predictions out of 10 input images for each category of mango, for all four models. Table III shows the overall accuracy, the error rate and the time required to classify a single mango for all four models.

TABLE II
NUMBER OF CORRECT PREDICTIONS (10 INPUT IMAGES PER CATEGORY)

Category    Inception v3    Xception    DenseNet    MobileNet
Aafush      9               10          10          10
Jamadar     7               8           10          10
Dahseri     10              9           8           10
Kesar       8               10          9           9
Langdo      9               10          10          10
Rajapuri    10              10          10          6
Totapuri    10              7           7           7

TABLE III
PERFORMANCE OF ALL CNN MODELS

Model           Accuracy (%)    Error Rate (%)    Time (seconds)
Inception v3    90              10                9.78
Xception        91.42           8.57              5.10
DenseNet        91.42           8.57              11.52
MobileNet       88.57           11.42             1.09
By observing the confusion matrices of all four models, we concluded that misclassification mainly happens with the Jamadar and Totapuri categories of mango. Fig. 4 shows some sample images of mangoes wrongly predicted by the four models; as depicted in the samples, the shape, color, size and even texture of these three types are nearly the same. The MobileNet model wrongly predicted 4 samples of Rajapuri mango, which is very unexpected: the shape of Rajapuri is very different from the other mango varieties, and even the other three models predicted it perfectly.

Fig. 4. Sample images of wrongly predicted mangoes

We have not compared our mango classification results with other mango classification work because the datasets used for the experimentation were prepared by us, and because we have not yet come across classification work on the same mango categories. We have, however, compared our results with the research work presented in [27], where the authors attempted fruit classification using a neural network; Table IV presents the results.

TABLE IV
COMPARISON OF THE PROPOSED METHOD WITH DIFFERENT ALGORITHMS

Algorithm              Classification accuracy (%)
GA–FNN                 84.8
PSO–FNN                87.9
ABC–FNN                85.4
kSVM                   88.2
FSCABC–FNN             89.1
Deep Learning - CNN    91.42 (Xception and DenseNet models)

IV. CONCLUSION AND FUTURE DIRECTIONS

Using deep learning (CNN), seven different categories of mango are classified; the accuracy achieved in the experiments with Inception v3 is 90% and the time required to grade a single mango is 9.78 seconds. Our experiment results show that the MobileNet model is the fastest and DenseNet the slowest in terms of execution time among the four models, while the Xception and DenseNet models give the highest accuracy of 91.42%. Major misclassification occurs with the Jamadar and Totapuri mango categories: because their global features are so similar, it is difficult to improve accuracy, and local features can be implemented and incorporated to increase the classification rate. The proposed methods can be generalized to other fruits of south Gujarat, India. Fine tuning of parameters and combining machine learning methods with the CNN can improve the accuracy of the results, and work can be done to decrease the time required to classify a mango. Based on the classification, grading and detection of skin disease can then be carried out.

ACKNOWLEDGMENT

The authors acknowledge the help of Yash Rana, Yash Patel and Purva Desai in the implementation, dataset preparation and content writing work.
REFERENCES
[1] Naik, Sapan, and Bankim Patel, "Machine Vision based Fruit Classification and Grading - A Review," International Journal of Computer Applications, vol. 170, no. 9, pp. 22-34, July 2017.
[2] Slaughter, D. C., "Nondestructive Maturity Assessment Methods for Mango," University of California, Davis, pp. 1-18, 2009.
[3] Naik, Sapan, and Bankim Patel, "Thermal imaging with fuzzy classifier for maturity and size based non-destructive mango (Mangifera Indica L.) grading," in Proc. Emerging Trends & Innovation in ICT (ICEI), 2017 International Conference, pp. 15-20. IEEE, 2017.
[4] Google Trends, Google. [Online]. Available: trends.google.co.in/trends/. Accessed: 17.06.2018.
[5] TensorFlow Tutorials. [Online]. Available: https://www.tensorflow.org/tutorials. Accessed: 17.06.2018.
[6] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[7] Le, Quoc V., "Building high-level features using large scale unsupervised learning," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8595-8598. IEEE, 2013.
[8] Everingham, Mark, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman, "The pascal visual object classes challenge: A retrospective," International Journal of Computer Vision, vol. 111, no. 1, pp. 98-136, 2015.
[9] Sa, Inkyu, Zongyuan Ge, Feras Dayoub, Ben Upcroft, Tristan Perez, and Chris McCool, "Deepfruits: A fruit detection system using deep neural networks," Sensors, vol. 16, no. 8, p. 1222, 2016.
[10] Hou, Lei, QingXiang Wu, Qiyan Sun, Heng Yang, and Pengfei Li, "Fruit recognition based on convolution neural network," in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 18-22. IEEE, 2016.
[11] Tang, JingLei, Dong Wang, ZhiGuang Zhang, LiJun He, Jing Xin, and Yang Xu, "Weed identification based on K-means feature learning combined with convolutional neural network," Computers and Electronics in Agriculture, vol. 135, pp. 63-70, 2017.
[12] Chen, Hao, Jianglong Xu, Guangyi Xiao, Qi Wu, and Shiqin Zhang, "Fast auto-clean CNN model for online prediction of food materials," Journal of Parallel and Distributed Computing, vol. 117, pp. 218-227, 2018.
[13] Mortensen, Anders Krogh, Mads Dyrmann, Henrik Karstoft, Rasmus Nyholm Jørgensen, and René Gislum, "Semantic segmentation of mixed crops using deep convolutional neural network," in International Conference on Agricultural Engineering, 2016.
[14] Zhang, Yudong, and Lenan Wu, "Classification of fruits using computer vision and a multiclass support vector machine," Sensors, vol. 12, no. 9, pp. 12489-12505, 2012.
[15] Haug, Sebastian, Andreas Michaels, Peter Biber, and Jorn Ostermann, "Plant classification system for crop/weed discrimination without segmentation," in 2014 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1142-1149. IEEE, 2014.
[16] Arivazhagan, S., R. Newlin Shebiah, S. Selva Nidhyanandhan, and L. Ganesan, "Fruit recognition using color and texture features," Journal of Emerging Trends in Computing and Information Sciences, vol. 1, no. 2, pp. 90-94, 2010.
[17] Bhandare, Ashwin, Maithili Bhide, Pranav Gokhale, and Rohan Chandavarkar, "Applications of Convolutional Neural Networks," International Journal of Computer Science and Information Technologies, pp. 2206-2215, 2016.
[18] "Convolutional Neural Network," MathWorks. [Online]. Available: in.mathworks.com/solutions/deep-learning/convolutional-neural-network.html. Accessed: 17.06.2018.
[19] Nielsen, Michael A., "Neural Networks and Deep Learning," 2015. [Online]. Available: http://neuralnetworksanddeeplearning.com.
[20] Deshpande, Adit, "A Beginner's Guide To Understanding Convolutional Neural Networks." [Online]. Available: adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/. Accessed: 15.06.2018.
[21] Zeiler, Matthew D., and Rob Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision, pp. 818-833. Springer, Cham, 2014.
[22] "Tensorflow Tutorial 2: Image Classifier Using Convolutional Neural Network," CV-Tricks.com. [Online]. Available: cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/. Accessed: 15.06.2018.
[23] Donahue, Jeff, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell, "DeCAF: A deep convolutional activation feature for generic visual recognition," in International Conference on Machine Learning, pp. 647-655, 2014.
[24] "Digit Classification Using HOG Features," MathWorks. [Online]. Available: https://in.mathworks.com/help/vision/examples/digit-classification-using-hog-features.html. Accessed: 15.07.2018.
[25] Moreda, G. P., M. A. Muñoz, M. Ruiz-Altisent, and A. Perdigones, "Shape determination of horticultural produce using two-dimensional computer vision - A review," Journal of Food Engineering, vol. 108, no. 2, pp. 245-261, 2012.
[26] Gogul Ilango, "Using Keras Pre-trained Deep Learning models for your own dataset." [Online]. Available: https://gogul09.github.io/software/flower-recognition-deep-learning. Accessed: 15.07.2018.
[27] Zhang, Yudong, Shuihua Wang, Genlin Ji, and Preetha Phillips, "Fruit classification using computer vision and feedforward neural network," Journal of Food Engineering, vol. 143, pp. 167-177, 2014.
