the designed network model. Despite the complexity of the acquired seedling scenes due to illumination variations, resemblances between weeds and crops at premature growth stages, and soil texture intricacies, the CNN succeeded in achieving high classification performance. Hence, this work aims specifically to develop a framework for a crop-weed discrimination system that applies a CNN to classify 12 crop and weed plant species, and to compare the proposed seedling classification system with other state-of-the-art techniques.

This article is structured as follows: Section 2 presents the related work. Section 3 details the CNN architecture devoted to developing the proposed deep plant seedling classification system. Section 4 describes and discusses the experimental results. Finally, Section 5 presents the conclusion and future work.

II. RELATED WORK

Seedling classification is a discipline that has gained substantial prominence in precision agriculture, since it permits remote observation of the fields and provides a foundation for more effective weed control. Fine-grained weed control depends considerably on the accuracy of the classification process, so that the crops are not damaged when the weeds are treated. Accordingly, misclassification has a direct impact on crop yield.

In the literature, classification of crop and weed species may be developed through two strategies. The first strategy is based on segmenting images into green and soil regions, extracting features from the green patches, and finally using classification techniques to obtain the specified classes. The second strategy, on the other hand, relies on deep learning techniques for plant seedling classification.

The work of [11] presented a method for classifying plant seedlings. This method aimed to improve the classification performance by consolidating the classification of the whole plants and of the individual leaves. Thus, leaves are first separated from the plants, then features are extracted from both the whole plants and the segmented leaves. The classification process is performed for the leaves and plants, and finally, Bayes belief integration is used to fuse the classification results. Bakhshipour and Jafari [12] applied two significant pattern recognition approaches, artificial neural networks (ANN) and support vector machines (SVM), to separate the weeds from sugar beet plants using shape features. The shape features comprise Fourier descriptors and moment invariant features. Four species of weeds prevalent in sugar beet fields were examined. The results indicate that the SVM slightly outperforms the ANN. In [13], the authors developed a machine vision technique that relies on video processing, together with a hybrid ANN and ant colony algorithm classifier, for sorting potato plants and three weed species. Texture features obtained from the gray level co-occurrence matrix (GLCM) and the histogram, moment invariants, color features, and shape features are extracted. Then, the Gamma test is used to select the significant features.

Furthermore, spectral reflectance measurements are used for discriminating between crops and weeds [14] and [15]. In [14], an SVM and spectral reflectance measurements are combined to develop a corn/silverbeet (crop/weed) differentiation system. The intensities of the reflectance of laser beams off soil and vegetation at three wavelengths are gathered by a weed sensor. These reflectance measurements are used to compute Normalized Difference Vegetation Indices (NDVIs). Two experiments are performed: in the first one, the obtained NDVI values are fed to an SVM to perform the classification, while in the second one, the raw reflected intensities are provided to the SVM for crop-weed discrimination. Strothmann et al. [15] proposed a crop-weed discrimination system based on in-field labeling. A multiwavelength laser line profile (MWLP) approach is used to scan plants and obtain spectral reflection intensities, scattering information at several wavelengths, and 3D data. The spectral features are applied for separating soil and biomass, while the 3D surface features are exploited for discriminating crops and weeds.

The study of [16] investigates the classification of maize, weeds, and soil by training a CNN to make a pixel-wise classification. The generated CNN is based on a modified VGG16 architecture in which the output layer is a convolutional layer instead of a fully connected layer; eventually, semantically segmented images are obtained. Zhang et al. [9] proposed a system for identifying broad-leaf weeds in pasture. Traditional machine learning techniques and deep learning approaches are investigated and compared. The results reveal that the deep learning technique using a CNN achieved high accuracy and robustness in detecting weeds in real-world pasture environments. The work in [17] presented an approach to classify the species of weeds and crops by employing a CNN. The developed CNN is based on a hybrid network of AlexNet and VGGNet: the normalization notion is inspired by AlexNet, while the filters' depth is selected based on VGGNet. Furthermore, incremental learning is applied in that work to learn new plant species.

III. PROPOSED CNN ARCHITECTURE

In this work, a CNN is adopted for plant seedling classification to automatically discriminate between weed species and crops at early growth stages. The proposed CNN consists of an input layer, hidden layers, and an output layer. The original seedling images are all resized to 128x128 pixels (a size determined empirically to give satisfactory performance with acceptable processing speed) and fed to the input layer. The hidden layers consist of 5 stages of learning layers, as illustrated in Fig. 1. The utilized filters are all of kernel size 3x3, with 32, 64, 128, 256, and 1024 filters for the convolutional layer within each stage, respectively.

All convolutional layers are followed by Rectified Linear Unit (ReLU) layers, which apply the function f(x) = max(0, x) to all values of the input; thus, the negative input elements are set to 0. This decreases the training time and provides nonlinear rectification, which increases the nonlinear characteristics of the model and the whole network without affecting the receptive fields of the convolutional layer [18].
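To make the described configuration concrete, a minimal Keras sketch of such a network is given below. The 2x2 pooling size, the placement of pooling and normalization within each stage, and the softmax output are assumptions inferred from Fig. 1 and the later discussion rather than explicit statements in the text.

```python
# A minimal sketch of the described architecture: five stages of 3x3
# convolutions (32, 64, 128, 256, 1024 filters) with ReLU, followed here by
# max pooling and batch normalization, then global average pooling and a
# fully connected output layer. Stage-internal ordering is an assumption.
from tensorflow.keras import layers, models

def build_seedling_cnn(num_classes, input_shape=(128, 128, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 256, 1024):
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.BatchNormalization())
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_seedling_cnn(num_classes=12)
model.summary()
```

Under these assumptions, five 2x2 poolings reduce a 128x128 input to 4x4 feature maps before global average pooling.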
Fig. 1. Block diagram of the proposed CNN: stacked 3x3 convolutional layers with max pooling and normalization, followed by global average pooling and a fully connected output layer with n outputs (one per class).
TABLE. I. PLANT SEEDLINGS DATASET DETAILS

Class  Species                                          Training set images  Test set images  Total images
1      Black grass (Alopecurus myosuroides)             263                  46               309
2      Charlock (Sinapis arvensis)                      390                  62               452
3      Cleavers (Galium aparine)                        287                  48               335
4      Chickweed (Stellaria media)                      611                  102              713
5      Common wheat (Triticum aestivum)                 221                  32               253
6      Fat hen (Chenopodium album)                      475                  63               538
7      Loose silky-bent (Apera spica-venti)             648                  114              762
8      Maize (Zea mays)                                 221                  36               257
9      Scentless mayweed (Tripleurospermum perforatum)  516                  91               607
10     Shepherd's purse (Capsella bursa-pastoris)       231                  43               274
11     Small-flowered cranesbill (Geranium pusillum)    490                  86               576
12     Sugar beet (Beta vulgaris)                       385                  78               463
       Total                                            4738                 801              5539

hyperparameters. On the other hand, in the test phase, the test images are utilized to provide an unbiased assessment of the final model fitted on the training dataset.

Extensive experiments are conducted to evaluate the proposed method and to compare it with existing methods. These experiments consider different numbers of species. The first experiment involves 7 species (cleavers, chickweed, wheat, maize, scentless mayweed, Shepherd's purse, and sugar beet); the training set comprises 2472 images and the test set includes 430 images. The second experiment encompasses 8 species (charlock, cleavers, chickweed, fat hen, maize, scentless mayweed, Shepherd's purse, and sugar beet); the training phase contains 3116 images, whereas the testing phase holds 523 images.

The third experiment comprises 10 species (black grass, charlock, cleavers, chickweed, fat hen, loose silky-bent, maize, scentless mayweed, Shepherd's purse, and sugar beet); the training phase contains 4027 images, whereas the testing phase holds 683 images.

Finally, in the fourth experiment, all 12 species are considered, divided into 4738 training images and 801 test images.

The proposed CNN is randomly initialized and then trained to perform the classification task, yielding a convolutional model. The weights of the CNN are updated using the training set, and the final weights are selected using the validation set: for each iteration, the training and validation errors are computed, and the weights that achieve the minimum validation error are chosen.
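As an illustration of this training and model-selection procedure, a hedged Keras sketch is given below; the directory layout, optimizer, batch size, and number of epochs are assumptions, as they are not stated here.

```python
# A sketch of training with the weights kept at the minimum validation error,
# using images resized to 128x128. Paths, optimizer, batch size and epoch
# count are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (128, 128)

# Assumed layout: one sub-folder per species, e.g. data/train/maize/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/validation", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

model = build_seedling_cnn(num_classes=12)  # from the earlier architecture sketch
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Keep only the weights that achieve the lowest validation loss.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_seedling_cnn.keras", monitor="val_loss", save_best_only=True)

model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[checkpoint])
```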
An Intel(R) Core(TM) i7-4702MQ CPU @ 2.20 GHz processor with 8 GB RAM is utilized for the implementation. Python (with the Keras library), installed in Anaconda on the Windows 10 operating system, is employed as the software tool.

D. Results

Table II presents the average validation performance of the proposed seedling classification for 7, 8, 10, and 12 species. It can be noticed from Table II that the validation accuracy reaches approximately 99% for all tested numbers of species. The validation recall, precision, and F1-score are roughly 98% for 7 and 8 species, while the validation recall reaches approximately 92% and 93% for 10 and 12 species, respectively. In addition, the validation precision attains nearly 94% and 95% for 10 and 12 species, respectively, whereas the F1-score is about 93% for both 10 and 12 species.
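Before turning to the test results, note that the averaged metrics reported here can be computed from the per-image predictions; the sketch below assumes arrays of true and predicted class labels and uses macro averaging over species, which is one plausible reading of how the per-class scores are combined.

```python
# A sketch of computing the reported metrics from label arrays; macro
# averaging over species is an assumption.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def summarize(y_true, y_pred):
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "f1_score":  f1_score(y_true, y_pred, average="macro"),
    }

# Hypothetical labels for a 3-class example:
print(summarize([0, 1, 2, 2, 1, 0], [0, 1, 2, 1, 1, 0]))
```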
On the other hand, Table III illustrates the average test performance. The results reveal that the average test accuracy, recall, precision, and F1-score are approximately 99% for 7 species. For 8 species, the average test accuracy and recall are nearly 98%, while the precision and F1-score are almost 99%. Furthermore, when using 10 species, the average test accuracy, recall, precision, and F1-score reach roughly 95%, 93%, 97%, and 94%, respectively. Finally, when testing the 12 species, the average test accuracy and F1-score attain approximately 94%, while the average test recall and precision reach nearly 93% and 95%, respectively.

TABLE. II. THE AVERAGE VALIDATION PERFORMANCE OF THE PROPOSED SYSTEM FOR 7, 8, 10 AND 12 SPECIES

Number of plant species  Accuracy (%)  Recall (%)  Precision (%)  F1-score (%)  Loss
7 species                98.63         97.9        98.2           97.79         0.0701
8 species                98.91         97.56       97.68          97.58         0.051
10 species               98.76         92.24       94.02          92.73         0.0489
12 species               99.01         92.64       94.86          92.93         0.0565

TABLE. III. THE AVERAGE TESTING PERFORMANCE OF THE PROPOSED SYSTEM FOR 7, 8, 10 AND 12 SPECIES

Number of plant species  Accuracy (%)  Recall (%)  Precision (%)  F1-score (%)
7 species                98.61         98.63       98.92          98.77
8 species                98.28         98.38       98.59          98.47
10 species               94.88         93.22       96.47          94
12 species               94.38         93.1        94.83          93.57

Moreover, the confusion matrices of the proposed seedling classification method for 7, 8, 10, and 12 species are displayed in Table IV, Table V, Table VI, and Table VII, respectively.

TABLE. IV. THE CONFUSION MATRIX FOR 7 SPECIES (1. CLEAVERS, 2. CHICKWEED, 3. WHEAT, 4. MAIZE, 5. SCENTLESS MAYWEED, 6. SHEPHERD'S PURSE AND 7. SUGAR BEET)

                  Predicted classes
True classes      1    2    3    4    5    6    7
1                48    0    0    0    0    0    0
2                 0  100    0    0    2    0    0
3                 1    0   31    0    0    0    0
4                 0    0    0   36    0    0    0
5                 0    1    0    0   89    0    1
6                 0    0    0    0    1   42    0
7                 0    0    0    0    0    0   78

TABLE. V. THE CONFUSION MATRIX FOR THE 8 SPECIES (1. CHARLOCK, 2. CLEAVERS, 3. CHICKWEED, 4. FAT HEN, 5. MAIZE, 6. SCENTLESS MAYWEED, 7. SHEPHERD'S PURSE AND 8. SUGAR BEET)

                  Predicted classes
True classes      1    2    3    4    5    6    7    8
1                60    1    0    0    0    0    1    0
2                 0   48    0    0    0    0    0    0
3                 0    0  101    0    0    1    0    0
4                 0    0    2   61    0    0    0    0
5                 0    0    0    0   36    0    0    0
6                 0    0    3    0    0   88    0    0
7                 0    0    0    0    0    1   42    0
8                 0    0    0    0    0    0    0   78

TABLE. VI. THE CONFUSION MATRIX FOR THE 10 SPECIES (1. BLACK GRASS, 2. CHARLOCK, 3. CLEAVERS, 4. CHICKWEED, 5. FAT HEN, 6. LOOSE SILKY-BENT, 7. MAIZE, 8. SCENTLESS MAYWEED, 9. SHEPHERD'S PURSE AND 10. SUGAR BEET)

                  Predicted classes
True classes      1    2    3    4    5    6    7    8    9   10
1                21    0    0    0    0   25    0    0    0    0
2                 0   62    0    0    0    0    0    0    0    0
3                 0    0   47    0    0    0    0    0    0    1
4                 0    0    0  100    0    0    0    2    0    0
5                 0    0    0    2   61    0    0    0    0    0
6                 2    0    0    0    0  112    0    0    0    0
7                 0    0    0    0    0    0   36    0    0    0
8                 0    0    0    2    0    0    0   89    0    0
9                 0    0    0    0    0    0    0    1   42    0
10                0    0    0    0    0    0    0    0    0   78
TABLE. VII. THE CONFUSION MATRIX FOR THE 12 SPECIES (1. BLACK GRASS, 2. CHARLOCK, 3. CLEAVERS, 4. CHICKWEED, 5. WHEAT, 6. FAT HEN, 7. LOOSE SILKY-BENT, 8. MAIZE, 9. SCENTLESS MAYWEED, 10. SHEPHERD'S PURSE, 11. SMALL-FLOWERED CRANESBILL AND 12. SUGAR BEET)

                  Predicted classes
True classes      1    2    3    4    5    6    7    8    9   10   11   12
1                20    0    0    0    1    0   25    0    0    0    0    0
2                 0   60    0    0    0    0    0    0    0    0    2    0
3                 0    0   48    0    0    0    0    0    0    0    0    0
4                 0    0    0  100    0    0    0    0    2    0    0    0
5                 1    0    0    0   31    0    0    0    0    0    0    0
6                 0    0    0    2    1   60    0    0    0    0    0    0
7                 6    0    0    0    0    0  108    0    0    0    0    0
8                 0    0    0    0    0    0    0   36    0    0    0    0
9                 0    0    0    3    0    0    0    0   88    0    0    0
10                0    0    0    1    0    0    0    0    1   41    0    0
11                0    0    0    0    0    0    0    0    0    0   86    0
12                0    0    0    0    0    0    0    0    0    0    0   78
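Confusion matrices such as those in Tables IV-VII can be produced directly from the test-set predictions; the short sketch below assumes integer class labels and uses scikit-learn rather than any tooling named by the authors.

```python
# A sketch of building a confusion matrix (rows: true classes,
# columns: predicted classes) as in Tables IV-VII.
from sklearn.metrics import confusion_matrix

# Hypothetical labels for a 3-class example.
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]

cm = confusion_matrix(y_true, y_pred)
print(cm)  # cm[i, j] counts samples of true class i predicted as class j
```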
E. Discussion

In this section, an inclusive discussion of the obtained results and a comparison with the state of the art are presented.

Considering the 7-species (3 crops and 4 weeds) and 8-species (2 crops and 6 weeds) experiments, the evaluation results show that the proposed technique has effectively and efficiently classified the examined species. The suggested system achieved about 99% average accuracy, recall, precision, and F1-score for 7 species, and approximately 98% average accuracy and recall, as well as nearly 99% average precision and F1-score, for 8 species.

On the other hand, for the 10-species (2 crops and 8 weeds) and 12-species (3 crops and 9 weeds) experiments, the performance is somewhat lower than that for 7 and 8 species. Despite this reduction, the classification performance is still high, and the empirical results demonstrate the capability of the proposed system in discriminating among the various species. For 10 species, the proposed method obtained average accuracy, recall, precision, and F1-score of about 95%, 93%, 97%, and 94%, respectively. Additionally, for 12 species, the system attained approximately 94%, 93%, 95%, and 94% average accuracy, recall, precision, and F1-score, respectively. It is apparent from Table VI and Table VII that black grass and loose silky-bent majorly affect the results, owing to their high similarity and insignificant differences at early growth stages; they are hard to distinguish even by the human eye. As for the other classes, the proposed model proved to be reliable and effective.

To scrutinize the performance of the proposed technique, a comparison is performed with the state of the art. The proposed method is compared with the existing methods [21], [11], and [17].

The average accuracies of the existing seedling approaches and the proposed work are quoted in Table VIII. It may be observed that the proposed deep seedling classification system significantly outperforms [21] and [11]. Furthermore, it can be noticed that, in [11], the average accuracies for the Cleavers and Fat hen classes are 81.4% and 81.6%, respectively, which are relatively weak; the proposed technique improves them to 100% and 96.83%, respectively. Moreover, both works [11] and [21] skipped very similar species such as black grass and loose silky-bent. However, the proposed seedling classification approach achieves an average test accuracy of 94.38% for the whole set of 12 species.

TABLE. VIII. COMPARISON OF THE PROPOSED METHOD AND EXISTING METHODS

Method           Number of species  Accuracy (%)
[21]             7                  95.8
[17]             7                  98.21
Proposed method  7                  98.61
[11]             8                  96.7
[17]             8                  98.23
Proposed method  8                  98.28
[17]             12                 93.64
Proposed method  12                 94.38

Over and above, the proposed system slightly exceeds the performance of [17] for 7 and 8 species, and significantly outperforms it for 12 species. Furthermore, the CNN of [17] involves 6 convolutional layers and 3 fully connected layers, whereas our proposed CNN comprises 5 convolutional layers and one fully connected layer. Adding more layers increases the number of hyperparameters and, hence, the complexity of the system. Thus, the proposed CNN architecture is much simpler and provides superior performance.

V. CONCLUSION

In this article, a CNN architecture is developed to discriminate between plant images of crop species and weed species at several early growth stages. The proposed CNN has achieved an enhancement in performance owing to the combination of the normalization layer, the global average pooling layer, and the choice of the depth of the filters. The results revealed that the elaborated CNN has an encouraging performance towards building a weed control system, which is a step towards precision agriculture. The proposed CNN model achieved average accuracy, recall, precision, and F1-score of 94.38%, 93.1%, 94.83%, and 93.57%, respectively, for discriminating 12 plant seedling species (3 crops and 9 weeds). Furthermore, its architecture is simpler than other existing CNN models utilized for plant seedling classification. Additionally, its performance is better than that of other existing methods.

In the proposed scheme, images that comprise a single plant species are classified; thus, for classifying images with many plant species, a segmentation stage may be added to the system.

In addition, the proposed technique may be expanded to incorporate new plant species. Besides, the proposed technique may be implemented as part of an IoT system for weed control, which can help to apply herbicides directly on the weeds without harming the crops.
REFERENCES
[1] The Role of International Institutions in Economic Development and Poverty Reduction in the Developing World, Food and Agriculture Organization of the United Nations, Rome, 2018. URL: http://www.fao.org/3/I9900EN/i9900en.pdf
[2] L.O.L.A. Silva, M.L. Koga, C.E. Cugnasca, and A.H.R. Costa, "Comparative assessment of feature selection and classification techniques for visual inspection of pot plant seedlings", Computers and Electronics in Agriculture, vol. 97, pp. 47-55, 2013.
[3] D. Nkemelu, D. Omeiza, and N. Lubalo, "Deep convolutional neural network for plant seedlings classification", Computer Vision and Pattern Recognition, arXiv:1811.08404, 2018.
[4] A. Dhomne, R. Kumar, and V. Bhan, "Gender recognition through face using deep learning", Procedia Computer Science, vol. 132, pp. 2-10, 2018.
[5] M. Radovic, O. Adarkwa, and Q. Wang, "Object recognition in aerial images using convolutional neural networks", Journal of Imaging, vol. 3, no. 21, 2017.
[6] N. M. Zayed and H. A. Elnemr, "Deep Learning and Medical Imaging", in Intelligent Systems for Healthcare Management and Delivery, chapter 5, pp. 101-147, 2019.
[7] P. Bonnet, H. Goëau, S. Hang, M. Lasseck, M. Šulc, V. Malécot, P. Jauzein, J. Melet, C. You, and A. Joly, "Plant identification: experts vs. machines in the era of deep learning: deep learning techniques challenge flora experts", in: A. Joly, S. Vrochidis, K. Karatzas, A. Karppinen, and P. Bonnet (eds.), Multimedia Tools and Applications for Environmental & Biodiversity Informatics, Multimedia Systems and Applications, Springer, Cham, chapter 8, pp. 131-149, 2018.
[8] Z. Qiu, J. Chen, Y. Zhao, S. Zhu, Y. He, and C. Zhang, "Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network", Applied Sciences, vol. 8, no. 2, 212, 2018.
[9] W. Zhang, M. F. Hansen, T. N. Volonakis, M. Smith, L. Smith, J. Wilson, G. Ralston, L. Broadbent, and G. Wright, "Broad-leaf weed detection in pasture", 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, pp. 101-105, 2018.
[10] K. P. Ferentinos, "Deep learning models for plant disease detection and diagnosis", Computers and Electronics in Agriculture, vol. 145, pp. 311-318, 2018.
[11] M. Dyrmann, P. Christiansen, and H. S. Midtiby, "Estimation of plant species by classifying plants and leaves in combination", Journal of Field Robotics, vol. 35, pp. 202-212, June 2017.
[12] A. Bakhshipour and A. Jafari, "Evaluation of support vector machine and artificial neural networks in weed detection using shape features", Computers and Electronics in Agriculture, vol. 145, pp. 153-160, 2018.
[13] S. Sabzla, Y. A. Gilandeha, and H. Javadikia, "Developing a machine vision system to detect weeds from potato plant", Journal of Agricultural Sciences, vol. 24, pp. 105-118, 2018.
[14] S. Akbarzadeh, A. Paap, S. Ahderom, B. Apopei, and K. Alameh, "Plant discrimination by Support Vector Machine classifier based on spectral reflectance", Computers and Electronics in Agriculture, vol. 148, pp. 250-258, 2018.
[19] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift", The 32nd International Conference on Machine Learning (ICML 2015), Lille, France, July 6-11, 2015.
[20] T. Giselsson, R. Jørgensen, P. Jensen, M. Dyrmann, and H. Midtiby, "A public image database for benchmark of plant seedling classification algorithms", Computer Vision and Pattern Recognition, arXiv:1711.05458, November 2017.
[21] P. Christiansen and M. Dyrmann, "Automated classification of seedlings using computer vision", Technical report, Aarhus University, Aarhus, 2014.