
Received 19 December 2023, accepted 7 January 2024, date of publication 17 January 2024, date of current version 25 January 2024.

Digital Object Identifier 10.1109/ACCESS.2024.3355017

Revolutionizing Agriculture: Machine and Deep Learning Solutions for Enhanced Crop Quality and Weed Control

SYED MUJTABA HASSAN RIZVI 1, ASMA NASEER 1, SHAFIQ UR REHMAN 2, SHEERAZ AKRAM 2, AND VOLKER GRUHN 3


1 Department of Computer Science, National University of Computer and Emerging Sciences, Lahore 54770, Pakistan
2 College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
3 Department of Software Engineering, Universität Duisburg-Essen, 45141 Essen, Germany
Corresponding author: Shafiq Ur Rehman ([email protected])
This work was supported by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) under Grant
IMSIU-RP23042.

ABSTRACT Agricultural systems are being revolutionized by emerging technologies that aim to improve the traditional agriculture system. The major goal is not just to enhance agricultural output per hectare but also to enhance crop quality while protecting the natural environment. Weeds pose a significant threat to crops as they consume nutrients, water, and light, thereby reducing crop productivity. Spraying the entire field uniformly to control weeds not only incurs high costs but also has adverse environmental effects. To address the limitations of conventional weed control methods, in this research, Machine Learning (ML) and Deep Learning (DL) based techniques are proposed to identify and categorize weeds in crops. For the ML-based techniques, several statistical and texture-based features are extracted, including central image moments and Hu moments, mean absolute deviation, Shannon entropy, gray level co-occurrence matrix (GLCM) and local binary pattern (LBP) features such as contrast, energy, homogeneity, dissimilarity, and correlation, and a summarized local binary pattern histogram. YOLOv8m is employed to identify weeds, and for weed classification, features extracted from two standard benchmark datasets, CottonWeedID15 and Early-crop-weed, are fed to a Support Vector Machine (SVM), Random Forest, and Artificial Neural Network (ANN), while the Synthetic Minority Oversampling Technique (SMOTE) is employed to balance the classes. In addition to the ML-based techniques, deep learners such as VGG16, VGG19, Xception, DenseNet121, DenseNet169, DenseNet201, and ConvNeXtBase are trained on raw data with balanced classes for automated feature extraction and classification. Among the ML-based techniques, SVM with a polynomial kernel achieves 99.5% accuracy on the Early-crop-weed dataset, and the Artificial Neural Network attains 89% accuracy on the CottonWeedID15 dataset. Meanwhile, the combined employment of ConvNeXt and Random Forest yields the highest accuracy among the DL-based techniques, specifically 98% on the Early-crop-weed dataset and 90% on the CottonWeedID15 dataset. The high accuracy achieved underscores the practical viability of these methods, offering a sustainable and cost-effective solution for modern agriculture.

INDEX TERMS ConvNeXtBase, DenseNet, generative AI, smart agriculture, VGG, Xception, YOLOv8.

I. INTRODUCTION
Every year, the world's population grows at a rate of 1.09%, and by 2050 the global population is projected to reach 9 billion. Due to this increase, the demand for food is also rising; to meet it, agricultural production needs to be increased by 70% [1]. However, the agricultural sector faces many challenges, such as a lack of cultivable land, saline land, barren land, climate change, water scarcity, and weeds in crops.

Artificial intelligence can play a crucial role in mitigating these issues in agriculture with the aid of computer vision, machine learning, and deep learning [2]. According to their leaf shape, weeds are divided into three primary groups: broadleaf weeds, grasses, and sedges. Weeds are non-essential plants found in different parts of the crop. They not only damage the crop but also provide shelter and breeding grounds for various pests. As per the European Crop Protection Association (ECPA), weeds and pests cause about a 40% loss in crop yields. Therefore, many methods have been devised to eliminate weeds from crops to avoid damage. One of these methods is to remove weeds by hand [3], which demands a great deal of labor and time. Another method is to remove weeds with specially designed mechanical devices that are moved between the rows of crops; however, these devices are not workable on crops that are not grown in rows. A third approach is to use chemical sprays. Farmers often spray uniformly across the entire field to keep weeds at bay, which raises production costs and has negative environmental consequences. Fig 1 shows the conventional weed-control techniques. Artificial intelligence can build a weed control system with the help of computer vision, deep learning, and machine learning, offering both economic and environmental benefits. The first stage in building an automated weed removal system is to correctly detect and recognize weeds [4]. Weed classification and detection in crops are challenging problems because weeds and crops often share the same color and texture, making them difficult to differentiate. Due to different sun angles, lighting varies on the surfaces of weeds and crops, producing illumination changes and shadows that create further difficulty in detection and classification. Image capture, pre-processing, feature extraction, and weed detection and classification are the four main phases of a typical weed detection system [5]. In recent years, with the advancement of science and technology, artificial intelligence has experienced rapid growth. Many new computer vision, machine learning, and deep learning algorithms have been introduced for classification and detection problems, which are practical only because of the graphics processing unit. Deep learning models, such as deep neural networks, demand high computation, but with the help of transfer learning we can reduce this cost. In transfer learning, already-learned weights are transferred from a related problem domain, and training solely entails fine-tuning model parameters using additional datasets in the target domain. Transfer learning is very helpful for achieving good results with less computation. The authors of [6] discovered that optimizing DL models on agricultural datasets helps decrease training epochs while enhancing model accuracy. On the Early-crop-weed dataset [6] and the Plant Seedlings dataset, they enhanced the classification accuracy by 0.51% to 0.89% by fine-tuning four DL models [7].

Major Research Contributions: Our research addresses the challenges in weed detection and classification by focusing on the features that can produce strong ML and DL-based models. The research makes significant contributions to the smart agriculture field by introducing feature-based approaches, generating comparative insight into the effectiveness of features for weed identification and classification, and proposing solutions for removing weeds.
• A meticulous effort has been made to annotate the ''CottonWeedID15'' dataset, which contains images of various weeds commonly found in cotton fields. These images are scrupulously annotated with rectangular regions of interest (ROI) and are released with this research [8]. By providing this annotated dataset, the research serves as a valuable resource for the research community for future investigations in the field of weed identification and control.
• Our research thoroughly investigates the efficacy of various statistical and texture features, encompassing simple moments, Hu moments, GLCM (Gray-Level Co-occurrence Matrix), and LBP (Local Binary Pattern) features, in addition to exploring the potential of deep learning features. This exploration of diverse feature sets is crucial for advancing the understanding of feature extraction methods and computation in the context of weed detection. Models and feature sets yielded results of more than 88% on the testing sets of both datasets.
• U2-Net is used in a novel way to remove the background from the images, and a rigorous comparison is made to evaluate the performance of the learners on images with and without a background.
• Our research disseminates an awareness message highlighting the transformative potential of deep learning technology in agriculture. By emphasizing the benefits of adopting deep learning techniques, we aim to inspire and educate farmers about the potential improvements in crop yield and overall prosperity achievable through the integration of advanced technologies. This awareness initiative is a proactive step toward bridging the gap between technological advancements and practical implementation in the agricultural sector, fostering a more informed and tech-savvy farming community.

The remaining sections of the article are structured as follows: Section II probes into related work and explores existing approaches relevant to the topic, while Section III outlines the proposed methodology (Fig 2) and details the techniques. Section IV discusses the experimental setup along with results obtained from the multiple experiments. Finally, the findings and implications of the research are thoroughly discussed in Section V, and the conclusion is presented in Section VI.


FIGURE 1. Weed removal approaches.

II. RELATED WORK
Machine learning and deep learning techniques are used in the detection and recognition of weeds in crops, producing astonishing results in precision farming. Reference [3] proposed two techniques for classification based on weed density, using three classes, each representing a level of weed density. In their first technique, after creating a density-based dataset, they converted each image to grayscale and reduced the image size to reduce computing time. A grey level co-occurrence matrix (GLCM) was calculated from the reduced-size images, and features such as correlation, contrast, homogeneity, and energy were extracted from each GLCM. They trained a support vector machine with a radial basis kernel and achieved a 10-fold cross-validation accuracy of 72.73%. They also compared radial basis and linear kernels; the highest accuracy achieved with a linear kernel was 51.52%, comparatively lower than the radial basis kernel. Random forest achieved a 69.70% accuracy after 10-fold cross-validation with GLCM features. In their second method, they extracted the green channel from the RGB images of the density-based dataset and calculated the mean, variance, kurtosis, and skew. A support vector machine with a radial basis kernel then achieved a 10-fold cross-validation accuracy of 84.85%.

To address dataset issues, the generative adversarial techniques of deep learning are critical for creating synthetic images [9]. The authors generated synthetic images with traditional augmentation as well as with a deep convolutional generative adversarial network (DCGAN), and used transfer learning to set the weights of a neural network initialized with ImageNet weights. In their DCGAN experiment, they used the PlantVillage dataset. The best FID score for synthetic tomato images, 86.93, was achieved after 46,000 iterations, while for synthetic black nightshade images the FID score was 146.85, achieved after 29,500 iterations. On a noisy test set, the performance of Inception-ResNet was 89.06%. With real and synthetic images, the Inception F1 score was 98.63, and on a noisy test set the performance of Inception was 87.05%.

Many researchers have used image processing techniques for weed identification. Reference [10] provided a review of such techniques. After collecting a dataset, researchers apply preprocessing, followed by image enhancement algorithms. Segmentation algorithms, which are threshold-based or learning-based, are applied to the enhanced images, and features are then extracted from the binary image. Feature extraction is based on morphology, spectral properties, visual texture, and spatial context. These features are fed into machine learning or deep learning models to classify the weed images. Deep learning models such as GANs and CNNs were also discussed; agriculture's massive dataset challenges require deep learning to solve.

Turfgrass is used in athletic grounds, lawns, golf courses, and many other areas, so [11] proposed methods to avoid weeds in turfgrass. They used a Sony DSC-HX1 camera for dataset collection and took images from multiple golf courses (Riverview, Sun City, Tampa, and Miami). Hydrocotyle spp., Hedyotis corymbosa, and Richardia scabra were the weeds used in their datasets. They trained with one weed species in turfgrass as well as with multiple weed species in turfgrass, using the VGGNet, GoogLeNet, and DetectNet architectures. On the validation dataset, the F1 scores of VGGNet and GoogLeNet were 0.9990 and 0.667 for Hydrocotyle spp., 0.9950 and 0.7091 for Hedyotis corymbosa, and 0.9911 and 0.6667 for Richardia scabra. For multiple species, the GoogLeNet F1 score was 0.72667 and the VGGNet F1 score was 0.9633 on the validation dataset.

Many classification and detection experiments have been conducted in agriculture as a result of the advent of deep learning algorithms. Reference [12] surveyed existing deep learning algorithms for weed identification and classification in various crops. The deep learning architectures used in the surveyed papers include VGGNet, modified Xception, Inception-ResNet, MobileNet, DenseNet, ResNet-50, VGG-16, VGG-19, Inception v3, SegNet-512, SegNet-256, YOLO-v3, tiny YOLO-v3, single-shot detectors, convolutional neural networks, SegNet, AlexNet, DeepLab-v3, U-Net, artificial neural networks, VGG-F, VGG-vd-16, hybrid networks, Faster R-CNN, ESNet, joint unsupervised learning deep clustering, LeNet, and others, applied to various crops and weeds. Furthermore, they discussed how GANs and synthetic data can also play an important role in catering to the problems of complex patterns in agriculture.

For weed removal applications, high precision is required, but achieving high precision in agriculture is still a challenging problem. In [13], the authors worked with sugar beet fields and four weed species: pigweed, lambsquarters, hare's-ear mustard, and turnip weed


(scientifically known as Amaranthus chlorostachys, Chenopodium album, Conringia orientalis, and Rapistrum rugosum). They used Fourier transform and moment features with SVM and ANN. The ANN achieved a 99.50% accurate classification of weeds and exhibited an overall accuracy rate of 92.92%. On the other hand, the SVM's overall accuracy was 95.00%, and it correctly classified 93.33% of weeds.

In [6], they generated the Early-crop-weed dataset, which includes tomato and cotton crops along with two weed species (black nightshade and velvetleaf). They combined fine-tuned pre-trained convolutional networks (Inception-ResNet, DenseNet, Xception, VGGNets, and MobileNet) with ''traditional'' machine learning classifiers (Logistic Regression, Support Vector Machine, and XGBoost), using transfer learning for training. Their results showed that the fine-tuned DenseNet with Support Vector Machine combination achieved a micro F1 score of 99.29%; other architectures also achieved more than 95% accuracy. In [14], they proposed several experimental approaches and explained how to fine-tune parameters and extract deep features using deep learning, combining them with machine learning algorithms. They used four public datasets: Flavia, Swedish Leaf, UCI Leaf, and PlantVillage. They extracted features with deep neural networks (AlexNet and VGG-16) and then applied classic machine learning classifiers (LDA and SVM) for classification. In their last experiment, they combined the features produced by AlexNet and VGG-16 and trained an end-to-end RNN on these features, producing classification results on test data. All their experiments produced more than 90% classification accuracy.

Broadleaf crops and weeds that also have broad leaves make it more difficult to identify broadleaf weeds inside broadleaf crops [15]. They used wheat and three weed species (cleavers, chickweed, and shepherd's purse). For weed detection, they used CenterNet2, Faster R-CNN, TridentNet, VFNet, and YOLO version 3; for weed classification, they used AlexNet, DenseNet, ResNet, and VGGNet. On weed detection, YOLOv3 achieved the highest validation F1 score, 0.65, compared to the other models, while the VGGNet and DenseNet classification F1 scores were 1, higher than the other models. In another work [16], they carried out a comparative performance analysis of three image classification models trained to classify various weed species, as well as a detection model developed to detect and classify weed species. The dataset contains 462 RGB photos of early-season weeds commonly found in corn and soybean crops (redroot pigweed, giant ragweed, foxtail, and cocklebur): 181 images of redroot pigweed, 173 of giant ragweed, 73 of foxtail, and 35 of cocklebur. They used ResNet50, VGG16, and Inception for image classification; with an accuracy of 98.90%, VGG16 was the best performing classification model.

In their research work, the authors of [17] presented a new dataset of 5187 colored images, captured under different natural light conditions, containing 15 different weed classes (Morning Glory, Carpetweed, Palmer Amaranth, Waterhemp, Purslane, Nutsedge, Eclipta, Sicklepod, Spotted Spurge, Ragweed, Goosegrass, Prickly Sida, Crabgrass, Swinecress, and Spurred Anoda). This study also evaluated 35 state-of-the-art deep learning models for multi-class weed identification; among all 35 trained models, the top five performers were ResNeXt101, RepVGG-B1, RepVGG-B2, ResNeXt50, and RepVGG-A2.

An improved YOLOv5 convolutional neural network was constructed for Solanum rostratum Dunal detection [18]. Solanum rostratum Dunal is classified as one of the most harmful weeds in the US and China. A total of 413 images of Solanum rostratum Dunal at different stages of growth were obtained using different devices. YOLOv5 is combined with the Convolutional Block Attention Module (CBAM) to increase the extraction of relevant features while suppressing others; this combination, known as YOLO-CBAM, is made up of four parts: input, backbone, neck, and prediction. The model achieves a precision of 0.9036, a recall of 0.9012, and an average precision of 0.9272. The research in [19] was carried out to identify weeds in bell pepper fields. During preprocessing, lighting variations and noise were removed, and data augmentation was applied to enhance quality and avoid overfitting. AlexNet, GoogLeNet, InceptionV3, and Xception were the convolutional neural network architectures applied in this research. All the models provided accuracies between 94.5% and 97.7%; overall, InceptionV3 provided the highest accuracy of 97.7%.

In weed identification tasks, speed, computation time, accuracy, and memory are very important considerations [20]. In their work, the authors focused on these factors and used lightweight deep learning models for weed identification. Using the SLIC superpixel technique, images were divided into 15336 segments: 3249 for soil, 7376 for soybeans, 3520 for grass, and 1191 for broadleaf weeds. For weed identification, they used MobileNetV2, ResNet50, and three custom models. The 5-layer CNN design had the lowest memory utilization and latency (1.78 GB and 22.245 ms, respectively), as well as the highest detection accuracy, 97.7%.

In [21], optimization algorithms (Adagrad, AdaDelta, Adaptive Moment Estimation (Adam), and Stochastic Gradient Descent (SGD)) were used with deep convolutional neural networks (AlexNet, GoogLeNet, VGGNet, and ResNet). VGGNet is particularly designed with small convolution kernels to limit the number of neurons and parameters, while ResNet (Residual Network) fixes the degradation problem of deep networks by using residual learning to train deeper networks. For the best-performing input image size, the classification accuracy hierarchy, from lowest to highest, was VGGNet,


GoogLeNet, AlexNet, and ResNet. The study [22] shows how Siamese neural networks (SNNs) can be used to solve large-dataset problems. A Siamese neural network with convolutional layers was used for training. The support dataset contains 1, 5, 10, 15, and 20 images of each type, while the query set contains 40 images of each. The support datasets were used to fine-tune the SNN, which was then evaluated on a testing set; a further enhancement in accuracy was observed, as the accuracy jumped from 67.5% and 66.6% to 70.1% and 70.0% on the validation and testing sets, respectively.

In [23], the main purpose was to train deep convolutional neural networks (DCNNs). Four DCNNs (GoogLeNet, ShuffleNet, MobileNet, and VGGNet) were assessed to find and differentiate weeds growing in bermudagrass turf. Both VGGNet and ShuffleNet demonstrated exceptional overall accuracy in the validation process, with values equal to or greater than 0.999. SE-YOLOv5x was tested for the first time on a lettuce crop and weeds dataset [24]. The dataset used in that work had five kinds of weeds and one lettuce crop. For comparison, SVM, YOLOv5x, SSD (VGG), SSD (MobileNetV2), Faster-RCNN (ResNet50), and Faster-RCNN (VGG) were used, and SE-YOLOv5x demonstrated superior performance in the classification of lettuce and weeds. The aim of [25] was to recognize plants in UAV images using the transformer architecture. The dataset was divided into five classes: weed, beet, off-type beet, parsley, and spinach. Each class contains 3200 to 4000 images, except off-type beet, which has only 653 samples. Random rotations and flips were performed, so the total dataset contains 19265 images. EfficientNet B0, EfficientNet B1, and ResNet 50 were the convolutional neural networks applied to the dataset, providing F1-scores of 98.7%, 98.9%, and 99.2%, respectively.

The original generative adversarial network, designed by Ian Goodfellow [26], uses two learning models: one called the generator and the other called the discriminator, trained in an adversarial process. Generators try to fool discriminators, and discriminators try to classify fake and real data; finding a discriminator with the highest classification efficiency and a generator that confuses the discriminator the most is the method for training a GAN model. This first GAN architecture was the next step toward augmentation [27]. That paper provides an overview of the GAN's architectural evolution and its applications in agriculture, evaluating the role GAN architectures play in weed detection, postharvest detection of fruit defects, plant phenotyping, plant health monitoring, animal farming, and aquaculture.

Handcrafted features are used with machine learning, and automated features through deep learning, to determine which features produce better results [28]. For handcrafted features, they used the local binary pattern and the grey level co-occurrence matrix. From the GLCM, they extracted features such as contrast, energy, dissimilarity, angular second moment (ASM), and correlation. Distance, angle, and the number of levels were the parameters used in the GLCM for feature extraction, while points and radius were the parameters used in the local binary pattern. They experimented with different numbers of features, and SVM produced the best results with 90% accuracy on 55 features (2 LBP8-1; 5 LBP16-2; 14 LBP24-3; 15 contrast; 15 dissimilarity; and 4 correlation).

Various models and algorithms exhibit distinct accuracies when applied to diverse datasets, each necessitating varying computational resources. It is imperative to delve into the intricacies of these models and algorithms, meticulously analyzing their features to gauge the requisite computational power. Developing methodologies to address weed-related challenges with efficient computation is paramount. Existing studies often overlook the nuanced impact of background areas on the accuracy of weed and crop identification under varying lighting conditions. A notable research gap lies in understanding how the background area influences the performance of different models, and a comprehensive exploration of this aspect is essential for advancing the precision and applicability of weed detection methodologies. Overcoming variations in lighting conditions and diverse weed species, and meeting the demand for expansive annotated datasets, is pivotal. Particularly in countries where traditional agricultural approaches persist, there is a pressing need to raise awareness about modern technologies. This not only promises increased yields and profitability but also advocates for environmentally sustainable practices by discouraging the excessive use of herbicides. Bridging this awareness gap would contribute significantly to advancing agricultural practices in the world.

III. PROPOSED METHODOLOGY

FIGURE 2. High level methodology diagram.

A. DATASETS
We use two datasets in our work: one called early-crop-weed [6] and the other CottonWeedID15 [17]. The early-crop-weed dataset has two weeds (velvetleaf and

black nightshade) and two crops (cotton and tomato), whereas CottonWeedID15 has fifteen weeds (carpetweed, crabgrass, eclipta, goosegrass, morningglory, nutsedge, Palmer amaranth, prickly sida, purslane, ragweed, sicklepod, spotted spurge, spurred anoda, swinecress, and waterhemp). Fig 3 and 4 show that both datasets are unbalanced.

FIGURE 3. Early-crop-weed dataset class imbalance.

FIGURE 4. CottonWeedID15 dataset class imbalance.

B. PREPROCESSING
1) ANNOTATION
For YOLO training, an annotated dataset is required; therefore, we annotated the images using LabelImg. We concentrated on creating bounding boxes only around the areas where weed leaves are present, maximizing the leaf coverage while minimizing the soil area inside each box. This approach also helps in reducing land pollution, since the spray is intended for the leaves and not for the soil. Fig 5 shows the annotation of a purslane class image.

FIGURE 5. Annotation of purslane class image from the CottonWeedID15 dataset.
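When set to YOLO export mode, LabelImg writes one plain-text label file per image, with box coordinates normalized by the image size. The helper below is an illustrative sketch of that format; the class id and pixel box are made-up values, not taken from our annotations.

```python
# Sketch: converting a pixel-space ROI to the normalized YOLO label format
# that LabelImg produces: "class x_center y_center width height", all in [0, 1].

def to_yolo_label(class_id: int, box: tuple, img_w: int, img_h: int) -> str:
    """box is (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a hypothetical purslane box (class 7 here is illustrative) in a 512x512 image.
print(to_yolo_label(7, (120, 80, 300, 260), 512, 512))
```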

2) GRAYSCALE AND RESIZE
To reduce computation, we converted the color images to grayscale, which allowed us to process a single-channel image. Since the dataset images had large dimensions, we resized them to 224 by 224. By reducing the image size, we were able to significantly reduce the computation time required for further processing.
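A minimal sketch of this preprocessing step using OpenCV; the file names are placeholders, as the paper does not show its exact implementation.

```python
import cv2

# Load, convert to single-channel grayscale, and resize to 224x224 as described.
img = cv2.imread("field.jpg")                      # BGR color image (placeholder path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # one channel instead of three
small = cv2.resize(gray, (224, 224), interpolation=cv2.INTER_AREA)
cv2.imwrite("field_gray_224.png", small)
```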
3) REMOVE BACKGROUND USING U2-NET
The U2-Net model is a deep learning model developed by researchers from Hefei University of Technology in China. It consists of 23 layers and is an improved version of the U-Net model. The U2-Net architecture is specifically designed to capture multi-scale contextual information and accurately detect salient objects in images. In U2-Net, the image is passed through an encoder, which includes multiple convolutional layers that extract features and reduce the image dimensions. The encoded features are then passed to the decoder, which consists of upsampling layers; these layers gradually increase the spatial dimensions while preserving the learned features. U2-Net also utilizes skip connections between the encoder and decoder, which help in achieving accurate image segmentation. The output of the decoder is a saliency map, a binary mask that helps in segmenting the image by highlighting the regions of interest. Fig 6 shows a background removed through U2-Net.

FIGURE 6. Background removed through U2-Net.
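Our exact U2-Net pipeline is not reproduced here; as an accessible stand-in, the sketch below uses the rembg package, which wraps a pretrained U2-Net salient-object model, to produce a comparable background-free image. The file names are placeholders.

```python
from PIL import Image
from rembg import remove  # rembg wraps a pretrained U2-Net salient-object model

# Sketch only: a stand-in for the paper's own U2-Net background-removal step.
img = Image.open("weed.jpg")       # placeholder input path
no_bg = remove(img)                # RGBA output; the background is made transparent
no_bg.save("weed_no_bg.png")
```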
4) SMOTE: SYNTHETIC MINORITY OVER-SAMPLING TECHNIQUE
SMOTE is a data upsampling technique that is helpful in addressing class imbalance. SMOTE aims to create synthetic samples that lie along the line segment connecting an original minority class sample and its nearest neighbor. SMOTE selects a random sample from the minority class and examines the most closely similar samples using k nearest neighbors; then, using interpolation, SMOTE generates a new sample, and this process continues until all classes are balanced.
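A minimal sketch of this balancing step with the imbalanced-learn implementation of SMOTE; the feature matrix here is random placeholder data, not our extracted features.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# X is an (n_samples, n_features) feature matrix and y the class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 55))                 # e.g. 55 texture/statistical features
y = np.array([0] * 100 + [1] * 20)             # an unbalanced two-class toy example

X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(np.bincount(y_res))                      # both classes now have 100 samples
```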
C. MANUAL FEATURES
In our first experiment, named manual features with background and without background, we used the pretrained U2-Net to remove the background from the images; this process helped isolate the area of interest. Next, we converted the color images to grayscale and, to reduce computational complexity, resized the images to a dimension of 224 pixels. For feature extraction, we utilized the Grey Level Co-occurrence Matrix (GLCM) approach. We calculated the GLCM using different angles (0°, 45°, 90°, and 135°) and considered neighboring pixel distances of 1, 3, and 5. These settings allowed us to capture various texture patterns and spatial relationships in the images. From the GLCMs, we derived several texture features, including energy, correlation, dissimilarity, homogeneity, contrast, and entropy, as well as the summation of the uniform local binary pattern histogram from the local binary pattern. Additionally, we computed statistical features such as mean, standard deviation, variance, mean absolute deviation, contrast, skewness, kurtosis, entropy, and image moments. Furthermore, we calculated the seven Hu moments [29], which are invariant image moments representing shape and geometric properties. To account for edges and finer details in the images, we performed the calculation of image moments and Hu moments twice: the first calculation was carried out on the original grayscale images, while for the second we applied the Prewitt filter to extract the edge gradient before computing the moments. We also performed this experiment without background removal, to assess the importance of the background. Normalization is applied to these features before they are fed into the classifiers, and we utilized the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance in the dataset. The features are defined as follows:
$$\text{Energy} = \sum_{i=1}^{N}\sum_{j=1}^{N} \mathrm{GLCM}(i,j)^2$$

$$\text{Correlation} = \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{(i-\mu)(j-\mu)\,\mathrm{GLCM}(i,j)}{\sigma^2}$$

$$\text{Dissimilarity} = \sum_{i=1}^{N}\sum_{j=1}^{N} |i-j|\,\mathrm{GLCM}(i,j)$$

$$\text{Homogeneity} = \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\mathrm{GLCM}(i,j)}{1+|i-j|}$$

$$\text{Contrast} = \sum_{i=1}^{N}\sum_{j=1}^{N} (i-j)^2\,\mathrm{GLCM}(i,j)$$

$$\text{Mean} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

$$\text{Standard deviation} = s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2}$$

$$\text{Variance} = s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2$$

$$\text{Mean absolute deviation} = \mathrm{MAD} = \frac{1}{n}\sum_{i=1}^{n}|x_i-\bar{x}|$$

$$\text{Mean squared deviation} = \mathrm{MSD} = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$$

$$\text{skew} = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^3}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right)^{3/2}}$$

$$\text{kurt} = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^4}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right)^{2}} - 3$$

$$\text{Entropy} = H = -\sum_{i=1}^{n} P(x_i)\log_2 P(x_i)$$

$$\text{Summation of LBP histogram} = \sum_{i=1}^{n} h_i$$

$$\text{Image moment} = m_{p,q} = \sum_{x}\sum_{y} x^p y^q \, I(x,y)$$

$$\text{Central moment} = \mu_{p,q} = \sum_{x}\sum_{y} (x-\bar{x})^p (y-\bar{y})^q \, I(x,y)$$

Prewitt filter kernels in the x and y directions:

$$\begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$$

$$\text{Edge gradient} = \sqrt{\left(\frac{\partial I}{\partial x}\right)^2 + \left(\frac{\partial I}{\partial y}\right)^2}$$

The seven Hu moments:

$$h_1 = \eta_{20} + \eta_{02}$$
$$h_2 = (\eta_{20}-\eta_{02})^2 + 4\eta_{11}^2$$
$$h_3 = (\eta_{30}-3\eta_{12})^2 + (3\eta_{21}-\eta_{03})^2$$
$$h_4 = (\eta_{30}+\eta_{12})^2 + (\eta_{21}+\eta_{03})^2$$
$$h_5 = (\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2 - 3(\eta_{21}+\eta_{03})^2\right] + (3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2 - (\eta_{21}+\eta_{03})^2\right]$$
$$h_6 = (\eta_{20}-\eta_{02})\left[(\eta_{30}+\eta_{12})^2 - (\eta_{21}+\eta_{03})^2\right] + 4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})$$
$$h_7 = (3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2 - 3(\eta_{21}+\eta_{03})^2\right] - (\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2 - (\eta_{21}+\eta_{03})^2\right]$$

where $\eta_{p,q}$ denotes the normalized central moment of order (p, q), computed from the central moments defined above.
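The sketch below illustrates how these features can be computed with OpenCV, scikit-image, and SciPy. It is an approximation of our pipeline, not a verbatim reproduction: the distances and angles match the text, but the variable names and the single LBP configuration (P = 8 points, radius 1) shown here are illustrative choices.

```python
import cv2
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

# Assumes a 224x224 uint8 grayscale image produced by the preprocessing step.
gray = cv2.imread("field_gray_224.png", cv2.IMREAD_GRAYSCALE)

# GLCM texture features at distances 1, 3, 5 and angles 0/45/90/135 degrees.
glcm = graycomatrix(gray, distances=[1, 3, 5],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
texture = [graycoprops(glcm, p).mean()
           for p in ("energy", "correlation", "dissimilarity",
                     "homogeneity", "contrast")]

# Summarized histogram of uniform LBP codes (10 bins for P=8, R=1).
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11))

# First-order statistics on the raw intensities (kurtosis is excess kurtosis,
# matching the "- 3" term in the definition above).
x = gray.ravel().astype(np.float64)
stats = [x.mean(), x.std(ddof=1), x.var(ddof=1),
         np.abs(x - x.mean()).mean(), skew(x), kurtosis(x)]

# Hu moments on the grayscale image, and again on the Prewitt edge gradient.
hu_gray = cv2.HuMoments(cv2.moments(gray)).ravel()
kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
gx = cv2.filter2D(gray.astype(np.float64), -1, kx)    # horizontal Prewitt
gy = cv2.filter2D(gray.astype(np.float64), -1, kx.T)  # vertical Prewitt
edges = np.sqrt(gx ** 2 + gy ** 2)
hu_edges = cv2.HuMoments(cv2.moments(edges.astype(np.float32))).ravel()

features = np.concatenate([texture, lbp_hist, stats, hu_gray, hu_edges])
```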


D. DEEP LEARNING FEATURES
Deep learning features are extracted from images through CNNs (Convolutional Neural Networks). The early layers of a CNN extract low-level features such as corners, textures, and edges, while deeper layers extract higher-level features such as shapes, objects, and semantic representations. To address class imbalance, we applied the synthetic minority oversampling technique (SMOTE). After balancing the classes, we used transfer learning and utilized ImageNet weights with several well-known CNN models for feature extraction. Fig 7 and 8 show the results of applying SMOTE on the CottonWeedID15 and early-crop-weed datasets, respectively.

FIGURE 7. Class imbalance removed from the CottonWeedID15 dataset using SMOTE.

FIGURE 8. Class imbalance removed from the early-crop-weed dataset using SMOTE.
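As a sketch of this transfer-learning setup, the snippet below uses an ImageNet-pretrained VGG16 (standing in for any of the backbones named above) as a frozen feature extractor and fits a random forest on the pooled features. The image batch and labels are random placeholders, not our datasets.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# ImageNet-pretrained backbone with its classification head removed; global
# average pooling turns each image into a fixed-length feature vector.
backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))

X_img = np.random.rand(32, 224, 224, 3).astype("float32")   # toy image batch
y = np.random.randint(0, 4, size=32)                        # toy class labels

X_feat = backbone.predict(
    tf.keras.applications.vgg16.preprocess_input(X_img * 255.0),
    verbose=0)                                              # (32, 512) features
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_feat, y)
```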
IV. EXPERIMENTS AND RESULTS
1) MANUAL FEATURES AND CLASSIFIERS
After extracting the manual features, we divided them into three splits: 65% for training, 20% for validation, and 15% for testing. These splits were used to train and evaluate our classifiers: SVM, random forest, and ANN. Among these classifiers, the ANN (Artificial Neural Network) showed superior performance compared to SVM and random forest on the CottonWeedID15 dataset, achieving a testing accuracy of 89.26%, while on the early-crop-weed dataset the SVM showed superior performance compared to the ANN and random forest, with the polynomial-kernel SVM achieving a testing accuracy of 99%. We utilized AutoKeras for the artificial neural network; AutoKeras tests 100 different architectures and selects the best one based on validation accuracy. Fig 13 shows the validation and training loss.
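A sketch of the AutoKeras search described above, treating the manual feature vectors as structured input. The data here are placeholders, and the epoch budget is our assumption; only the 100-trial search size comes from the text.

```python
import numpy as np
import autokeras as ak

X_train = np.random.rand(200, 55).astype("float32")   # placeholder feature vectors
y_train = np.random.randint(0, 4, size=200)           # placeholder labels

# StructuredDataClassifier explores candidate networks (max_trials=100, as in
# the text) and keeps the one with the best validation accuracy.
clf = ak.StructuredDataClassifier(max_trials=100, overwrite=True)
clf.fit(X_train, y_train, validation_split=0.2, epochs=50)
best_model = clf.export_model()   # the winning Keras architecture
```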
The SVM model's optimal parameters were determined using grid search. After evaluating various options for C, gamma, and degree, the grid search identified the best combination as C = 0.1, degree = 3, gamma = 0.4, and kernel = 'poly'. These parameters were chosen from a range of possibilities: C values included 0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 10, and 100; gamma values included 0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, and 0.9; and degree values included 1, 2, 3, 4, 5, and 6.
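A sketch of this grid search with scikit-learn, using the parameter ranges quoted above. The kernel list is our assumption (the text states only that a polynomial kernel won), and the training data are placeholders; note also that the reported best gamma of 0.4 does not appear in the listed gamma grid, which we reproduce verbatim.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 55))      # placeholder manual-feature vectors
y_train = rng.integers(0, 4, size=200)    # placeholder class labels

param_grid = {
    "kernel": ["poly", "rbf", "linear"],  # assumption: kernels also searched
    "C": [0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 10, 100],
    "gamma": [0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9],
    "degree": [1, 2, 3, 4, 5, 6],         # used only by the polynomial kernel
}
search = GridSearchCV(SVC(), param_grid, scoring="accuracy", cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```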

For the random forest, we started training with 2 trees and incrementally added 1 tree until reaching a total of 300 trees; we then selected the number of trees that achieved the highest validation score and trained the random forest with that specific number. Table 1 shows the experiments and results on the early-crop-weed dataset of manual features with ANN classifiers. Table 2 shows the experiments and results on the CottonWeedID15 dataset of manual features with ANN classifiers. Fig 9 shows the comparative analysis of classifiers on both datasets. Table 3 shows the experiments and results on the early-crop-weed dataset of manual features with SVM and random forest classifiers.

TABLE 1. Experiment and results on the early-crop-weed dataset of manual features with artificial neural network classifiers.

TABLE 2. Experiment and results on the CottonWeedID15 dataset of manual features with ANN.

FIGURE 9. Experiment and results on the CottonWeedID15 dataset and early-crop-weed dataset of manual features.
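The tree-count sweep can be implemented efficiently with scikit-learn's warm_start option, which keeps already-grown trees when the forest is enlarged. This is a sketch under placeholder data; the paper does not show its exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 55)), rng.integers(0, 4, 200)
X_val, y_val = rng.normal(size=(60, 55)), rng.integers(0, 4, 60)

# Grow the forest from 2 to 300 trees in steps of 1, tracking validation score.
rf = RandomForestClassifier(n_estimators=2, warm_start=True, random_state=0)
best_n, best_acc = 2, 0.0
for n in range(2, 301):
    rf.set_params(n_estimators=n)
    rf.fit(X_train, y_train)            # warm_start: only new trees are fitted
    acc = rf.score(X_val, y_val)
    if acc > best_acc:
        best_n, best_acc = n, acc

# Retrain a fresh forest at the best size found.
final_rf = RandomForestClassifier(n_estimators=best_n, random_state=0)
final_rf.fit(X_train, y_train)
```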

2) DEEP LEARNING FEATURES AND CLASSIFIER
Various CNN architectures, including VGG16, VGG19, Xception, DenseNet-121, DenseNet-169, DenseNet-201, and ConvNeXt, were utilized for automated feature extraction. These architectures were initialized with pre-trained weights from the ImageNet dataset. Following the extraction of automated features, a random forest algorithm was employed for classification. Notably, the features obtained from the ConvNeXt architecture outperformed both the manual features and the features extracted from other CNNs when used with random forest. On the early-crop-weed dataset, the random forest model with ConvNeXt achieved a testing accuracy of 98%, while on the CottonWeedID15 dataset the accuracy reached 89%. In Fig 10 and 11, a comparative analysis is presented, examining the performance of automated features across various architectures when combined with the random forest. Tables 4 and 5 show the experiments and results on the early-crop-weed and segmented early-crop-weed datasets of deep learning features with random forest classifiers. Tables 6 and 7 show the experiments and results on the CottonWeedID15 and segmented CottonWeedID15 datasets of deep learning features with random forest classifiers. For the random forest on the CottonWeedID15 dataset, we started training with 30 trees and incrementally added 30 trees until reaching a total of 300, then selected the number of trees that achieved the highest validation score and trained the random forest with that number. On the early-crop-weed dataset, we started with 10 trees and incrementally added 10 trees until reaching 300, again selecting the number of trees with the highest validation score.

FIGURE 10. Analysis of CottonWeedID15 dataset results.

FIGURE 11. Analysis of early-crop-weed dataset results.

3) YOLO V8
YOLOv8, belonging to the You Only Look Once (YOLO) family, is a real-time object detection algorithm that has demonstrated substantial advancements compared to its previous versions. YOLOv8 has a backbone consisting of convolutional layers that extract features from the input image. YOLOv8 also utilizes an SPPF layer and convolutional layers to process features at different scales, while upsample layers enhance the feature resolutions. To enhance detection accuracy, YOLOv8 uses the C2f module for the integration of contextual information and features. The detection module utilizes convolutional layers and linear layers to predict bounding boxes and object classes. We utilized YOLOv8-M for object detection. We initialized the learning rate to 0.01 and used the stochastic gradient descent optimizer with a batch size of 1. Additionally, we initialized the weight decay to 0.0005, to prevent the model from becoming overly complex, and the momentum to 0.9. After 100 epochs, YOLOv8-M achieved an overall mean average precision of 89%. Table 8 shows the analysis of the YOLOv8 results. Fig 12 shows the YOLOv8 training and validation loss. Fig 14 shows the YOLOv8 confusion matrix on the CottonWeedID15 dataset. Fig 15 shows the precision-recall curve. Fig 16 shows the detection of weeds.

FIGURE 12. YOLOv8 training and validation loss.
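A sketch of this training run with the Ultralytics API, using the hyperparameters quoted above; "weeds.yaml" is a placeholder dataset configuration pointing at the annotated CottonWeedID15 images and class names, and the input image size is our assumption.

```python
from ultralytics import YOLO

# YOLOv8-M with pretrained weights; hyperparameters follow the text:
# lr0=0.01, SGD, batch size 1, weight decay 0.0005, momentum 0.9, 100 epochs.
model = YOLO("yolov8m.pt")
model.train(data="weeds.yaml", epochs=100, batch=1, optimizer="SGD",
            lr0=0.01, momentum=0.9, weight_decay=0.0005, imgsz=640)
metrics = model.val()   # reports mAP on the validation split
```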

TABLE 3. Experiment and results on the early-crop-weed dataset of manual features with SVM and random forest classifiers.

TABLE 4. Experiment and results on the early-crop-weed dataset of deep learning features with random forest classifiers.

TABLE 5. Experiment and results on the segmented early-crop-weed dataset of deep learning features with random forest classifiers.

TABLE 6. Experiment and results on the CottonWeedID15 dataset of deep learning features with random forest classifiers.

TABLE 7. Experiment and results on the segmented CottonWeedID15 dataset of deep learning features with random forest classifiers.

TABLE 8. Analysis of YOLOv8 results.

V. DISCUSSION
Do deep learning features yield more accurate results compared to hand-extracted features? Yes, deep learning features indeed produce more accurate results than hand-extracted features, as demonstrated by our experiments. Despite the utilization of Prewitt filters for edge and fine-detail enhancement, ConvNeXt still outperformed the manual features in terms of accuracy. However, deep learning requires high computation compared to using manual features with classifiers.

Is the synthetic minority oversampling technique (SMOTE) effective for weed classification problems? Yes, SMOTE proves to be effective for weed classification problems, especially when dealing with unbalanced datasets. The use of SMOTE addressed the class imbalance inherent in weed detection datasets. By generating synthetic samples for the minority class, the training set became balanced, preventing the model from being biased toward the majority class. This led to improved generalization and better performance on previously underrepresented weed instances. With the implementation of SMOTE, we were able to achieve 89% accuracy on the CottonWeedID15 dataset and 99% accuracy on the early-crop-weed dataset. Both of these datasets were unbalanced, and SMOTE played a crucial role in attaining such high accuracy.

Does YOLOv8 perform well in agricultural problems? Yes, YOLOv8 is an extremely powerful state-of-the-art object detection model. The model's ability to handle complex scenes and diverse weed types is a significant advantage, showcasing its suitability for agricultural applications. The implementation of YOLOv8 for weed detection yielded promising results, achieving an overall mean average precision (mAP) of 89%.

FIGURE 13. Training and validation loss of the artificial neural network.


FIGURE 14. YOLOv8 confusion matrix on the CottonWeedID15 dataset.

FIGURE 15. YOLOv8 precision-recall curve on the CottonWeedID15 dataset.

FIGURE 16. Detection of weed using YOLOv8.

IoT devices find application in agriculture, yielding positive outcomes. For the secure operation of IoT devices in agriculture, an Intrusion Detection System (IDS) is essential [30]; the authors explored Machine Learning (ML) and Deep Learning (DL) techniques to enhance cybersecurity and prevent potential threats. Optimization algorithms enhance the learning capabilities of mathematical models [31]. In their work, the authors proposed an improved Chicken Swarm Intelligence (CSI) to optimize Support Vector Machine (SVM) learning parameters and compared it with Particle Swarm Optimization (PSO) and Bat algorithms. As observed, deep learning models are producing good results, but they still need improvement. To enhance deep learning model performance, [32] introduced marginal deep architectures, incorporating marginal Fisher analysis and stacked feature learning modules; their results show improvement, particularly in classification and speech recognition problems.

Agricultural countries, with their predominantly agrarian economies, play a crucial role in the global agricultural landscape. The application of advanced technologies, such as deep learning, in the agricultural sector can have profound implications for improving productivity, sustainability, and crop yield. The integration of deep learning models, such as YOLOv8, into agricultural practices holds the potential to address challenges related to crop monitoring, pest control, and resource optimization.

To further advance weed detection in crops, future research could focus on refining the model's robustness to environmental factors and expanding the dataset to encompass a broader range of agricultural scenarios. Investigating transfer learning techniques and exploring the use of multi-sensor data for more accurate weed identification are potential avenues for improvement. Current research predominantly relies on RGB images for weed detection; exploring the integration of multispectral data, such as infrared or hyperspectral imagery, could provide additional insights into weed characteristics and improve the model's accuracy, particularly in scenarios where visual cues alone may be insufficient.

VI. CONCLUSION
Agriculture is facing weed challenges, and automated weed control systems can assist farmers in crop production while also lowering production costs. A large image dataset is required in future work to meet the challenges of real-time operation in agriculture. Moreover, deep convolutional generative adversarial networks should be utilized for crop and weed augmentation; this approach allows the agricultural dataset to be enhanced. Deep learning architectures produced great results, but room for improvement still exists. Deep learning algorithms are becoming a new step toward improving crop yield and getting rid of weeds more efficiently. People in agricultural countries continue to use traditional approaches due to a lack of awareness of deep learning technologies; however, they should be directed toward modern technology to improve agriculture and crop yield. The key contributions of this research lie in advocating for the utilization of advanced technologies, particularly deep learning, to overcome traditional agricultural constraints. This shift has the potential to significantly impact the agricultural community by fostering increased awareness and adoption of modern techniques, thereby elevating agricultural practices and crop yields. The study aims to catalyze a transformative impact within the community of practice and the relevant industry by promoting the integration of state-of-the-art technologies for sustainable and efficient weed management in agriculture.


DECLARATIONS
Data Availability: The dataset used in this research [8] is freely available.
Conflict of Interest: The authors declare that there is no conflict of interest.
Code Availability: Code can be provided on request.
Authors' Contributions: Syed Mujtaba Hassan Rizvi worked on algorithm implementation and wrote the initial draft of the article. Asma Naseer floated the idea, supervised the implementation, contributed to the article write-up, and helped in implementation. Shafiq Ur Rehman verified and analyzed the results and contributed to the article write-up. Sheeraz Akram improved the algorithm and the article write-up. Volker Gruhn funded the project and supervised it.

ACKNOWLEDGMENT
This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23042).

REFERENCES
[1] P. Radoglou-Grammatikis, P. Sarigiannidis, T. Lagkas, and I. Moscholios, ''A compilation of UAV applications for precision agriculture,'' Comput. Netw., vol. 172, May 2020, Art. no. 107148.
[2] R. Lal, ''Soil structure and sustainability,'' J. Sustain. Agricult., vol. 1, no. 4, pp. 67–92, 1991.
[3] T. Ashraf and Y. N. Khan, ''Weed density classification in rice crop using computer vision,'' Comput. Electron. Agricult., vol. 175, Aug. 2020, Art. no. 105590.
[4] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, ''Microsoft COCO: Common objects in context,'' in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: Springer, 2014, pp. 740–755.
[5] S. K. Seelan, S. Laguette, G. M. Casady, and G. A. Seielstad, ''Remote sensing applications for precision agriculture: A learning community approach,'' Remote Sens. Environ., vol. 88, nos. 1–2, pp. 157–169, Nov. 2003.
[6] B. Espejo-Garcia, N. Mylonas, L. Athanasakos, S. Fountas, and I. Vasilakoglou, ''Towards weeds identification assistance through transfer learning,'' Comput. Electron. Agricult., vol. 171, Apr. 2020, Art. no. 105306.
[7] T. M. Giselsson, R. N. Jørgensen, P. K. Jensen, M. Dyrmann, and H. S. Midtiby, ''A public image database for benchmark of plant seedling classification algorithms,'' 2017, arXiv:1711.05458.
[8] S. Mujtaba. (2023). Annotated Weed Images. [Online]. Available: https://www.kaggle.com/datasets/mujtabatszh/roi-yolov8-cwid15-by-syed-mujtaba-hassan-rizvi
[9] B. Espejo-Garcia, N. Mylonas, L. Athanasakos, E. Vali, and S. Fountas, ''Combining generative adversarial networks and agricultural transfer learning for weeds identification,'' Biosyst. Eng., vol. 204, pp. 79–89, Jan. 2021.
[10] A. Wang, W. Zhang, and X. Wei, ''A review on weed detection using ground-based machine vision and image processing techniques,'' Comput. Electron. Agricult., vol. 158, pp. 226–240, Mar. 2019.
[11] J. Yu, S. M. Sharpe, A. W. Schumann, and N. S. Boyd, ''Deep learning for image-based weed detection in turfgrass,'' Eur. J. Agronomy, vol. 104, pp. 78–84, Mar. 2019.
[12] A. S. M. M. Hasan, F. Sohel, D. Diepeveen, H. Laga, and M. G. K. Jones, ''A survey of deep learning techniques for weed detection from images,'' Comput. Electron. Agricult., vol. 184, May 2021, Art. no. 106067.
[13] A. Bakhshipour and A. Jafari, ''Evaluation of support vector machine and artificial neural networks in weed detection using shape features,'' Comput. Electron. Agricult., vol. 145, pp. 153–160, Feb. 2018.
[14] A. Kaya, A. S. Keceli, C. Catal, H. Y. Yalic, H. Temucin, and B. Tekinerdogan, ''Analysis of transfer learning for deep neural network based plant classification models,'' Comput. Electron. Agricult., vol. 158, pp. 20–29, Mar. 2019.
[15] J. Zhuang, X. Li, M. Bagavathiannan, X. Jin, J. Yang, W. Meng, T. Li, L. Li, Y. Wang, Y. Chen, and J. Yu, ''Evaluation of different deep convolutional neural networks for detection of broadleaf weed seedlings in wheat,'' Pest Manage. Sci., vol. 78, no. 2, pp. 521–529, Feb. 2022.
[16] A. Ahmad, D. Saraswat, V. Aggarwal, A. Etienne, and B. Hancock, ''Performance of deep learning models for classifying and detecting common weeds in corn and soybean production systems,'' Comput. Electron. Agricult., vol. 184, May 2021, Art. no. 106081.
[17] D. Chen, Y. Lu, Z. Li, and S. Young, ''Performance evaluation of deep transfer learning on multi-class identification of common weed species in cotton production systems,'' Comput. Electron. Agricult., vol. 198, Jul. 2022, Art. no. 107091.
[18] Q. Wang, M. Cheng, S. Huang, Z. Cai, J. Zhang, and H. Yuan, ''A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed Solanum rostratum Dunal seedlings,'' Comput. Electron. Agricult., vol. 199, Aug. 2022, Art. no. 107194.
[19] A. Subeesh, S. Bhole, K. Singh, N. S. Chandel, Y. A. Rajwade, K. V. R. Rao, S. P. Kumar, and D. Jat, ''Deep convolutional neural network models for weed detection in polyhouse grown bell peppers,'' Artif. Intell. Agricult., vol. 6, pp. 47–54, Jan. 2022.
[20] N. Razfar, J. True, R. Bassiouny, V. Venkatesh, and R. Kashef, ''Weed detection in soybean crops using custom lightweight deep learning models,'' J. Agricult. Food Res., vol. 8, Jun. 2022, Art. no. 100308.
[21] J. Yang, M. Bagavathiannan, Y. Wang, Y. Chen, and J. Yu, ''A comparative evaluation of convolutional neural networks, training image sizes, and deep learning optimizers for weed detection in alfalfa,'' Weed Technol., vol. 36, no. 4, pp. 512–522, Aug. 2022.
[22] P. J. Hennessy, T. J. Esau, A. W. Schumann, A. A. Farooque, Q. U. Zaman, and S. N. White, ''Meta deep learning using minimal training images for weed classification in wild blueberry,'' Tech. Rep., 2022.
[23] X. Jin, M. Bagavathiannan, A. Maity, Y. Chen, and J. Yu, ''Deep learning for detecting herbicide weed control spectrum in turfgrass,'' Plant Methods, vol. 18, no. 1, pp. 1–11, Dec. 2022.
[24] J.-L. Zhang, W.-H. Su, H.-Y. Zhang, and Y. Peng, ''SE-YOLOv5x: An optimized model based on transfer learning and visual attention mechanism for identifying and localizing weeds and vegetables,'' Agronomy, vol. 12, no. 9, p. 2061, Aug. 2022.
[25] R. Reedha, E. Dericquebourg, R. Canals, and A. Hafiane, ''Transformer neural network for weed and crop classification of high resolution UAV images,'' Remote Sens., vol. 14, no. 3, p. 592, Jan. 2022.
[26] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, ''Generative adversarial nets,'' Stat, vol. 1050, p. 10, Jan. 2014.
[27] Y. Lu, D. Chen, E. Olaniyi, and Y. Huang, ''Generative adversarial networks (GANs) for image augmentation in agriculture: A systematic review,'' Comput. Electron. Agricult., vol. 200, Sep. 2022, Art. no. 107208.
[28] Y. Zhang, C. Koparan, M. R. Ahmed, K. Howatt, and X. Sun, ''Weed and crop species classification using computer vision and deep learning technologies in greenhouse conditions,'' J. Agricult. Food Res., vol. 9, Sep. 2022, Art. no. 100325.
[29] M.-K. Hu, ''Visual pattern recognition by moment invariants,'' IEEE Trans. Inf. Theory, vol. IT-8, no. 2, pp. 179–187, Feb. 1962.
[30] A. Halbouni, T. S. Gunawan, M. H. Habaebi, M. Halbouni, M. Kartiwi, and R. Ahmad, ''Machine learning and deep learning approaches for CyberSecurity: A review,'' IEEE Access, vol. 10, pp. 19572–19585, 2022.
[31] J. Liu, J. Feng, and X. Gao, ''Fault diagnosis of rod pumping wells based on support vector machine optimized by improved chicken swarm optimization,'' IEEE Access, vol. 7, pp. 171598–171608, 2019.
[32] G. Zhong, K. Zhang, H. Wei, Y. Zheng, and J. Dong, ''Marginal deep architecture: Stacking feature learning modules to build deep learning models,'' IEEE Access, vol. 7, pp. 30220–30233, 2019.

SYED MUJTABA HASSAN RIZVI received the master's degree in computer science from the National University of Computer and Emerging Sciences (NUCES), Lahore, Pakistan, in 2023. He is currently an Instructor with FAST NUCES, where he teaches artificial intelligence to B.S. students. He is passionate about teaching and mentoring students and is always looking for new ways to make learning more engaging and effective. He is a highly motivated and talented young researcher with a bright future ahead of him, committed to using his skills and knowledge to make a positive impact on the world. He is also looking for fully funded scholarships to pursue the Ph.D. degree in computer science and is eager to contribute to the field of artificial intelligence. His research interests include deep learning and image processing.


ASMA NASEER received the M.S. and Ph.D. degrees in computer science from the National University of Computer and Emerging Sciences (NUCES), Lahore, Pakistan, in 2008 and 2019, respectively. She is currently an Associate Professor with NUCES. Prior to joining NUCES, she was a Faculty Member with the Department of Computer Science, University of Management and Technology (UMT), from 2010 to 2021. She is a dedicated Artificial Intelligence (AI) and Machine Learning (ML) expert, with almost 15 years of experience. She has been awarded full scholarships from the Higher Education Commission (HEC), Pakistan, and other local and foreign bodies for her postgraduate and Ph.D. studies.

SHAFIQ UR REHMAN received the M.S. degree in computer science from the Dresden University of Technology, Dresden, Germany, and the Ph.D. degree in computer science from the Department of Software Engineering, Universität Duisburg-Essen, Germany, in 2020. He was a Consultant (Requirements Engineer) with well-renowned international organizations in Germany. He is currently an Assistant Professor with the College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia. He has published several research papers in high-ranked international conferences and ISI-indexed journals. He is involved in different internationally funded projects in the field of cyber-physical systems and cybersecurity. His research interests include AI, cyber-physical systems, cybersecurity, and requirements engineering.

SHEERAZ AKRAM received the M.Sc. degree in computer science from the Lahore University of Management Sciences (LUMS), Lahore, Pakistan, and the Ph.D. degree in software engineering from the National University of Sciences and Technology (NUST), Islamabad, Pakistan. He is currently with the Department of Information Systems, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia. He is also associated with the Department of Computer Science, Faculty of Computer Science and Information Technology, Superior University, Lahore. He is a Coordinator and a Senior Member of the Intelligent Data Visual Computing Research (IDVCR) group. He completed postdoctoral research training with the University of Pittsburgh, USA, where he worked on a project. He has 17 years of working experience at universities, including three years of international research experience. His research interests include data science, medical image processing, artificial intelligence in data science, machine learning, deep learning, computer vision, and digital image processing.

VOLKER GRUHN received the M.S. and Ph.D. degrees in computer science from the Technical University of Dortmund, Germany, in 1987 and 1991, respectively. He was then with the Fraunhofer Institute for Software and Systems Engineering. In 1997, he co-founded adesso AG, where he is the Supervisory Board Chairperson. Since 2010, he has been the Chair of Software Engineering with Universität Duisburg-Essen, Germany. He has published more than 300 research papers in ISI-indexed journals and high-ranked international conferences and has supervised several Ph.D. students. His research interests include industrial software engineering and the effects of digital transformation on enterprises.
