Enhanced corn seed disease classification: leveraging MobileNetV2 with feature augmentation and transfer learning

Mohannad Alkanan and Yonis Gulzar*
Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa, Saudi Arabia

REVIEWED BY
Neha Gupta, Amity University, India
Surjeet Dalal, Amity University Gurgaon, India

*CORRESPONDENCE
Yonis Gulzar, [email protected]

RECEIVED 11 October 2023
ACCEPTED 07 December 2023
PUBLISHED 03 January 2024

CITATION
Alkanan M and Gulzar Y (2024) Enhanced corn seed disease classification: leveraging MobileNetV2 with feature augmentation and transfer learning. Front. Appl. Math. Stat. 9:1320177. doi: 10.3389/fams.2023.1320177

COPYRIGHT
© 2024 Alkanan and Gulzar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

In the era of advancing artificial intelligence (AI), its application in agriculture has become increasingly pivotal. This study explores the integration of AI for the discriminative classification of corn diseases, addressing the need for efficient agricultural practices. Leveraging a comprehensive dataset, the study encompasses 21,662 images categorized into four classes: Broken, Discolored, Silk cut, and Pure. The proposed model, an enhanced iteration of MobileNetV2, strategically incorporates additional layers (Average Pooling, Flatten, Dense, Dropout, and softmax), augmenting its feature extraction capabilities. Model tuning techniques, including data augmentation, adaptive learning rate, model checkpointing, dropout, and transfer learning, fortify the model's efficiency. Results showcase the proposed model's exceptional performance, achieving an accuracy of ∼96% across the four classes. Precision, recall, and F1-score metrics underscore the model's proficiency, with precision values ranging from 0.949 to 0.975 and recall values from 0.957 to 0.963. In a comparative analysis with state-of-the-art (SOTA) models, the proposed model outshines its counterparts in terms of precision, recall, F1-score, and accuracy. Notably, MobileNetV2, the base model for the proposed architecture, achieves the highest values, affirming its superiority in accurately classifying instances within the corn disease dataset. This study not only contributes to the growing body of AI applications in agriculture but also presents a novel and effective model for corn disease classification. The proposed model's robust performance, combined with its competitive edge against SOTA models, positions it as a promising solution for advancing precision agriculture and crop management.

KEYWORDS
deep learning, corn, precision agriculture, image classification, corn seed, corn seed disease
1 Introduction
Evaluating the quality of agricultural products has long been a significant concern for
various countries. The quality assessment of these products holds immense importance
as it directly impacts various aspects of the agricultural industry and food supply chain
[1]. In recent years, the emergence of precision agriculture has brought about stricter
requirements and advanced techniques for assessing the quality of agricultural products.
Precision agriculture utilizes innovative technologies such as remote sensing, drones, image
classification and data analytics to gather detailed information about crops and their growing
conditions. These advancements have enabled more precise and accurate evaluation of
agricultural product quality [2].
The assessment of agricultural product quality is crucial for several reasons. Firstly, it ensures accurate identification and effective control of seed pests and diseases. By implementing stringent quality assessment practices, farmers and agricultural professionals can identify potential issues early on and take the necessary measures to mitigate the spread of pests and diseases, safeguarding crop health and productivity [2, 3].

Secondly, the quality assessment of agricultural products plays a vital role in grain storage and distribution management. Precise evaluation helps in preserving seed quality, ensuring that only high-quality seeds are stored and distributed. This contributes to maintaining crop diversity and promoting better yields in subsequent seasons. Furthermore, evaluating agricultural product quality is essential in reducing food waste. Accurate assessment helps identify and separate products that meet the desired quality standards, minimizing waste throughout the supply chain. This not only benefits economic efficiency but also addresses environmental concerns associated with food waste.

Corn, also known as maize (Zea mays), is one of the most widely cultivated cereal crops worldwide. It is a staple food for many populations and plays a vital role in various industries, including agriculture, animal feed, and biofuel production. The United States Department of Agriculture (USDA) has estimated that worldwide corn production for the year 2022/23 will be 1,161.86 million metric tons, whereas last year's production was 1,216.87 million metric tons [4]; this implies a decrease of around 4.52% in worldwide corn production this year. Fusarium graminearum, Fusarium cepacia, Fusarium proliferatum, and Fusarium subglutinans are well-recognized pathogens that commonly contribute to the development of root, stalk, and cob rot in maize [5]. The presence of diseased seeds serves as a significant source of initial infestation, leading to plant diseases and facilitating the long-distance dissemination of such pathogens. This, in turn, adversely affects the germination rate of seeds [6]. Moreover, infected seeds pose challenges for storage, as they can contaminate other seeds, resulting in mold formation and substantial food losses. Furthermore, the compromised quality of these seeds renders them unsuitable for consumption [7].

Traditional approaches for assessing grain quality and safety often involve laborious microbial experiments, such as spore counting and enzyme-linked immunosorbent assays. While these methods exhibit high accuracy in disease detection, they are time-consuming, labor-intensive, and destructive [8]. Phenotypic seed detection, as a non-destructive testing method, serves as a fundamental approach for evaluating seed quality. However, manual testing methods are subject to subjective factors, resulting in variations in test results among different individuals and yielding low detection efficiency, thereby increasing the likelihood of misjudgment [9, 10]. Consequently, quality inspectors urgently require a rapid and objective methodology to detect diseases in corn seeds.

Artificial Intelligence (AI), particularly deep learning, has demonstrated remarkable capabilities in extracting features efficiently and accurately from complex data [11]. This transformative technology has found applications across various domains [12], including healthcare [13–18], education [19, 20], e-commerce [21], and agriculture [22, 23], among other fields [24, 25]. Deep learning holds great promise in revolutionizing disease identification and management in corn crops. With its ability to efficiently analyze large volumes of data and extract intricate patterns, deep learning models can assist in the early detection of diseases, enabling timely intervention and improved crop health.

From the literature, it is evident that many researchers have incorporated deep learning in agriculture. Tian et al. [26] proposed a deep learning model using a wavelet threshold method. The proposed model was trained on six different classes of corn diseases and achieved 96.8% accuracy. Mishra et al. [27] proposed a CNN model to classify corn leaf diseases and achieved 88.46% accuracy. A deep learning model based on VGG16 was proposed to classify 14 different types of seeds [28]. The modified CNN model incorporated several model-tuning techniques, such as transfer learning, model checkpointing, and data augmentation; with the help of these techniques, the proposed model achieved 99% accuracy. Yu et al. [29] performed a comparative study to examine the classification of three common corn diseases using different CNN models. They tested VGG-16, ResNet18, Inception v3, and VGG-19 on a dataset containing three classes; these models achieved accuracies of 84.42%, 83.75%, 83.05%, and 82.63%, respectively. Ahmad et al. [30] proposed a deep learning model for corn disease identification. They created a dataset using Unmanned Aerial System (UAS) imagery, collecting 59,000 images over three different corn fields; the dataset contains three common diseases found in corn. They claim that the model achieved 98.85% accuracy. In another study [31], the authors conducted a comparative analysis in which they compared state-of-the-art (SOTA) models on a dataset they created. VGG16, ResNet50, InceptionV3, DenseNet169, and Xception were trained to identify the corn diseases. They claim that DenseNet169 achieved 100% accuracy and outperformed all other SOTA models. Albarrak et al. [32] proposed a modified deep learning model based on MobileNetV2 for classifying eight different types of date fruit. They incorporated transfer learning and added new layers to the base model to improve the accuracy; the modified model achieved 99% accuracy in identifying the different types of date fruit. Fraiwan et al. [33] proposed a deep learning model for the classification of three common corn leaf diseases. They incorporated transfer learning without using any other feature extraction technique in their experiments and achieved 98.6% accuracy in identifying corn leaf diseases. In other studies [34–36], the authors classified different types of fruit using deep learning models. In [34], the author classified forty different types of fruit with a MobileNetV2-based model and achieved 99% accuracy, whereas in [35] the authors trained a YOLO-based model for classifying oil palm fruit and achieved 98.7% accuracy.

Masood et al. [37] proposed MaizeNet, a deep learning model for classifying maize leaf diseases. MaizeNet is based on ResNet50 and is trained on a public dataset called Corn Disease. The proposed model achieved 97.89% accuracy with an mAP value of 0.94. Ahmad et al. [38] conducted a study in which
they trained five SOTA models (InceptionV3, ResNet50, VGG16, DenseNet169, and Xception) on five datasets containing images of corn diseases. They claim that DenseNet169 achieved the highest accuracy, 81.60%, among the models across all five datasets. Hatem et al. [39] used a dataset found on Kaggle containing three common diseases found in corn. They used pretrained models such as GoogleNet, AlexNet, ResNet50, and VGG16 for their experiments and claim that these models achieved 98.57%, 98.81%, 99.05%, and 99.36% accuracy, respectively. Divyanth et al. [40] conducted a study in which they classified different types of corn diseases. Before classification, they used three models, SegNet, UNet, and DeepLabV3+, for segmentation and found that UNet performed best at this stage. Based on that, they developed a two-stage deep learning model to classify common corn diseases. Rajeena et al. [41] proposed a modified version of the EfficientNet model for classifying corn leaf diseases. The authors claim that they achieved 98.85% accuracy during training.

This study presents an innovative approach to corn seed disease classification in precision agriculture, leveraging advanced deep learning techniques. The primary focus is on optimizing the MobileNetV2 architecture for enhanced accuracy in identifying distinct corn diseases, addressing the challenges of limited dataset size and class imbalances through strategic model tuning. The contributions of the study are as follows:

• Tailored MobileNetV2 architecture: the study introduces a modified MobileNetV2 architecture, incorporating additional layers such as Average Pooling, Flatten, Dense, Dropout, and softmax. This tailored design optimally captures intricate features relevant to corn diseases, enhancing the model's discriminative capabilities.
• Effective data augmentation: to overcome data limitations, the research employs data augmentation techniques, generating diverse images from the original dataset. This process significantly expands the dataset, contributing to improved model training and robustness.
• Strategic model tuning techniques: the proposed work implements adaptive learning rate, model checkpointing, dropout, and transfer learning to fine-tune the model. These techniques collectively contribute to preventing overfitting, expediting training, and enhancing the model's adaptability to diverse patterns within the data.
• Comprehensive performance analysis: the study provides a detailed analysis of the proposed model's performance, including accuracy, precision, recall, and F1-score across the various corn disease classes. The results showcase the model's efficiency in accurate classification and its ability to generalize well to unseen data.
• Comparison with state-of-the-art models: the proposed model's performance is benchmarked against state-of-the-art (SOTA) models and existing studies in the literature. The comparative analysis demonstrates the superiority of the proposed MobileNetV2-based model, emphasizing its competitiveness and efficacy in the field of agricultural disease classification.

2 Materials and methods

2.1 Dataset description

The dataset used in this study is the publicly available Corn Seeds Dataset [42] provided by a laboratory in Hyderabad, India. This dataset encompasses a collection of 17,801 images of corn seeds, which are classified into four distinct categories: pure, broken, discolored, and silkcut. Among the entire dataset, ∼40.8% of the seeds are classified as healthy, while the remaining 59.2% are categorized as diseased seeds. Further breakdown of the diseased seeds reveals that 32% of them are broken, 17.4% are discolored, and 9.8% are silkcut. The Corn Seeds Dataset [42] serves as a valuable resource for researchers and practitioners in the field of corn disease identification. It provides a diverse set of seed images, encompassing both healthy and diseased samples, enabling the development and evaluation of deep learning models specifically tailored for corn disease classification. The inclusion of various disease types, such as broken, discolored, and silkcut seeds, ensures the dataset's representation of real-world scenarios and enhances its utility in training accurate and robust classification models. Furthermore, the distribution of healthy and diseased seeds within the dataset reflects the prevalence of these conditions. Figure 1 presents samples from the corn dataset. Upon scrutinizing the final dataset, the original training set is composed of 6,972 images representing the pure class, 5,489 images for the broken class, 2,748 images allocated to the discolored class, and 1,569 images assigned to the Silkcut class. Acknowledging the inherent imbalance in the dataset, a data augmentation approach is introduced, detailed in the subsequent section.

2.2 Model selection

In the realm of image processing, CNNs have garnered increased attention due to their substantial economic potential and consistently high accuracy. Recognized CNN architectures, such as MobileNetV2 [43], EfficientNetV2S [44], VGG19 [45], and ResNet50 [46], enjoy widespread popularity in image processing and classification. The convolution operations play a pivotal role in any computer vision task, albeit contributing to heightened processing times and costs in larger, deeper networks such as EfficientNetV2S, VGG19, and ResNet. In contrast, MobileNetV2 distinguishes itself through an inverse residual structure and linear bottleneck configuration, resulting in reduced convolution calculations. Its preference over other architectures is attributed to its simplicity and memory-efficient characteristics. Table 1 outlines the precision, recall, and F1-score of state-of-the-art (SOTA) models such as AlexNet, VGG16, InceptionV3, ResNet, and MobileNetV2. It is crucial to emphasize that all models underwent training on the corn dataset without the utilization of any pre-processing techniques. The classification layer was the sole modification, adjusted based on the number of classes within the dataset.
FIGURE 1
Samples of dataset (A) broken, (B) discolored, (C) silkcut, and (D) pure.
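Section 2.1 describes a four-class image dataset. As a minimal sketch, not taken from the paper, the images could be loaded with Keras assuming a hypothetical class-per-folder layout such as corn_seeds/train/{Broken, Discolored, Pure, Silkcut}; the image size and batch size below are likewise assumptions.

import tensorflow as tf

IMG_SIZE = (224, 224)   # MobileNetV2's default input resolution (assumption)
BATCH_SIZE = 32         # assumption; not stated in this excerpt

# Hypothetical directory layout: corn_seeds/train/<class name>/<image files>
train_ds = tf.keras.utils.image_dataset_from_directory(
    "corn_seeds/train",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",   # one-hot labels for a 4-way softmax
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "corn_seeds/val",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)
print(train_ds.class_names)     # alphabetical: ['Broken', 'Discolored', 'Pure', 'Silkcut']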
TABLE 1 Precision, recall, and F1-score of the SOTA models trained on the corn dataset.

Models            Precision    Recall    F1-score
EfficientNetV2S   0.66         0.64      0.65
VGG19             0.64         0.65      0.65
ResNet50          0.67         0.66      0.67
MobileNetV2       0.74         0.76      0.74

In examining the performance metrics of SOTA models on the Corn Dataset, it is evident that MobileNetV2 emerges as the standout performer, achieving the highest accuracy among the models. With a precision of 0.74, a recall of 0.76, and an F1-score of 0.74, MobileNetV2 showcases its exceptional ability to accurately discern and classify features within the corn dataset. The outstanding performance of MobileNetV2 can be attributed to its efficient architecture, featuring an inverse residual structure and a linear bottleneck configuration. These design elements not only contribute to reduced computational requirements but also make MobileNetV2 particularly well-suited for resource-constrained environments, such as mobile devices. It is noteworthy that MobileNetV2's superior accuracy makes it an optimal choice as the base model for further exploration and application in corn-related image recognition tasks. In light of its high performance metrics and efficient design, MobileNetV2 has been selected as the foundational model for this study, reflecting its capability to handle the complexities of the Corn Dataset with precision and effectiveness.

Our primary goal is the discriminative classification of various corn diseases. To achieve this objective, we have strategically selected the optimal model, MobileNetV2, based on this initial screening. To further enhance the model's efficiency, we have incorporated additional layers preceding the classification layer. These layers include (i) an Average Pooling layer, (ii) a Flatten layer, (iii) a Dense layer, (iv) a Dropout layer, and (v) softmax. The addition of these layers yields several benefits. The Average Pooling layer, set to (7 × 7), serves to down-sample the spatial dimensions, reducing computational complexity while retaining essential features. The Flatten layer transforms the output from the preceding layers into a one-dimensional array, facilitating its input into the subsequent Dense layer. The introduction of a Dense layer with the activation function set to ReLU enhances the model's ability to capture complex patterns within the data. Moreover, the incorporation of a Dropout layer with a probability value of 0.5 helps prevent overfitting by randomly deactivating a proportion of neurons during training, promoting better generalization to unseen data. The subsequent addition of four nodes within the classification layer further refines the model's ability to distinguish between the different classes of corn diseases.

Implementing these modifications has resulted in an enhanced version of the MobileNetV2 architecture, featuring four distinct nodes in its final (classification) layer. This configuration proves to be an optimal and well-suited model for addressing the specified problem in this study, offering improved efficiency, robustness, and a heightened capacity for accurate classification of diverse corn diseases. The proposed model is presented in Figure 2.
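As an illustration of the architecture just described, a minimal Keras sketch is given below: an ImageNet-pretrained MobileNetV2 backbone followed by Average Pooling (7 × 7), Flatten, a Dense layer with ReLU, Dropout at 0.5, and a four-node softmax classifier. The width of the Dense layer (256) and the 224 × 224 input size are assumptions, as this excerpt does not state them.

import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the pretrained backbone frozen initially (transfer learning)

model = models.Sequential([
    base,                                        # outputs a 7 x 7 x 1280 feature map
    layers.AveragePooling2D(pool_size=(7, 7)),   # average pooling set to (7 x 7), as in the text
    layers.Flatten(),                            # one-dimensional feature vector
    layers.Dense(256, activation="relu"),        # width of 256 is an assumption
    layers.Dropout(0.5),                         # dropout probability of 0.5, as in the text
    layers.Dense(4, activation="softmax"),       # Broken, Discolored, Silk cut, Pure
])
model.summary()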
FIGURE 3
Augmented images.
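The augmentation pipeline itself is not detailed in this excerpt; the transforms below (random flips, small rotations, and zooms) are common choices shown only to illustrate how Figure 3-style augmented images can be generated on the fly in Keras.

import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),   # up to roughly +/- 36 degrees; value is an assumption
    layers.RandomZoom(0.1),       # value is an assumption
])

# Applied to training batches only, leaving the validation set untouched:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))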
FIGURE 4
Accuracy and loss of proposed model. (A) Training and (B) loss.
FIGURE 5
Confusion matrix of proposed model during testing.
iteration, with the model continuing to refine its understanding of the data, culminating in a robust accuracy of ∼96% by the 35th iteration. Remarkably, the accuracy remained stable from the 35th iteration onward, showcasing the model's adeptness in sustaining its high performance throughout the training process. The validation accuracy closely mirrored the training accuracy, commencing at 43% and steadily climbing to a parallel 96.27% by the 37th iteration. This parity between training and validation accuracy is a testament to the model's generalization capability and its effectiveness in accurately classifying unseen data. Crucially, the absence of oscillation in the training and validation curves signifies a lack of overfitting. This resilience is attributed to the thoughtful incorporation of various model tuning techniques, including data augmentation, transfer learning, and adaptive learning rate, ensuring the model's adaptability to diverse data patterns.

Figure 4B complements the accuracy visualization by presenting the loss dynamics of the proposed model. In the initial stages, both training and validation losses were relatively high. However, as the training progressed, a notable reduction in loss was observed, reaching a minimum at the 35th iteration. This substantial decrease in loss underscores the model's capacity to converge effectively, refining its predictive capabilities and minimizing errors. Importantly, the sustained low loss from the 35th iteration onward signifies the model's stability and robustness in capturing the underlying patterns within the data.
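Curves like those in Figure 4 can be reproduced from a Keras training run; the sketch below assumes a history object returned by model.fit(...) with a validation set and an accuracy metric.

import matplotlib.pyplot as plt

def plot_history(history):
    # history is the object returned by model.fit(...)
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))
    ax_acc.plot(history.history["accuracy"], label="training")
    ax_acc.plot(history.history["val_accuracy"], label="validation")
    ax_acc.set_title("(A) Accuracy")
    ax_acc.set_xlabel("Epoch")
    ax_acc.legend()
    ax_loss.plot(history.history["loss"], label="training")
    ax_loss.plot(history.history["val_loss"], label="validation")
    ax_loss.set_title("(B) Loss")
    ax_loss.set_xlabel("Epoch")
    ax_loss.legend()
    plt.tight_layout()
    plt.show()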
The use of an adaptive learning rate, with an initial rate set to INIT_LR = 0.001 and a decay calculated as decay = INIT_LR/EPOCHS, expedites training while dynamically adjusting the learning rate. This adaptability ensures efficient convergence and alleviates the challenge of manually selecting an optimal learning rate and schedule.

The employment of model checkpointing further enhances the model's robustness. This technique establishes checkpoints during training, monitoring positive changes in accuracy. By saving the model's weights when accuracy reaches an optimum level, the model checkpointing mechanism ensures that the trained model is preserved at its best-performing state, thereby preventing overfitting and enhancing generalization to unseen data. The dropout technique serves as a potent tool in addressing overfitting concerns. By randomly selecting and discarding neurons during training, the model avoids relying too heavily on specific neurons, promoting better generalization. This, in turn, ensures that the model remains resilient when faced with diverse datasets.

Additionally, the incorporation of transfer learning, especially the adopted hybrid approach, significantly contributes to the model's success. The utilization of models pre-trained on extensive datasets, followed by fine-tuning on the specific corn seed dataset, empowers the model with a wealth of knowledge obtained during training on related tasks. The hybrid transfer learning approach, involving training the newly added layers initially and subsequently fine-tuning the existing layers, facilitates the alignment of the model with the characteristics of the specified dataset. This strategic approach ensures that the model leverages its pre-existing knowledge while adapting to the intricacies of the corn seed dataset, ultimately enhancing its overall performance and accuracy.

Figure 5 provides a detailed view of the confusion matrix generated during the validation phase of the proposed model, offering insights into its performance across the four distinct classes of the corn disease dataset: Broken, Discolored, Silk cut, and Pure. The diagonal elements of the confusion matrix represent the true positive rates for each class, signifying the instances correctly classified by the model. Notably, the proposed model excels in accurately identifying instances of Broken, Discolored, Silk cut, and Pure, with high percentages of 96.3%, 95.9%, 96.7%, and 96.2%, respectively. These high true positive rates underscore the model's proficiency in recognizing and classifying instances of each specific corn disease category.

However, a closer examination reveals some instances of misclassification within the off-diagonal elements. For example, there is a small percentage (1.2%) of instances belonging to the Broken class that are misclassified as Discolored. Similarly, 2.9% of instances from the Silk cut class are misclassified as Broken. These misclassifications could be attributed to the inherent resemblance between certain symptoms of the Broken and Silk cut classes, creating challenges for the model in distinguishing between them accurately. Another noteworthy misclassification occurs between the Discolored and Pure classes, with 2.6% of instances from Discolored being incorrectly classified as Pure, and 2.2% of instances from Pure being misclassified as Discolored. This misclassification is plausible due to the visual similarities between the Discolored and Pure classes, potentially leading to confusion for the model. The misclassification between Broken and Silk cut, as well as Discolored and Pure, underscores the complexity of distinguishing between these classes, given the visual resemblances in certain instances. The model's reliance on visual features may lead to misinterpretations when faced with subtle differences or overlapping symptoms between these classes.

While the proposed model demonstrates high accuracy and true positive rates across the four classes, the confusion matrix sheds light on specific challenges in differentiating between classes with visual similarities. The misclassifications, particularly between Broken and Silk cut, and Discolored and Pure, highlight the intricacies of the task and suggest avenues for further refinement, such as incorporating additional features or leveraging more advanced techniques to enhance the model's discriminatory capabilities in visually challenging scenarios.

TABLE 2 F1-score and accuracy of the proposed model for each corn seed class.

            Broken    Discolored    Silk cut    Pure
F1-score    0.965     0.955         0.953       0.968
Accuracy    0.963     0.959         0.967       0.962

Table 2 presents a comprehensive overview of the evaluation metrics for the proposed model across the different classes of the corn disease dataset: Broken, Discolored, Silk cut, and Pure. These metrics, including Precision, Recall, F1-score, and Accuracy, provide a nuanced understanding of the model's performance in terms of both correctness and completeness in classification. Precision, as outlined in the table, measures the accuracy of positive predictions, indicating the proportion of instances correctly classified as belonging to a specific class. The proposed model exhibits high precision values across all classes, ranging from 0.949 for Silk cut to 0.975 for Pure. These high precision values underscore the model's effectiveness in minimizing false positives, demonstrating its capability to accurately identify instances belonging to each specific corn disease class. The Recall values, also known as Sensitivity or True Positive Rate, represent the proportion of actual positive instances correctly identified by the model. The proposed model achieves commendable Recall values, ranging from 0.957 for Silk cut to 0.963 for Broken. These values highlight the model's ability to capture a significant portion of the actual positive instances for each class, emphasizing its sensitivity to detecting instances of corn diseases.

F1-score, the harmonic mean of Precision and Recall, provides a balanced metric that considers both false positives and false negatives. The proposed model achieves high F1-scores across all classes, ranging from 0.953 for Silk cut to 0.968 for Pure. These scores indicate a strong balance between precision and recall, showcasing the model's overall effectiveness in achieving both accuracy and completeness in classification.
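Continuing the earlier code sketches, the tuning strategy described above (an adaptive learning rate with INIT_LR = 0.001 and decay = INIT_LR/EPOCHS, model checkpointing on accuracy, and the hybrid two-phase transfer learning) might be wired up roughly as follows; the optimizer choice (Adam), the epoch count, and the checkpoint path are assumptions not stated in this excerpt.

import tensorflow as tf

INIT_LR = 0.001
EPOCHS = 40  # assumption; the text reports stable accuracy from roughly the 35th iteration

# Per-step decay equivalent to the classic Keras rule decay = INIT_LR / EPOCHS.
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=INIT_LR, decay_steps=1, decay_rate=INIT_LR / EPOCHS
)

# Save weights only when the monitored accuracy improves.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_corn_model.weights.h5",   # hypothetical path
    monitor="val_accuracy",
    save_best_only=True,
    save_weights_only=True,
)

# Phase 1: train only the newly added layers (backbone frozen, as set earlier).
model.compile(optimizer=tf.keras.optimizers.Adam(lr_schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds,
                    epochs=EPOCHS, callbacks=[checkpoint])

# Phase 2: unfreeze the MobileNetV2 backbone and fine-tune the whole network
# (the same decaying schedule is reused here for simplicity).
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(lr_schedule),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds,
          epochs=EPOCHS, callbacks=[checkpoint])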
FIGURE 6
Feature maps visualization.
The Accuracy values, representing the overall correctness of the model across all classes, are consistently high, ranging from 0.959 for Discolored to 0.967 for Silk cut. This demonstrates the model's proficiency in making correct predictions for the entire dataset. The evaluation metrics in Table 2 collectively highlight the proposed model's robust performance in classifying instances across the various corn disease classes. The high precision, recall, and F1-score values underscore its effectiveness in achieving accurate and comprehensive classification. The consistent accuracy values across all classes further reinforce the model's overall reliability and suitability for the specified task.

In the deeper layers, a heightened focus emerges as the network zeroes in on edges and boundaries, delving into the structural intricacies of the corn. Finally, in the layers just before the last, the CNN, despite a slight blurring effect from activation functions, meticulously dissects small and fine details, emphasizing its adeptness at capturing the nuanced features that define the Silk cut image. This visual narrative provides a profound insight into the CNN's feature learning process, showcasing its evolving understanding of the corn image's complexities.
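For reference, the per-class precision, recall, F1-score, and accuracy values reported in Table 2 follow the standard definitions, where TP, FP, FN, and TN denote the true positives, false positives, false negatives, and true negatives for a given class c:

\[
\mathrm{Precision}_c = \frac{TP_c}{TP_c + FP_c}, \qquad
\mathrm{Recall}_c = \frac{TP_c}{TP_c + FN_c}, \qquad
F1_c = \frac{2 \cdot \mathrm{Precision}_c \cdot \mathrm{Recall}_c}{\mathrm{Precision}_c + \mathrm{Recall}_c}, \qquad
\mathrm{Accuracy}_c = \frac{TP_c + TN_c}{TP_c + TN_c + FP_c + FN_c}
\]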
Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, under the Project Grant 5,215.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Phupattanasilp P, Tong SR. Augmented reality in the integrative internet of things (AR-IoT): application for precision farming. Sustainability. (2019) 11:2658. doi: 10.3390/su11092658

2. Kong J, Wang H, Wang X, Jin X, Fang X, Lin S, et al. Multi-stream hybrid architecture based on cross-level fusion strategy for fine-grained crop species recognition in precision agriculture. Comput Electron Agric. (2021) 185:106134. doi: 10.1016/j.compag.2021.106134

3. Hamid Y, Wani S, Soomro AB, Alwan AA, Gulzar Y. Smart seed classification system based on MobileNetV2 architecture. In: Proceedings of the 2022 2nd International Conference on Computing and Information Technology (ICCIT). Tabuk: IEEE (2022). p. 217–22. doi: 10.1109/ICCIT52419.2022.9711662

4. World Corn Production 2022/2023. Available online at: http://www.worldagriculturalproduction.com/crops/corn.aspx (accessed July 3, 2023).

5. Watson A, Burgess LW, Summerell BA, O'Keeffe K. Fusarium species associated with cob rot of sweet corn and maize in New South Wales. Australas Plant Dis Notes. (2014) 9:1–4. doi: 10.1007/s13314-014-0142-1

6. Sastry KS. Seed-Borne Plant Virus Diseases. New York, NY: Springer (2013). doi: 10.1007/978-81-322-0813-6

7. Schmidt M, Horstmann S, De Colli L, Danaher M, Speer K, Zannini E, et al. Impact of fungal contamination of wheat on grain quality criteria. J Cereal Sci. (2016) 69:95–103. doi: 10.1016/j.jcs.2016.02.010

8. Franco-Duarte R, Cernáková L, Kadam S, Kaushik KS, Salehi B, Bevilacqua A, et al. Advances in chemical and biological methods to identify microorganisms—from past to present. Microorganisms. (2019) 7:95–103. doi: 10.3390/microorganisms7050130

9. Lu Z, Zhao M, Luo J, Wang G, Wang D. Design of a winter-jujube grading robot based on machine vision. Comput Electron Agric. (2021) 186:106170. doi: 10.1016/j.compag.2021.106170

10. Li J, Wu J, Lin J, Li C, Lu H, Lin C, et al. Nondestructive identification of litchi downy blight at different stages based on spectroscopy analysis. Agriculture. (2022) 12:402. doi: 10.3390/agriculture12030402

11. Dhiman P, Bonkra A, Kaur A, Gulzar Y, Hamid Y, Mir MS, et al. Healthcare trust evolution with explainable artificial intelligence: bibliometric analysis. Information. (2023) 14:541. doi: 10.3390/info14100541

12. Hamid Y, Elyassami S, Gulzar Y, Balasaraswathi VR, Habuza T, Wani S, et al. An improvised CNN model for fake image detection. Int J Inf Technol. (2022) 15:1–11. doi: 10.1007/s41870-022-01130-5

13. Alam S, Raja P, Gulzar Y. Investigation of machine learning methods for early prediction of neurodevelopmental disorders in children. Wirel Commun Mob Comput. (2022) 2022:5766386. doi: 10.1155/2022/5766386

14. Anand V, Gupta S, Gupta D, Gulzar Y, Xin Q, Juneja S, et al. Weighted average ensemble deep learning model for stratification of brain tumor in MRI images. Diagnostics. (2023) 13:1320. doi: 10.3390/diagnostics13071320

15. Khan SA, Gulzar Y, Turaev S, Peng YSA. Modified HSIFT descriptor for medical image classification of anatomy objects. Symmetry. (2021) 13:1987. doi: 10.3390/sym13111987

16. Gulzar Y, Khan SA. Skin lesion segmentation based on vision transformers and convolutional neural networks—a comparative study. Appl Sci. (2022) 12:5990. doi: 10.3390/app12125990

17. Mehmood A, Gulzar Y, Ilyas QM, Jabbari A, Ahmad M, Iqbal S, et al. SBXception: a shallower and broader xception architecture for efficient classification of skin lesions. Cancers. (2023) 15:3604. doi: 10.3390/cancers15143604

18. Archana K, Kaur A, Gulzar Y, Hamid Y, Mir MS, Soomro AB, et al. Deep learning models/techniques for COVID-19 detection: a survey. Front Appl Math Stat. (2023) 9:1303714. doi: 10.3389/fams.2023.1303714

19. Sahlan F, Hamidi F, Misrat MZ, Adli MH, Wani S, Gulzar Y, et al. Prediction of mental health among university students. Int J Percept Cogn Comput. (2021) 7:85–91. Available online at: https://journals.iium.edu.my/kict/index.php/IJPCC/article/view/225

20. Hanafi MFFM, Nasir MSFM, Wani S, Abdulghafor RAA, Gulzar Y, Hamid YA, et al. Real time deep learning based driver monitoring system. Int J Percept Cogn Comput. (2021) 7:79–84. Available online at: https://journals.iium.edu.my/kict/index.php/IJPCC/article/view/224

21. Gulzar Y, Alwan AA, Abdullah RM, Abualkishik AZ, Oumrani MOCA. Ordered clustering-based algorithm for E-commerce recommendation system. Sustainability. (2023) 15:2947. doi: 10.3390/su15042947

22. Dhiman P, Kaur A, Balasaraswathi VR, Gulzar Y, Alwan AA, Hamid Y, et al. Image acquisition, preprocessing and classification of citrus fruit diseases: a systematic literature review. Sustainability. (2023) 15:9643. doi: 10.3390/su15129643

23. Malik I, Ahmed M, Gulzar Y, Baba SH, Mir MS, Soomro AB, et al. Estimation of the extent of the vulnerability of agriculture to climate change using analytical and deep-learning methods: a case study in Jammu, Kashmir, and Ladakh. Sustainability. (2023) 15:11465. doi: 10.3390/su151411465

24. Ayoub S, Gulzar Y, Reegu FA, Turaev S. Generating image captions using Bahdanau attention mechanism and transfer learning. Symmetry. (2022) 14:2681. doi: 10.3390/sym14122681

25. Khan F, Ayoub S, Gulzar Y, Majid M, Reegu FA, Mir MS, et al. MRI-based effective ensemble frameworks for predicting human brain tumor. J Imaging. (2023) 9:163. doi: 10.3390/jimaging9080163

26. Tian J, Zhang Y, Wang Y, Wang C, Zhang S, Ren TA, et al. Method of corn disease identification based on convolutional neural network. In: Proceedings of the 2019 12th International Symposium on Computational Intelligence and Design, ISCID (2019), Vol. 1. Hangzhou: IEEE (2019). p. 245–8. doi: 10.1109/ISCID.2019.00063

27. Mishra S, Sachan R, Rajpal D. Deep convolutional neural network based detection system for real-time corn plant disease recognition. Procedia Comput Sci. (2020) 167:2003–10. doi: 10.1016/j.procs.2020.03.236

28. Gulzar Y, Hamid Y, Soomro AB, Alwan AA, Journaux LA. Convolution neural network-based seed classification system. Symmetry. (2020) 12:2018. doi: 10.3390/sym12122018

29. Yu H, Liu J, Chen C, Heidari AA, Zhang Q, Chen H, et al. Corn leaf diseases diagnosis based on K-means clustering and deep learning. IEEE Access. (2021) 9:143824–35. doi: 10.1109/ACCESS.2021.3120379

30. Ahmad A, Saraswat D, Gamal AE, Johal GS. Comparison of deep learning models for corn disease identification, tracking, and severity estimation using images acquired from UAV-mounted and handheld sensors. In: Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting, ASABE (2021), Vol. 3. (2021). p. 1572–82. doi: 10.13031/aim.202100566

31. Ahmad A, Aggarwal V, Saraswat D, El Gamal A, Johal GS. GeoDLS: a deep learning-based corn disease tracking and location system using RTK geolocated UAS imagery. Remote Sens. (2022) 14:4140. doi: 10.3390/rs14174140

32. Albarrak K, Gulzar Y, Hamid Y, Mehmood A, Soomro AB. A deep learning-based model for date fruit classification. Sustainability. (2022) 14:6339. doi: 10.3390/su14106339

33. Fraiwan M, Faouri E, Khasawneh N. Classification of corn diseases from leaf images using deep transfer learning. Plants. (2022) 11:2668. doi: 10.3390/plants11202668

34. Gulzar Y. Fruit image classification model based on MobileNetV2 with deep transfer learning technique. Sustainability. (2023) 15:1906. doi: 10.3390/su15031906

35. Mamat N, Othman MF, Abdulghafor R, Alwan AA, Gulzar Y. Enhancing image annotation technique of fruit classification using a deep learning approach. Sustainability. (2023) 15:901. doi: 10.3390/su15020901

36. Aggarwal S, Gupta S, Gupta D, Gulzar Y, Juneja S, Alwan AA, et al. An artificial intelligence-based stacked ensemble approach for prediction of protein subcellular localization in confocal microscopy images. Sustainability. (2023) 15:1695. doi: 10.3390/su15021695

37. Masood M, Nawaz M, Nazir T, Javed A, Alkanhel R, Elmannai H, et al. MaizeNet: a deep learning approach for effective recognition of maize plant leaf diseases. IEEE Access. (2023) 11:52862–76. doi: 10.1109/ACCESS.2023.3280260

38. Ahmad A, Gamal AE, Saraswat D. Toward generalization of deep learning-based plant disease identification under controlled and field conditions. IEEE Access. (2023) 11:9042–57. doi: 10.1109/ACCESS.2023.3240100

39. Hatem AS, Altememe MS, Fadhel MA. Identifying corn leaves diseases by extensive use of transfer learning: a comparative study. Indones J Electr Eng Comput Sci. (2023) 29:1030–8. doi: 10.11591/ijeecs.v29.i2.pp1030-1038

40. Divyanth LG, Ahmad A, Saraswat D. A two-stage deep-learning based segmentation model for crop disease quantification based on corn field imagery. Smart Agric Technol. (2023) 3:100108. doi: 10.1016/j.atech.2022.100108

41. Rajeena FPP, Ashwathy SU, Moustafa SU, Ali MAS. Detecting plant disease in corn leaf using EfficientNet architecture—an analytical approach. Electronics. (2023) 12:1938. doi: 10.3390/electronics12081938

42. Nagar S, Pani P, Nair R, Varma G. Automated seed quality testing system using GAN & active learning. arXiv [preprint]. (2021). doi: 10.48550/arXiv.2110.00777

43. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE (2018). p. 4510–20. doi: 10.1109/CVPR.2018.00474

44. Tan M, Le QV. EfficientNet: rethinking model scaling for convolutional neural networks. In: 36th International Conference on Machine Learning, ICML 2019. Long Beach, CA (2019). p. 10691–700.

45. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings. San Diego, CA (2015).

46. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV: IEEE (2016). p. 770–8. doi: 10.1109/CVPR.2016.90

47. Ayoub S, Gulzar Y, Rustamov J, Jabbari A, Reegu FA, Turaev S, et al. Adversarial approaches to tackle imbalanced data in machine learning. Sustainability. (2023) 15:7097. doi: 10.3390/su15097097

48. Gulzar Y, Ünal Z, Aktas HA, Mir MS. Harnessing the power of transfer learning in sunflower disease detection: a comparative study. Agriculture. (2023) 13:1479. doi: 10.3390/agriculture13081479

49. Khan F, Gulzar Y, Ayoub S, Majid M, Mir MS, Soomro AB, et al. Least square-support vector machine based brain tumor classification system with multi model texture features. Front Appl Math Stat. (2023) 9:1324054. doi: 10.3389/fams.2023.1324054

50. Majid M, Gulzar Y, Ayoub S, Khan F, Reegu FA, Mir MS, et al. Enhanced transfer learning strategies for effective kidney tumor classification with CT imaging. Int J Adv Comput Sci Appl. (2023) 14:2023. doi: 10.14569/IJACSA.2023.0140847

51. Javanmardi S, Miraei Ashtiani SH, Verbeek FJ, Martynenko A. Computer-vision classification of corn seed varieties using deep convolutional neural network. J Stored Prod Res. (2021) 92:101800. doi: 10.1016/j.jspr.2021.101800

52. Zhang J, Dai L, Cheng F. Corn seed variety classification based on hyperspectral reflectance imaging and deep convolutional neural network. J Food Meas Charact. (2021) 15:484–94. doi: 10.1007/s11694-020-00646-3

53. Xu P, Fu L, Xu K, Sun W, Tan Q, Zhang Y, et al. Investigation into maize seed disease identification based on deep learning and multi-source spectral information fusion techniques. J Food Compos Anal. (2023) 119:105254. doi: 10.1016/j.jfca.2023.105254

54. Sultan M, Shamshiri RR, Ahamed S, Farooq M, Lu L, Liu W, et al. Lightweight corn seed disease identification method based on improved ShuffleNetV2. Agriculture. (2022) 12:1929. doi: 10.3390/agriculture12111929

55. Koeshardianto M, Agustiono W, Setiawan W. Classification of corn seed quality using residual network with transfer learning weight. Elinvo. (2023) 8:137–45. doi: 10.21831/elinvo.v8i1.55763