
IAES International Journal of Robotics and Automation (IJRA)

Vol. 14, No. 2, June 2025, pp. 237~247


ISSN: 2722-2586, DOI: 10.11591/ijra.v14i2.pp237-247

A novel approach to enhance rice foliar disease detection: custom data generators, advanced augmentation, hybrid fine-tuning, and regularization techniques with DenseNet121

Govindarajan Subburaman, Mary Vennila Selvadurai


Post Graduate and Research Department of Computer Science, Presidency College, University of Madras, Chennai, India

Article Info

Article history:
Received month dd, yyyy
Revised month dd, yyyy
Accepted month dd, yyyy

Keywords:
Custom data generator
Data augmentation
DenseNet121
Dropout
L2 regularization
Rice disease

ABSTRACT

Rice leaf diseases impact crop yield, leading to food shortages and economic losses. Early, automated detection is essential but often hindered by accuracy challenges. This study contributes to improving model robustness against diverse and adversarial inputs by proposing a custom data generator that applies Albumentations-based advanced augmentations, such as Gaussian blur, noise addition, brightness/contrast adjustments, and coarse dropout, to enhance model generalization. Five deep learning architectures, namely a simple convolutional neural network (CNN), ResNet50, EfficientNetB0, Inception v3, and DenseNet121, were evaluated for classifying six categories: bacterial blight, brown spot, leaf blast, leaf scald, narrow brown spot, and healthy leaf. A hybrid model approach is proposed, fine-tuning the DenseNet121 model by unfreezing its last 20 layers, which balances transfer learning benefits with domain-specific feature extraction. Regularization techniques, including L2 regularization and a reduced dropout rate, are incorporated to control overfitting. Additionally, a custom learning rate scheduler is proposed to promote stable training. DenseNet121 achieved the highest performance, with an accuracy of 98.41%, demonstrating the effectiveness of these advanced augmentation and tuning strategies in rice leaf disease classification.
This is an open access article under the CC BY-SA license.

Corresponding Author:
Govindarajan Subburaman
Post Graduate and Research Department of Computer Science, Presidency College, University of Madras
Chennai, India
Email: [email protected]

1. INTRODUCTION
Rice is a vital cereal grain that has significantly enhanced global food security over the past fifty
years [1], [2]. The global agricultural sector is encountering major challenges as the demand for food
production rises to support an expanding population [3]. One of the primary factors contributing to a
significant decline in rice production is disease, which can cause a reduction in yield by 40%–50%, or even
result in complete crop failure in severe cases [4]. Timely detection and identification of disease types are
crucial for ensuring rice production. Diagnosis and prevention of common rice diseases typically rely on
characteristic symptoms, requiring field expertise and experience. For non-experts, such as farmers who may
not be familiar with the timing and symptoms of rice diseases, there is a high risk of misjudgment, low
efficiency, and heavy reliance on experts. This often leads to delays in accurate disease management,
resulting in reduced rice yields [5]. This research focused on utilizing data augmentation techniques to
improve the classification of rice diseases. Data augmentation involves generating additional training
samples by applying various transformations to existing data, which helps enhance model performance by

increasing data diversity and reducing overfitting. This approach aims to address the challenge of limited data
availability in rice disease classification, thereby improving the accuracy and robustness of the classification
models.
Data augmentation techniques allow for an increase in the size of a dataset without the need to
collect additional data. These techniques generate new data from the original training set by applying basic
manipulations and advanced image transformations [6]. The most commonly used image manipulation
techniques include flipping, cropping, rotation, color transformation, and noise injection. These methods can
be applied individually or in combination to generate augmented images through various image formatting
approaches [7]. However, the shortage of images of infected plant leaves has historically been a major
obstacle to effective plant disease detection. Figure 1 illustrates several common and widespread rice leaf diseases.

Figure 1. Common rice leaf diseases

The proposed advanced data augmentation method generates sufficient and high-quality rice leaf
disease images, enhancing the diversity and robustness of the training data. This augmentation technique,
applied dynamically via a custom data generator, significantly improves the performance of various deep
learning models.
The key contributions of this paper can be outlined as follows.
a. Custom data generator with advanced augmentation: Instead of employing standard augmentation methods, the proposed pipeline leverages Albumentations, a versatile and efficient augmentation library that enables complex transformations. The custom data generator applies these augmentations dynamically during training, enhancing the model’s robustness to diverse image conditions.
b. Hybrid model approach with fine-tuning: By unfreezing the last 20 layers of the DenseNet121 base model, the pretrained network is fine-tuned to the specific rice leaf disease dataset. This approach balances the advantages of transfer learning with the need for domain-specific feature extraction, allowing the model to adapt more effectively to the nuances of the dataset.
c. Regularization and dropout adjustments: The integration of L2 regularization on the dense layer, combined with a lower dropout rate in the classification head, provides a well-structured method to control overfitting. This adjustment is crucial given the complexity of DenseNet121 and the modest size of the dataset.
d. Custom learning rate scheduler: A learning rate reduction callback, designed to reduce the learning rate more gradually and with a higher patience threshold, ensures training stability. This approach is particularly beneficial for deep networks like DenseNet121, promoting better convergence and performance over time. A sketch illustrating contributions b, c, and d is given after this list.
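The following sketch shows one way contributions b, c, and d could be assembled with TensorFlow/Keras. The head width, dropout rate, weight-decay strength, and scheduler settings are illustrative assumptions, not the exact published configuration.

import tensorflow as tf
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 6                      # six rice leaf categories
IMG_SHAPE = (224, 224, 3)

# Load ImageNet-pretrained DenseNet121 without its classification head.
base = DenseNet121(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)

# Hybrid fine-tuning: keep the early layers frozen, unfreeze the last 20.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False

# Classification head with L2 regularization and a reduced dropout rate.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # assumed weight decay
    layers.Dropout(0.3),             # lower dropout than the common 0.5 default
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Gradual learning rate reduction with a higher patience threshold.
lr_callback = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6, verbose=1)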

2. RELATED WORKS
Data augmentation has been widely employed by many researchers in machine learning and deep
learning approaches to enhance model performance, improve generalization, and address challenges related


to limited or imbalanced datasets. Krishnamoorthy et al. [8] expanded the original training dataset using image augmentation techniques such as rotation, vertical and horizontal flipping, shearing, and random zooming. These real-time augmentations were implemented using the ImageDataGenerator class from the Keras deep learning library. Haque et al. [9] labeled and uploaded images to platforms like Roboflow for
augmentation to enhance the volume and diversity of the dataset. The primary objective of data augmentation
was to mitigate overfitting, particularly in small datasets, by generating new images through techniques such
as flipping, cropping, and altering color spaces. Qi et al. [10] focused on the automatic identification of
groundnut leaf diseases using a stack ensemble approach. The research aimed to classify four types of
groundnut leaf diseases by combining deep learning models with traditional machine learning techniques.
Deep neural networks, specifically ResNet50 and DenseNet121, demonstrated superior performance in
predicting the dataset. The highest accuracy achieved with data augmentation was 97.59%. Among the
models, ResNet50 showed the best identification performance when combined with the logistic regression
(LR) model. Salini et al. [11] aimed to reduce pesticide usage in agriculture while enhancing the quality and quantity of crop yields; image processing techniques are utilized for feature extraction, classification is performed using a support vector machine (SVM), and data augmentation is employed to further improve model performance. Hasan et al. [12] focused on improving the identification and classification of diseases, noting that enhanced methodologies can significantly reduce false classifications and optimize performance, and that expanding the dataset with additional images through augmentation and fine-tuning the machine learning model’s parameters can further boost classification accuracy. Liu et al. [13] showed that a dataset generated by Leaf GAN achieved a higher average recognition accuracy compared to traditional data augmentation techniques and other GAN-based methods; when tested on the Xception model, it resulted in an average test accuracy of 98.70%. An 8-fold rotation data augmentation strategy was applied in [14], rotating each training image in 45° increments from 0° to 360°. Zhang et al. [15] achieved a recognition accuracy improvement of 4.57%
with ResNet18 and 4.1% with VGG11 when compared to results without data augmentation. Additionally,
when compared to the conventional WGAN-GP data augmentation technique, accuracy increased by 3.08%
with ResNet18 and 3.55% with VGG11. Waheed et al. [16] implemented a DenseNet model that
demonstrated impressive performance, achieving 98.06% accuracy in identifying three types of corn leaf
diseases. The use of data augmentation techniques helped to increase the dataset size, thereby enhancing the
model's generalization capabilities. A diverse set of plant leaf disease datasets was created [17] using a
combination of basic image manipulation and advanced deep learning augmentation techniques, including
image flipping, cropping, rotation, color transformation, principal component analysis (PCA) color
augmentation, noise injection, GANs, and neural style transfer (NST). The effectiveness of these
augmentation methods was evaluated with leading transfer learning models such as VGG16, ResNet, and
Inception v3. A regularization technique was applied to classify rice diseases from leaf images with a limited
dataset, yielding better results than a standard convolutional neural network (CNN) model, as demonstrated
by an average accuracy rate of 85.878% [18]. Haruna et al. [19] trained StyleGAN2-ADA for 250 epochs,
utilizing the variance of the Laplacian filter to eliminate blurry or poorly generated images. These
synthesized images were then used to augment Faster R-CNN and single shot detector (SSD) models for detecting rice leaf diseases. Ritharson et al. [20] selected Xception, DenseNet121, Inception-ResNet-v2, Inception v3, ResNet50, and VGG16 for their experiments based on the strong performance of these pre-trained networks in various classification applications, training them with weight decay (L2 regularization). To enhance prediction accuracy and mitigate overfitting with limited training data,
Bi and Hu [21] proposed a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP)
that was combined with Label Smoothing Regularization (LSR). The studies in [21]–[27] used three different
architectures. DenseNet201 achieved an average accuracy of 89.86% on the non-normalized dataset, 88.33%
on the normalized augmented dataset, and 83.41% on the non-normalized augmented dataset. GoogleNet
recorded the lowest accuracy of 83.87% on the non-normalized dataset, while AlexNet had the lowest
accuracies of 82.38% and 79.72% on the normalized and non-normalized augmented datasets, respectively.

3. PROPOSED METHODOLOGY
In the existing methodology, as illustrated in Figure 2, the dataset is first collected and split into
training and test sets. Basic augmentation techniques, such as rotations, flips, and scaling, are applied to the
training images to increase data variability. The augmented data is then fed into a model, which is trained to
classify different rice leaf diseases. The model processes the images and categorizes them into predefined
disease classes based on the features extracted during training.
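For reference, this kind of baseline pipeline could be expressed with the Keras ImageDataGenerator, as in the minimal sketch below; the parameter values and the data/train directory are illustrative assumptions, not the configuration of any specific prior work.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Basic augmentation: rotations, flips, and scaling applied on the fly.
basic_augmenter = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=30,        # random rotations
    horizontal_flip=True,     # random horizontal flips
    vertical_flip=True,       # random vertical flips
    zoom_range=0.2,           # random scaling (zoom in/out)
)

# Stream augmented batches from a directory with one subfolder per class.
train_flow = basic_augmenter.flow_from_directory(
    "data/train", target_size=(224, 224),
    batch_size=32, class_mode="categorical")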
The proposed methodology begins with data collection and preprocessing, where images of rice
leaves from various disease categories are loaded and resized to ensure compatibility with the model's input.
As shown in Figure 3, the dataset is divided into training, validation, and test sets, with the labels one-hot
encoded for multi-class classification.
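A minimal sketch of this preparation step is given below, assuming the resized images and integer labels have already been loaded into NumPy arrays; the file names and split ratios are illustrative assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# Hypothetical pre-loaded arrays: images (N, 224, 224, 3), labels in {0, ..., 5}.
images = np.load("rice_images.npy")
labels = np.load("rice_labels.npy")

# Split into training, validation, and test sets (70/15/15 here).
x_train, x_tmp, y_train, y_tmp = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)

# One-hot encode the labels for multi-class classification.
y_train, y_val, y_test = (to_categorical(y, num_classes=6)
                          for y in (y_train, y_val, y_test))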

The rice leaf image dataset utilized in this research was obtained through the Kaggle API. In this
study, the dataset comprises six categories, as depicted in Figure 1. The technique of advanced image
augmentation was applied to increase the size of the original training dataset by creating variations of the
existing images. In this research, several advanced augmentation techniques were explored, as detailed below.

Figure 2. Existing systematic approach for identifying rice leaf diseases

Figure 3. Proposed systematic approach for identifying rice leaf diseases

a. Random rotate 90: this rotates the image by multiples of 90 degrees. The transformation matrix for a 90-
degree rotation counterclockwise is:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (1)


b. Horizontal flip: reflects the image across the vertical axis. The transformation matrix is:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (2)

c. Vertical flip: reflects the image across the horizontal axis. The transformation matrix is:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (3)

d. Transpose: transpose operation swaps the x and y coordinates of the image:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (4)

e. Gaussian blur: Gaussian blur applies a filter using a Gaussian function. This is represented as a
convolution operation:

I'(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} I(x - i, y - j) \, G(i, j)    (5)

where 𝐺(𝑖, 𝑗) is the Gaussian kernel defined as:

G(i, j) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{i^2 + j^2}{2\sigma^2}}    (6)

where 𝜎 controls the blur intensity.


f. Gauss noise: adds random noise to each pixel value. For a given pixel 𝐼(𝑥, 𝑦), the noise is added as:

I'(x, y) = I(x, y) + N(0, \sigma)    (7)

where 𝑁(0, 𝜎) is a Gaussian random variable with mean 0 and standard deviation 𝜎 controlling the noise
level.
g. Random brightness contrast: this operation adjusts both brightness and contrast. Brightness: for each pixel
𝐼(𝑥, 𝑦), brightness adjustment is:

I'(x, y) = I(x, y) + \Delta B    (8)

where \Delta B is the random brightness change factor. Contrast: for contrast adjustment:

I'(x, y) = (I(x, y) - 128) \cdot C + 128    (9)

where C is a contrast factor, and 128 is the midpoint pixel value (for 8-bit images). CoarseDropout: coarse dropout randomly fills some patches of the image with a fixed value (usually black). For a patch spanning (x_1, y_1) to (x_2, y_2), the operation in (10) is:

I(x, y) = \text{fill\_value} \quad \forall \, (x_1 \le x \le x_2, \; y_1 \le y \le y_2)    (10)

h. ShiftScaleRotate: this combines shifting, scaling, and rotating. Shift: for a shift by (t_x, t_y), the new pixel coordinates are:

x' = x + t_x, \quad y' = y + t_y    (11)

i. Scale: scaling by factors s_x, s_y changes the pixel positions as:

x' = s_x \cdot x, \quad y' = s_y \cdot y    (12)

j. Rotation: for rotation by \theta degrees, the new coordinates are given by:


\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (13)

k. Resize: resizing the image changes its resolution. For an image with original size (w_0, h_0), resizing it to a new size (w_n, h_n) can be represented as:

x' = x \cdot \frac{w_n}{w_0}, \quad y' = y \cdot \frac{h_n}{h_0}    (14)

This scales both x and y coordinates to the new target size (224 × 224).
Several key evaluation metrics were employed, each providing valuable insights into the performance of the models.
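Before turning to evaluation, the sketch below illustrates how the transformations listed above could be composed with the Albumentations library and applied on the fly through a custom Keras data generator; the transform probabilities, parameter ranges, and class name are illustrative assumptions rather than the exact published settings.

import albumentations as A
import numpy as np
from tensorflow.keras.utils import Sequence

# Advanced augmentation pipeline mirroring transformations a-k above.
augment = A.Compose([
    A.RandomRotate90(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Transpose(p=0.5),
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),
    A.GaussNoise(p=0.3),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
    A.CoarseDropout(p=0.3),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=30, p=0.5),
    A.Resize(224, 224),
])

class RiceLeafGenerator(Sequence):
    """Custom data generator that applies Albumentations transforms per batch."""

    def __init__(self, images, labels, batch_size=32, training=True):
        self.images, self.labels = images, labels
        self.batch_size, self.training = batch_size, training

    def __len__(self):
        return int(np.ceil(len(self.images) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        batch_x, batch_y = self.images[sl], self.labels[sl]
        if self.training:  # augment only the training batches
            batch_x = np.stack([augment(image=img)["image"] for img in batch_x])
        return batch_x.astype("float32") / 255.0, batch_y

A training instance of such a generator can be passed directly to model.fit, while a non-augmenting instance (training=False) serves the validation and test sets.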

4. RESULTS AND DISCUSSION


In this study, various models were evaluated for the task of rice leaf disease classification using
training and test sets that contain 1,470 and 630 instances, respectively. The models compared include a
simple CNN, Inception v3, ResNet50, EfficientNetB0, and DenseNet121. Each model was assessed based on
key metrics such as accuracy, loss, precision, recall, F1 score, AUC-ROC, Matthews correlation coefficient
(MCC), and Hamming loss. The models were also analyzed class-wise to understand their performance on individual disease classes, as shown in Table 1.

Table 1. Overall performance comparison


Model Accuracy Loss Precision Recall F1 Score AUC-ROC MCC Hamming loss
Simple CNN 0.924 0.678 0.924 0.924 0.923 0.9942 0.909 0.0762
EfficientNetB0 0.971 0.504 0.972 0.971 0.971 0.9987 0.966 0.0286
ResNet50 0.976 0.504 0.977 0.976 0.976 0.9994 0.972 0.0238
Inception v3 0.978 0.500 0.978 0.978 0.978 0.9992 0.973 0.0222
Proposed DenseNet121 0.984 0.278 0.984 0.984 0.984 0.9996 0.981 0.0159
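For reference, the metrics reported in Table 1 could be computed with scikit-learn as in the sketch below; the randomly generated predictions are placeholders standing in for actual model outputs.

import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, matthews_corrcoef,
                             hamming_loss)

# Placeholder ground truth and softmax outputs for six classes (N = 630).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=630)            # integer class labels
y_prob = rng.dirichlet(np.ones(6), size=630)     # rows sum to 1
y_pred = np.argmax(y_prob, axis=1)               # predicted class labels

metrics = {
    "accuracy":     accuracy_score(y_true, y_pred),
    "precision":    precision_score(y_true, y_pred, average="weighted"),
    "recall":       recall_score(y_true, y_pred, average="weighted"),
    "f1_score":     f1_score(y_true, y_pred, average="weighted"),
    "auc_roc":      roc_auc_score(y_true, y_prob, multi_class="ovr"),
    "mcc":          matthews_corrcoef(y_true, y_pred),
    "hamming_loss": hamming_loss(y_true, y_pred),
}
print(metrics)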

Figure 4 presents a visual comparison of the training and validation accuracy across the models.
Figure 4(a) through Figure 4(e) respectively depict the performance of the simple CNN, EfficientNetB0,
ResNet50, Inception v3, and the proposed DenseNet121. The figure clearly illustrates the significant
improvements in accuracy and convergence behavior achieved by DenseNet121 compared to other models.
Figure 5 compares the training and validation loss of the models. Figure 5(a) shows that the simple CNN converges slowly and has the highest validation loss. Figure 5(b) shows that EfficientNetB0 reduces loss
moderately but requires more epochs to stabilize. Figure 5(c) shows that ResNet50 achieves steady training
loss reduction but exhibits fluctuating validation loss, indicating overfitting. Figure 5(d) shows that
InceptionV3 demonstrates robust loss reduction with stable validation loss, showcasing a balanced
architecture. Finally, Figure 5(e) shows that DenseNet121 achieves the lowest final validation loss with
consistent improvement, highlighting superior generalization.
Figure 6 highlights the performance metrics across models, focusing on test accuracy and test loss.
Figure 6(a) compares the test accuracy, where DenseNet121 achieves the highest at 98.41%, followed by Inception v3 (97.78%), ResNet50 (97.61%), and EfficientNetB0 (97.14%). Figure 6(b) compares test loss,
with DenseNet121 demonstrating the lowest at 0.2777, indicating excellent generalization, while ResNet50,
Inception v3, and EfficientNetB0 show higher losses of 0.5036, 0.4996, and 0.5042, respectively.
The confusion matrix comparison highlights the performance of different models in identifying and
diagnosing various rice disease classes. The comparison of CNN models highlights their classification
accuracy across six classes (CL1 to CL6). Simple CNN shows moderate performance with notable
misclassifications, particularly in CL2 and CL6. EfficientNetB0 improves with higher accuracies, especially
for CL2 and CL6. ResNet50 demonstrates exceptional accuracy, achieving over 95% for all classes, with
minimal misclassification. Inception v3 delivers comparable results to ResNet50, with slightly more
misclassifications in CL4 and CL5. DenseNet121 also excels, matching ResNet50 and Inception v3 in overall
accuracy, with slight misclassification in CL4 and CL5. ResNet50 and DenseNet121 emerge as the most
reliable models for accurate classification across all classes. Inception v3 also achieves high accuracy, with
minimal misclassifications, primarily in CL2 and CL4. The proposed DenseNet121 model demonstrates
superior performance, accurately classifying all instances of CL1, CL3, and CL6 with minimal errors in the
remaining classes. Overall, DenseNet121 achieves the best results, highlighting its robustness and precision
in classifying rice leaf diseases compared to the other models.



Figure 4. Comparison of training and validation accuracy of (a) simple CNN, (b) EfficientNetB0, (c) ResNet50, (d) Inception v3, and (e) proposed DenseNet121



Figure 5. Comparison of training and validation loss of (a) simple CNN, (b) EfficientNetB0, (c) ResNet50, (d) Inception v3, and (e) proposed DenseNet121



Figure 6. Comparison of (a) test accuracy and (b) test loss with different models

5. CONCLUSION
The proposed method demonstrates the effectiveness of using advanced deep learning architectures
combined with sophisticated image augmentation techniques for rice leaf disease classification. By
evaluating five models, namely simple CNN, EfficientNetB0, ResNet50, Inception v3, and DenseNet121, the study
confirms that DenseNet121 achieved the highest accuracy of 98.41%, with superior precision, recall, and F1
scores across six disease classes. The use of a custom data generator with advanced augmentations, including
Gaussian blur, noise addition, and brightness/contrast adjustments, proved crucial in enhancing model
robustness and generalization. Furthermore, the integration of L2 regularization, dropout strategies, and a
custom learning rate scheduler contributed to reducing overfitting and improving convergence. The results
underline the importance of advanced data preprocessing and regularization in achieving high-performance
disease classification models, with DenseNet121 emerging as the most reliable architecture for this task.
These findings offer significant contributions to automated agricultural disease detection, paving the way for
more effective and scalable solutions in precision farming. Future work will involve exploring cutting-edge
deep learning architectures, such as vision transformers (ViT) or hybrid models combining CNNs and
attention mechanisms, which could lead to even higher accuracy and robustness. The integration of
explainable artificial intelligence (XAI) techniques would also enhance the interpretability of these models,
allowing farmers and agricultural experts to understand the reasoning behind the classification decisions.

FUNDING INFORMATION
This research received no specific grant from any funding agency in the public, commercial, or not-
for-profit sectors.


AUTHOR CONTRIBUTIONS STATEMENT


This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author
contributions, reduce authorship disputes, and facilitate collaboration.

Name of Author C M So Va Fo I R D O E Vi Su P Fu
Govindarajan S ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Mary Vennila S ✓ ✓ ✓ ✓ ✓ ✓ ✓

C : Conceptualization I : Investigation Vi : Visualization


M : Methodology R : Resources Su : Supervision
So : Software D : Data Curation P : Project administration
Va : Validation O : Writing - Original Draft Fu : Funding acquisition
Fo : Formal analysis E : Writing - Review & Editing

CONFLICT OF INTEREST STATEMENT


The authors declare that they have no conflicts of interest regarding the publication of this paper.

DATA AVAILABILITY
The datasets used and analyzed during the current study are available from the corresponding author
upon reasonable request.

REFERENCES
[1] M. Behnassi, M. B. Baig, M. T. Sraïri, A. A. Alsheikh, and A. W. A. A. Risheh, “Food security and climate-smart food systems—an introduction,” in Food Security and Climate-Smart Food Systems, Cham: Springer International Publishing, 2022,
pp. 1–13. doi: 10.1007/978-3-030-92738-7_1.
[2] P. Falsafi, M. B. Baig, M. R. Reed, and M. Behnassi, “The nexus of climate change, food security, and agricultural extension in
Islamic Republic of Iran,” in Food Security and Climate-Smart Food Systems, Cham: Springer International Publishing, 2022, pp.
241–261. doi: 10.1007/978-3-030-92738-7_12.
[3] N. Zhang, M. Wang, and N. Wang, “Precision agriculture—a worldwide overview,” Computers and Electronics in Agriculture,
vol. 36, no. 2–3, pp. 113–132, Nov. 2002, doi: 10.1016/S0168-1699(02)00096-0.
[4] R. N. Strange and P. R. Scott, “Plant disease: A threat to global food security,” Annual Review of Phytopathology, vol. 43, no. 1,
pp. 83–116, Sep. 2005, doi: 10.1146/annurev.phyto.43.113004.133839.
[5] Z.-X. Guan, J. Tang, B.-J. Yang, Y.-F. Zhou, D. Fan, and Q. Yao, “Study on recognition method of rice disease based on image,”
Chinese Journal of Rice Science, vol. 24, no. 5, 2010.
[6] M. Cordts et al., “The cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, Jun. 2016, vol. 2016-Decem, pp. 3213–3223. doi:
10.1109/CVPR.2016.350.
[7] S. R. Richter, V. Vineet, S. Roth, and V. Koltun, “Playing for data: Ground truth from computer games,” Prepr.
arXiv.1608.02192v1, Aug. 2016.
[8] K. N, L. V Narasimha Prasad, C. S. Pavan Kumar, B. Subedi, H. B. Abraha, and S. V E, “Rice leaf diseases prediction using deep
neural networks with transfer learning,” Environmental Research, vol. 198, Jul. 2021, doi: 10.1016/j.envres.2021.111275.
[9] M. E. Haque, A. Rahman, I. Junaeid, S. U. Hoque, and M. Paul, “Rice leaf disease classification and detection using YOLOv5,”
Prepr. arXiv.2209.01579, Sep. 2022.
[10] H. Qi, Y. Liang, Q. Ding, and J. Zou, “Automatic identification of peanut-Leaf diseases based on stack ensemble,” Applied
Sciences, vol. 11, no. 4, Feb. 2021, doi: 10.3390/app11041950.
[11] R. Salini, A. Farzana, and B. Yamini, “Pesticide suggestion and crop disease classification using machine learning,” IRJET on
Computer Science Journal, vol. 11, no. 4, pp. 27997–27999, 2021.
[12] M. M. Hasan, A. F. M. S. Uddin, M. R. Akhond, M. J. Uddin, M. A. Hossain, and M. A. Hossain, “Machine learning and image
processing techniques for rice disease detection: A critical analysis,” International Journal of Plant Biology, vol. 14, no. 4, pp.
1190–1207, Dec. 2023, doi: 10.3390/ijpb14040087.
[13] B. Liu, C. Tan, S. Li, J. He, and H. Wang, “A data augmentation method based on generative adversarial networks for grape leaf
disease identification,” IEEE Access, vol. 8, pp. 102188–102198, 2020, doi: 10.1109/ACCESS.2020.2998839.
[14] Y. Zhu et al., “TA-CNN: Two-way attention models in deep convolutional neural network for plant recognition,”
Neurocomputing, vol. 365, pp. 191–200, Nov. 2019, doi: 10.1016/j.neucom.2019.07.016.
[15] Z. Zhang, Q. Gao, L. Liu, and Y. He, “A high-quality rice leaf disease image data augmentation method based on a dual GAN,”
IEEE Access, vol. 11, pp. 21176–21191, Nov. 2023, doi: 10.1109/ACCESS.2023.3251098.
[16] A. Waheed, M. Goyal, D. Gupta, A. Khanna, A. E. Hassanien, and H. M. Pandey, “An optimized dense convolutional neural
network model for disease recognition and classification in corn leaf,” Computers and Electronics in Agriculture, vol. 175, Aug.
2020, doi: 10.1016/j.compag.2020.105456.
[17] J. Arun Pandian, G. Geetharamani, and B. Annette, “Data augmentation on plant Leaf disease image dataset using image
manipulation and deep learning techniques,” in 2019 IEEE 9th International Conference on Advanced Computing (IACC), Dec.
2019, pp. 199–204. doi: 10.1109/IACC48062.2019.8971580.
[18] S. Mujahidin, N. F. Azhar, and B. Prihasto, “Analysis of using regularization technique in the convolutional neural network
architecture to detect paddy disease for small dataset,” Journal of Physics: Conference Series, vol. 1726, no. 1, Jan. 2021, doi:


10.1088/1742-6596/1726/1/012010.
[19] Y. Haruna, S. Qin, and M. J. Mbyamm Kiki, “An improved approach to detection of rice Leaf disease with GAN-based data
augmentation pipeline,” Applied Sciences, vol. 13, no. 3, Jan. 2023, doi: 10.3390/app13031346.
[20] P. I. Ritharson, K. Raimond, X. A. Mary, J. E. Robert, and A. J, “DeepRice: A deep learning and deep feature based classification
of rice leaf disease subtypes,” Artificial Intelligence in Agriculture, vol. 11, pp. 34–49, Mar. 2024, doi:
10.1016/j.aiia.2023.11.001.
[21] L. Bi and G. Hu, “Improving image-based plant disease classification with generative adversarial network under limited training
set,” Frontiers in Plant Science, vol. 11, Dec. 2020, doi: 10.3389/fpls.2020.583438.
[22] Q. H. Cap, H. Uga, S. Kagiwada, and H. Iyatomi, “LeafGAN: An effective data augmentation method for practical plant disease
diagnosis,” IEEE Transactions on Automation Science and Engineering, vol. 19, no. 2, pp. 1258–1267, Apr. 2022, doi:
10.1109/TASE.2020.3041499.
[23] G. Latif, S. E. Abdelhamid, R. E. Mallouhy, J. Alghazo, and Z. A. Kazimi, “Deep learning utilization in agriculture: Detection of
rice plant diseases using an improved CNN model,” Plants, vol. 11, no. 17, Aug. 2022, doi: 10.3390/plants11172230.
[24] A. K. Abasi, S. N. Makhadmeh, O. A. Alomari, M. Tubishat, and H. J. Mohammed, “Enhancing rice leaf disease classification: A
customized convolutional neural network approach,” Sustainability, vol. 15, no. 20, Oct. 2023, doi: 10.3390/su152015039.
[25] M. Xu, S. Yoon, A. Fuentes, J. Yang, and D. S. Park, “Style-consistent image translation: A novel data augmentation paradigm to
improve plant disease recognition,” Frontiers in Plant Science, vol. 12, Feb. 2022, doi: 10.3389/fpls.2021.773142.
[26] N. Krishnamoorthy and V. R. L. Parameswari, “Rice leaf disease detection via deep neural networks with transfer learning for
early identification,” Turkish Journal of Physiotherapy and Rehabilitation, vol. 32, no. 2, pp. 1087–1097, 2021.
[27] J. G. Arnal Barbedo, “Plant disease identification from individual lesions and spots using deep learning,” Biosystems Engineering,
vol. 180, pp. 96–107, Apr. 2019, doi: 10.1016/j.biosystemseng.2019.02.002.

BIOGRAPHIES OF AUTHORS

Govindarajan Subburaman completed B.Sc. in computer science (2009) from Bharathidasan University, Tiruchirappalli, India, and M.Sc. in computer science (2012) from Bharathidasan University, Tiruchirappalli, India. He completed his M.Phil. (computer science) in the year 2014 from Bharathidasan University, Tiruchirappalli, India, and is currently pursuing a Ph.D. in computer science at Presidency College, Chennai. His areas of interest include data mining, cloud computing, machine learning, and deep learning. He can be contacted at [email protected].

Mary Vennila Selvadurai completed B.Sc. in physics (1986) and M.Sc. in computer science (1990) from Bharathidasan University, Tiruchirappalli, India. She obtained her M.Phil. (computer science) in the year 2002 and Ph.D. in computer science (2009) from Mother Teresa Women’s University, Kodaikanal. She is currently working as an associate professor and head of the PG and research department of computer science, Presidency College, Chennai, India. Her areas of interest include networks, grid computing, cloud security, and data mining. She can be contacted at [email protected].
