
Enhanced Coffee Bean Defect Detection Using Super-Resolution GANs and Deep Learning

1S. Prabu, 2P. S. Thanigaivelu, 3S. Poonkodi, 4M. Anand
Department of Computing Technologies, School of Computing
SRM Institute of Science and Technology
Kattankulathur, Chennai, Tamil Nadu, India – 603203
[email protected], [email protected], [email protected], [email protected]

Abstract

Image Super-Resolution (SR) is a key challenge in computer vision, focusing on enhancing low-resolution images while preserving details. This paper presents a novel method using Generative Adversarial Networks (GANs) for improving image quality, specifically targeting coffee bean defect detection. The proposed SR-GAN model includes a generator that upscales low-resolution inputs and a discriminator that differentiates between generated high-resolution images and real ones. Trained on a coffee bean dataset, the model optimizes loss functions to balance perceptual quality and fidelity, capturing subtle defects such as cracks and discolorations. Enhanced images are processed by two Convolutional Neural Networks (CNNs): the first identifies broad defect categories, such as defective black and brown beans, while the second distinguishes broken beans from non-defective high-quality beans. This layered approach ensures more precise defect identification and classification, overcoming limitations of previous methods that struggled with subtle defects. Performance evaluations show that the combination of SR-GAN and CNN significantly outperforms existing models in resolution enhancement and classification accuracy, offering a robust solution for improving quality control in coffee production.

Keywords: Image Super-Resolution, Deep Learning, GANs, Image Processing, Computer Vision.

1. INTRODUCTION
Image super-resolution (SR) has become a key focus in computer vision, aiming to reconstruct high-resolution images from low-resolution inputs, which is vital for applications like medical imaging, satellite imaging, and video surveillance [1][2][3]. Traditional interpolation techniques often fail to recover fine details, resulting in blurry outputs [4][5]. The introduction of deep learning, particularly convolutional neural networks (CNNs), has significantly improved SR performance by learning complex mappings [6][7]. However, CNN-based methods often produce over-smoothed results, lacking high-frequency details [8]. To address these issues, Generative Adversarial Networks (GANs) have been integrated into SR frameworks, offering sharper and more realistic images by effectively capturing fine textures and details [9][10].
Despite these advancements, challenges persist in achieving high-resolution images with realistic textures and minimal artifacts [11]. Current models often optimize pixel-wise loss functions, which compromise perceptual quality, and they struggle to generalize across diverse image conditions. GAN-based SR models, while promising, face limitations such as training instability, difficulty in capturing fine details, and artifact generation [12][13]. This research aims to overcome these challenges by developing a robust GAN-based SR model that balances fidelity and perceptual quality, enhancing image resolution and providing a reliable solution for industries requiring high-quality image restoration.

1.1 Contributions
The following are the contributions of the paper:
• Implements an SR-GAN model for image super-resolution, specifically designed to enhance coffee bean images for defect detection.
• Combines SR-GAN with Convolutional Neural Networks (CNNs) to accurately identify and classify defect types, such as black, brown, broken, and high-quality beans.
• Demonstrates superior performance in resolution enhancement and classification accuracy compared to existing models.
• Provides a scalable solution for improving quality control in coffee production, with potential applications in other image-based quality analysis fields.

The paper is organized as follows: Section 2 provides a detailed overview of related work, discussing previous methods in image enhancement. Section 3 presents the proposed SR-GAN model, explaining the architecture, training process, and dataset used for model development, and outlines the integration of the SR-GAN with Convolutional Neural Networks (CNNs) for defect classification covering black, brown, broken, and high-quality beans. Section 4 discusses the results, comparing the performance of the SR-GAN and CNN combination to existing models. Finally, Section 5 concludes the paper, summarizing key contributions and suggesting directions for future research.

979-8-3315-3038-9/25/$31.00 ©2025 IEEE


2. RELATED WORKS
Lin et al. [14] proposed an unsupervised image super-resolution method using a Generative Adversarial Network (GAN), achieving results comparable to state-of-the-art supervised techniques. Their model includes a generator for recovering high-resolution images, a discriminator for distinguishing real from generated images, and an optimization combining data error, regularization, and adversarial loss, all without requiring labeled training data. While effective, the method has limitations in handling diverse image types and complex textures, and it struggles to capture the finer details that supervised approaches achieve.
Le-Tien et al. [15] reviewed the use of GANs for image super-resolution, focusing on ESRGAN and RRDN, which demonstrated precise results across datasets. Despite their effectiveness, the study noted challenges such as high computational cost and training complexity. Chauhan et al. [16] reviewed deep learning-based super-resolution techniques, highlighting challenges like handling real-world noise, performance drops at higher upscaling factors, and the inefficiency of complex GAN models compared to simpler ones at smaller scales. Parekh et al. [17] proposed a GAN-based approach for image super-resolution, offering improved accuracy and speed, with applications in medical imaging, surveillance, and photo recovery. However, challenges remain in fully reconstructing details from low-resolution inputs while preserving main features. Chudasama et al. [18] introduced ISRGAN, a GAN-based method combining ISRNet for high PSNR/SSIM and ISRGAN for enhanced perceptual quality. It produces natural-looking super-resolution images, outperforming state-of-the-art techniques, though further improvements in perceptual quality are possible.

3. METHODOLOGY
This study introduces SR-GAN, a deep learning-based image super-resolution model using Generative Adversarial Networks (GANs) to enhance low-resolution images. The model comprises a generator and a discriminator, with residual blocks enabling the learning of complex mappings for significant upscaling while preserving fine details. The generator focuses on creating high-resolution images, while the discriminator improves perceptual quality by distinguishing real from generated images and providing feedback through adversarial training. Trained on a curated coffee bean dataset, SR-GAN optimizes loss functions to balance perceptual quality and fidelity. Evaluation using metrics such as PSNR and SSIM, along with qualitative assessments, showed that SR-GAN outperforms state-of-the-art methods in recovering textures and producing visually appealing high-resolution images across diverse datasets.
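The paper does not spell out the exact objective or evaluation code, but the loss balancing and PSNR/SSIM evaluation described above could look roughly like the sketch below; the adversarial weight, the plain MSE content term, the 8-bit data range, and the choice of PyTorch and scikit-image are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(sr, hr, fake_logits, adv_weight=1e-3):
    """Pixel-wise content (fidelity) loss plus adversarial loss; the weighting is an assumption."""
    content = mse(sr, hr)
    adversarial = bce(fake_logits, torch.ones_like(fake_logits))  # push generated images toward "real"
    return content + adv_weight * adversarial

def discriminator_loss(real_logits, fake_logits):
    """Standard GAN discriminator objective: real images -> 1, generated images -> 0."""
    real = bce(real_logits, torch.ones_like(real_logits))
    fake = bce(fake_logits, torch.zeros_like(fake_logits))
    return real + fake

def evaluate_pair(hr_img, sr_img):
    """PSNR/SSIM between a ground-truth and a super-resolved image (H x W x 3 uint8 arrays)."""
    psnr = peak_signal_noise_ratio(hr_img, sr_img, data_range=255)
    ssim = structural_similarity(hr_img, sr_img, channel_axis=-1, data_range=255)
    return psnr, ssim
```

A perceptual (VGG-feature) term, as in the original SRGAN, could replace the plain MSE term; the paper does not state which variant was used.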
Figure 1. An overview of the methodology used in this study

The methodology illustrated in Figure 1 provides a structured approach for classifying the quality of coffee beans using deep learning techniques. It begins with the input dataset, which consists of coffee bean images. These images serve as the foundation of the process, capturing the key visual data required for analysis. The goal of the entire workflow is to identify distinct features of the beans, such as color, texture, and quality, using advanced image processing and machine learning methods to classify them into specific quality categories.

3.1 Image processing
Defects in coffee beans are detected through a multi-step process that involves image segmentation, feature extraction, image enhancement, and machine learning classification. The process begins with image segmentation using the Canny method, an edge detection technique that isolates individual coffee beans in the dataset by identifying the boundaries of each bean based on changes in intensity. This segmentation ensures that each bean is separated from the background and from other beans, allowing for focused analysis of individual beans. By isolating the beans, the system can focus on their physical characteristics, such as shape, color, and size, which are essential for identifying defects.

3.2 Feature Extraction
Following segmentation, the system performs feature extraction, a crucial step in which key characteristics of the beans, such as color, texture, size, and shape, are extracted. These features play a significant role in detecting defects, as defective beans, such as blackened or broken ones, have distinct physical markers. For example, black beans may appear darker due to over-roasting, while broken beans might have irregular shapes or cracks. By converting the image data into numerical features, the system can begin to differentiate between beans based on these characteristics, laying the foundation for the later classification steps.
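A minimal sketch of the Canny-based segmentation (Section 3.1) and simple feature extraction (Section 3.2) is given below; the threshold values, the minimum contour area, and the particular colour/shape descriptors are illustrative assumptions rather than the paper's actual settings.

```python
import cv2
import numpy as np

def segment_beans(image_bgr):
    """Isolate individual beans using Canny edges and external contours (a sketch)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)                     # thresholds are assumptions
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        if cv2.contourArea(c) < 200:                        # skip small noise regions
            continue
        x, y, w, h = cv2.boundingRect(c)
        crops.append(image_bgr[y:y + h, x:x + w])           # one crop per detected bean
    return crops

def extract_features(bean_bgr):
    """Simple colour/shape descriptors of the kind described in Section 3.2."""
    hsv = cv2.cvtColor(bean_bgr, cv2.COLOR_BGR2HSV)
    mean_hue, mean_sat, mean_val = hsv.reshape(-1, 3).mean(axis=0)
    h, w = bean_bgr.shape[:2]
    aspect_ratio = w / h                                    # broken beans tend to be irregular
    return np.array([mean_hue, mean_sat, mean_val, aspect_ratio])
```

In the full pipeline, these per-bean crops are then passed to the SR-GAN and the CNN classifiers described in Section 3.3.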



Figure 2. Coffee bean defect types: (a) black, (b) brown, (c) broken, (d) non-defective high-quality beans [19]

Figure 2 shows the four distinct types of coffee beans, categorized by their defects. In panel (a), the beans are dark and over-roasted, identified as defective black beans. Panel (b) features brown beans, which may have some surface irregularities but are less damaged than black beans. Panel (c) shows broken beans, which display physical damage such as cracks or fragments. Finally, panel (d) illustrates non-defective, high-quality beans, characterized by their even coloration and intact surface [19]. This classification highlights the various defect types found in coffee beans, relevant for quality control in coffee production.

3.3 Defect Detection in Coffee Beans Using SR-GAN and CNN Classification
The proposed system leverages a Super-Resolution Generative Adversarial Network (SR-GAN) to improve the resolution and clarity of low-resolution images, allowing for detailed analysis of subtle defects such as cracks, surface irregularities, and discolorations. By using SR-GAN, even the smallest defects are captured, significantly improving overall detection accuracy. The enhanced image data is then processed by two Convolutional Neural Networks (CNNs). The first CNN identifies broad defect categories, such as black and brown beans, where black beans are typically considered defective due to over-roasting or other damage. The second CNN further distinguishes broken beans, identifying physical damage such as cracks or fragments and separating them from high-quality beans. SR-GAN improves the system's ability to capture complex textures and produce realistic, high-resolution images. This layered approach ensures more accurate defect identification and classification, addressing limitations of previous methods that struggled with subtle defects in low-resolution images. The system's performance is evaluated through various metrics, and the results demonstrate that SR-GAN significantly outperforms existing models in terms of resolution enhancement and classification accuracy. In summary, this integrated approach combining SR-GAN and CNN classification offers a robust solution for accurate coffee bean defect detection, enabling better quality control in coffee production processes. Figure 3 shows the generator and discriminator architecture of SRGAN, inspired by [20].
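Before turning to the SRGAN architecture of Figure 3, the two-stage CNN classification just described can be sketched as follows; the network depth, the exact class groupings, and all names are illustrative assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

def small_cnn(num_classes):
    """A compact CNN classifier; the real models' depth and widths are not reported in the paper."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(128, num_classes),
    )

# Stage 1: broad categories (assumed here to be black / brown / other);
# Stage 2: broken vs. high-quality for beans not flagged in stage 1.
cnn_stage1 = small_cnn(num_classes=3)
cnn_stage2 = small_cnn(num_classes=2)

@torch.no_grad()
def classify(sr_image):
    """Two-stage classification of one SR-GAN-enhanced bean image, shape (1, 3, H, W)."""
    broad = cnn_stage1(sr_image).argmax(dim=1).item()
    if broad == 2:                                  # not black/brown: decide broken vs. high-quality
        fine = cnn_stage2(sr_image).argmax(dim=1).item()
        return "broken" if fine == 0 else "high-quality"
    return ["black", "brown"][broad]
```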
Figure 3. The architecture of SRGAN, inspired by [20]
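A compact PyTorch sketch of a Figure-3-style generator (residual blocks plus PixelShuffle upsampling) and discriminator follows; the number of residual blocks, the channel widths, and the 4x scale factor are assumptions, as the paper does not report these hyperparameters.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-PReLU-Conv-BN block with an identity skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Residual-block generator with two PixelShuffle stages (4x upscaling), SRGAN-style."""
    def __init__(self, n_blocks=8, channels=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 9, padding=4), nn.PReLU())
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(channels, 3, 9, padding=4),
        )
    def forward(self, lr):
        feat = self.head(lr)
        return self.tail(feat + self.body(feat))     # global skip before upsampling

class Discriminator(nn.Module):
    """Convolutional real/fake classifier; outputs one logit per image."""
    def __init__(self):
        super().__init__()
        def block(ic, oc, stride):
            return [nn.Conv2d(ic, oc, 3, stride=stride, padding=1), nn.LeakyReLU(0.2)]
        self.features = nn.Sequential(
            *block(3, 64, 1), *block(64, 64, 2),
            *block(64, 128, 1), *block(128, 128, 2),
            *block(128, 256, 1), *block(256, 256, 2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(256, 1)
    def forward(self, img):
        return self.classifier(self.features(img).flatten(1))
```

During training, the discriminator's logits would feed the adversarial term of the generator loss sketched earlier in Section 3.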

Figures 4 and 5 illustrate a comparison between a low-resolution image and a high-resolution output generated by the SRGAN model for coffee beans. The low-resolution image shows a cluster of coffee beans with relatively blurred details, making it harder to identify subtle textures and defects. In contrast, the SRGAN-enhanced high-resolution image reveals finer details, such as the clear texture of the beans' surface and cracks, significantly improving the visual quality. This comparison highlights the effectiveness of SRGAN in enhancing image resolution, which is crucial for detecting defects and ensuring better quality control.

4. RESULTS AND DISCUSSION

The SR-GAN model was implemented on a high-performance computing system with NVIDIA GPUs, such as the Tesla V100, to handle the intensive computations required for deep learning. The training process leveraged GPU parallel processing to efficiently manage the large-scale coffee bean dataset. Popular deep learning frameworks such as TensorFlow or PyTorch were used for building, training, and evaluating the model, while the CUDA and cuDNN libraries optimized GPU acceleration. The training environment ran on macOS, ensuring stability, and Python was used for coding flexibility. Hyperparameter tuning and model optimization were supported by specialized libraries, while Matplotlib and NumPy facilitated result visualization and numerical analysis. This setup ensured efficient training, fine-tuning, and performance evaluation of the model.

Table I presents the results of models trained on low-resolution images for the different types of beans (black bean, brown bean, broken bean, and high-quality bean) across various epochs (10, 15, 20, 25, and 30), with metrics including Precision, Recall, and F1 Score. The model's performance improves as the number of epochs increases, reaching the highest test accuracy of 82.42% at 30 epochs. For the black bean, the best performance was achieved at 30 epochs with a Precision of 60.31, Recall of 61.15, and F1 Score of 60.73, while the brown and broken beans also showed peak performance at 30 epochs. The high-quality bean consistently displayed the best results, with a Precision of 91.23, Recall of 92.25, and F1 Score of 92.01 at 30 epochs. On average, the model improves across all bean types as the epochs increase, with the high-quality bean showing the best overall performance throughout the training process.

Figure 6 illustrates the performance of models trained on low-resolution images of black beans across different epochs (10, 15, 20, 25, 30, and an average). It presents four key metrics: Accuracy (purple), Precision (green), Recall (blue), and F1 Score (yellow). As the number of epochs increases, the model's accuracy shows consistent improvement, peaking at 30 epochs. Similarly, Precision, Recall, and F1 Score exhibit steady growth, particularly between 15 and 30 epochs, with a notable rise at 30 epochs. Precision increases the most from earlier epochs to later ones, while Recall shows a significant jump between 20 and 25 epochs. The average values reflect consistent performance across all metrics, with the model delivering its best results at higher epochs. This figure emphasizes the impact of additional training epochs on enhancing model accuracy and performance for black bean classification.

Figure 6. Results for models trained on low-resolution images of black beans

Figure 7. Results for models trained on low-resolution images of brown beans
Table I. Results for models trained on low-resolution images (P / R / F1 = Precision / Recall / F1 Score)

Epochs | Test Acc | Black bean (P / R / F1) | Brown bean (P / R / F1) | Broken bean (P / R / F1) | High-quality bean (P / R / F1)
10 | 69.22 | 30.14 / 40.02 / 35.08 | 45.14 / 30.32 / 37.73 | 55.86 / 48.96 / 52.41 | 82.39 / 79.66 / 81.02
15 | 72.29 | 49.19 / 39.11 / 44.15 | 50.19 / 39.89 / 45.04 | 59.54 / 52.78 / 56.16 | 84.36 / 82.57 / 83.46
20 | 75.33 | 50.23 / 42.18 / 46.20 | 54.23 / 44.67 / 49.45 | 61.02 / 60.89 / 60.95 | 86.24 / 85.14 / 85.69
25 | 77.38 | 59.28 / 57.23 / 58.25 | 62.28 / 54.63 / 58.45 | 64.99 / 66.61 / 65.80 | 89.23 / 88.39 / 88.81
30 | 82.42 | 60.31 / 61.15 / 60.73 | 66.82 / 67.15 / 66.98 | 71.13 / 69.99 / 70.56 | 91.23 / 92.25 / 92.01
Avg | 75.32 | 49.83 / 47.93 / 48.88 | 55.73 / 47.33 / 51.53 | 62.50 / 59.84 / 61.17 | 86.69 / 85.60 / 86.19
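Per-class Precision, Recall, and F1 values of the kind reported in Table I (and later in Table II) can be obtained directly from the classifier predictions; a brief scikit-learn sketch, with the label names assumed, is shown below.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

LABELS = ["black", "brown", "broken", "high-quality"]   # assumed label names

def summarize(y_true, y_pred):
    """Overall test accuracy plus per-class Precision/Recall/F1, as tabulated in Tables I and II."""
    acc = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=LABELS, zero_division=0
    )
    return acc, dict(zip(LABELS, zip(precision, recall, f1)))
```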

Figures 7 and 8 display the performance metrics of models trained on low-resolution images of brown beans and broken beans across various epochs (10, 15, 20, 25, 30, and the average). They show four key metrics: Accuracy (purple), Precision (green), Recall (blue), and F1 Score (yellow). Accuracy starts around 60% at 10 epochs, improves steadily, and peaks at 30 epochs, similar to Precision and F1 Score. Precision shows a gradual improvement with the highest value at 30 epochs, while Recall also improves, particularly between 20 and 30 epochs, before showing a slight dip. The F1 Score follows a similar trend to Precision, increasing as more epochs are added. Overall, the average metrics indicate consistent model performance, with 30 epochs delivering the best results across all parameters for the brown bean classification task, highlighting the model's increased accuracy and reliability with more training.

Figure 8. Results for models trained on low-resolution images of broken beans

Figure 9 showcases the performance results of models trained on low-resolution images of high-quality beans across different epochs (10, 15, 20, 25, 30, and the average). Four key metrics are depicted: Accuracy (purple), Precision (green), Recall (blue), and F1 Score (yellow). Across all epochs, the model achieves consistently high performance, with Accuracy, Precision, Recall, and F1 Score all remaining above 75%. The metrics stay fairly stable, with minimal fluctuations, as the number of epochs increases. At 30 epochs, Accuracy and Recall peak near 90%, while Precision and F1 Score also show strong performance, reaching similarly high values. The average values across all metrics indicate that the model is highly effective in classifying high-quality beans, achieving robust and reliable results with only slight variations between the metrics, highlighting its stability and precision over different epochs.

Figure 9. Results for models trained on low-resolution images of high-quality beans

Table II presents the results of models trained on high-resolution images for the different types of beans (black bean, brown bean, broken bean, and high-quality bean) across various epochs (10, 15, 20, 25, 30). The metrics used to evaluate the model's performance include Precision, Recall, and F1 Score for each bean type. As the number of epochs increases, test accuracy improves, peaking at 97.23% at 30 epochs. The model performs particularly well on high-quality beans, reaching a Precision of 95.89, Recall of 96.99, and an F1 Score of 96.44 at 30 epochs, while the black bean also shows significant improvement, with an F1 Score of 75.8 at 30 epochs. The brown and broken beans similarly show their highest performance at 30 epochs, with F1 Scores of 78.64 and 79.32, respectively. On average, the model performs best for high-quality beans, with consistently high metrics across all epochs, and shows overall improvements as the number of epochs increases, demonstrating the effectiveness of training on high-resolution images for bean classification tasks.

Figure 10 illustrates the performance of models trained on high-resolution images of black beans across different epochs (10, 15, 20, 25, 30, and an average). Four key metrics are depicted: Accuracy (black), Precision (orange), Recall (gray), and F1 Score (blue). As the number of epochs increases, accuracy shows a steady improvement, peaking at 30 epochs. Precision, Recall, and F1 Score also increase with additional epochs, with a significant jump in all metrics around 20 epochs. At 30 epochs, the model reaches its highest performance across all metrics, with accuracy nearing 100%. The average values indicate consistent overall performance, with Precision and F1 Score closely aligned throughout. This figure highlights the importance of longer training (30 epochs) in achieving optimal results for black bean classification using high-resolution images.
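Table II (below) reports the same classifiers trained on high-resolution inputs, which in the proposed pipeline are presumably the SR-GAN-enhanced versions of the low-resolution crops; that preprocessing step might look like the following sketch (the function and variable names are hypothetical).

```python
import torch

@torch.no_grad()
def enhance_dataset(generator, low_res_batches, device="cuda"):
    """Run a trained SR-GAN generator over low-resolution bean crops before classification."""
    generator.eval().to(device)
    enhanced = []
    for lr in low_res_batches:             # each lr: float tensor (B, 3, H, W) scaled to [0, 1]
        sr = generator(lr.to(device))      # super-resolved output, e.g. (B, 3, 4H, 4W)
        enhanced.append(sr.clamp(0, 1).cpu())
    return enhanced
```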



Table II. Results for models trained on high-resolution images (P / R / F1 = Precision / Recall / F1 Score)

Epochs | Test Acc | Black bean (P / R / F1) | Brown bean (P / R / F1) | Broken bean (P / R / F1) | High-quality bean (P / R / F1)
10 | 72.78 | 62.79 / 67.76 / 65.275 | 65.12 / 72.78 / 68.95 | 69.99 / 74.12 / 72.05 | 86.56 / 89.45 / 88.01
15 | 77.29 | 71.34 / 74.22 / 72.78 | 75.67 / 76.15 / 75.91 | 72.43 / 75.02 / 73.72 | 87.89 / 89.88 / 88.88
20 | 85.89 | 79.89 / 80.23 / 80.06 | 81.56 / 83.66 / 82.61 | 83.77 / 80.98 / 82.37 | 89.67 / 91.67 / 90.67
25 | 94.36 | 84.75 / 85.42 / 85.08 | 86.45 / 87.77 / 87.11 | 88.32 / 89.95 / 89.13 | 94.78 / 95.89 / 95.33
30 | 97.23 | 74.69 / 76.90 / 75.80 | 77.20 / 80.09 / 78.64 | 78.62 / 80.01 / 79.32 | 95.89 / 96.99 / 96.44
Avg | 85.51 | 62.79 / 67.76 / 65.27 | 65.12 / 72.78 / 68.95 | 69.99 / 74.12 / 72.05 | 90.95 / 92.77 / 91.86

Figure 10. Results for models trained on high-resolution images of black beans

Figure 11. Results for models trained on high-resolution images of brown beans

Figure 11 shows the results for models trained on brown beans, where accuracy and precision steadily improve with more epochs, while recall and F1 score also increase consistently. Similarly, Figure 12 displays the performance of models trained on broken beans, with all metrics showing gradual improvements as training progresses. Overall, both graphs highlight the models' enhanced performance with increased training, demonstrating balanced and improved accuracy across the metrics.

Figure 12. Results for models trained on high-resolution images of broken beans

Figure 13 illustrates the performance of models trained on high-resolution images of high-quality beans across different epochs, measuring metrics such as Accuracy, Precision, Recall, and F1 Score. The model demonstrates consistently high performance across all metrics, with values remaining relatively stable as the number of training epochs increases. Accuracy, Precision, Recall, and F1 Score all hover around high values, indicating that the model effectively identifies high-quality beans with minimal variation in performance. The average performance across all epochs remains strong, showing that the model performs well in identifying high-quality coffee beans throughout the training process.

Figure 13. Results for models trained on high-resolution images of high-quality beans



5. CONCLUSION AND FUTURE SCOPE

This paper presents an innovative approach to image super-resolution using the SR-GAN model, which significantly improves the resolution and quality of coffee bean images for accurate defect detection. By combining the strengths of Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs), the model is able to capture subtle defects such as cracks, discoloration, and surface irregularities that are often missed by traditional methods. The proposed SR-GAN and CNN-based system demonstrates superior performance in resolution enhancement and defect classification, offering a robust and scalable solution for enhancing quality control in coffee production. Its success in this domain suggests that the approach can be extended to other industries requiring high-resolution image analysis for quality assurance. In the future, there is potential to further optimize the SR-GAN model by refining its architecture and tuning hyperparameters to enhance its accuracy and computational efficiency. Additionally, the approach could be tested on other agricultural products and industrial processes where detecting minute defects is critical. Incorporating other advanced machine learning techniques, such as transfer learning or multi-scale analysis, could further boost the model's performance. Finally, expanding the system to handle real-time defect detection could open doors to fully automated quality control systems in various industries.

REFERENCES
1. Park, S. C., Park, M. K., & Kang, M. G. (2003). Super-resolution image reconstruction: a technical overview. IEEE Signal Processing Magazine, 20(3), 21-36. https://doi.org/10.1109/MSP.2003.1203207
2. Glasner, D., Bagon, S., & Irani, M. (2009). Super-resolution from a single image. In 2009 IEEE 12th International Conference on Computer Vision (pp. 349-356). IEEE. https://doi.org/10.1109/ICCV.2009.5459271
3. Dong, C., Loy, C. C., He, K., & Tang, X. (2016). Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2), 295-307. https://doi.org/10.1109/TPAMI.2015.2439281
4. Ledig, C., Theis, L., Huszár, F., Caballero, J., Aitken, A. P., Tejani, A., ... & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4681-4690). https://doi.org/10.1109/CVPR.2017.19
5. Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., ... & Change Loy, C. (2018). ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops. https://doi.org/10.1007/978-3-030-11021-5_11
6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680). https://doi.org/10.48550/arXiv.1406.2661
7. Zhang, Y., Tian, Y., Kong, Y., Zhong, B., & Fu, Y. (2018). Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2472-2481). https://doi.org/10.1109/CVPR.2018.00262
8. Blau, Y., & Michaeli, T. (2018). The perception-distortion trade-off. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6228-6237). https://doi.org/10.1109/CVPR.2018.00653
9. Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612. https://doi.org/10.1109/TIP.2003.819861
10. Zhao, H., Gallo, O., Frosio, I., & Kautz, J. (2016). Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging, 3(1), 47-57. https://doi.org/10.1109/TCI.2016.2613613
11. Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., ... & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4681-4690).
12. Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., ... & Tang, X. (2018). ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops.
13. Zhang, H., Goodfellow, I., Metaxas, D., & Odena, A. (2019). Self-attention generative adversarial networks. In International Conference on Machine Learning (pp. 7354-7363).
14. Lin, G., Wu, Q., Chen, L., Qiu, L., Wang, X., Liu, T., & Chen, X. (2018). Deep unsupervised learning for image super-resolution with generative adversarial network. Signal Processing: Image Communication, 68, 88-100.
15. Le-Tien, T., Nguyen-Thanh, T., Xuan, H. P., Nguyen-Truong, G., & Ta-Quoc, V. (2020). Deep learning-based approach implemented to image super-resolution. J.
16. Chauhan, K., Patel, S. N., Kumhar, M., Bhatia, J., Tanwar, S., Davidson, I. E., ... & Sharma, R. (2023). Deep learning-based single-image super-resolution: A comprehensive review. IEEE Access, 11, 21811-21830.
17. Parekh, D., Maiti, A., & Jain, V. (2022, April). Image super-resolution using GAN: A study. In 2022 6th International Conference on Trends in Electronics and Informatics (ICOEI) (pp. 1539-1549). IEEE.
18. Chudasama, V., & Upla, K. (2020). ISRGAN: Improved super-resolution using generative adversarial networks. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Volume 1 (pp. 109-127). Springer International Publishing.
19. Franca, A. S., & Oliveira, L. S. (2008). Chemistry of defective coffee beans. Food Chemistry Research Developments, 4(1), 105-138.
20. Kim, J., & Choe, Y. (2023). Document image restore via SPADE-based super resolution network. Electronics, 12(3), 748.
