Analysis of Varroa Mite Colony Infestation Level Using New Open Software Based on Deep Learning Techniques
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Dataset Description
3.2. Metrics
3.3. Workflow for Varroa Mite Detection
- Divide each input image (for both training and inference) into smaller sections, or tiles. This mitigates the noisy-feature and information-loss problems explained previously. Each tile is processed independently by the neural network.
- Train a deep learning model based on the Faster R-CNN architecture to detect Varroa mites.
- Finally, after inference, automatically refine the predicted bounding boxes. Because tiles are also used during inference, problems can arise at the unions of two (or more) tiles when the per-tile bounding boxes are combined into the final output for the entire image. For instance, a Varroa mite located exactly at the union of several tiles may be detected two or more times (once in each tile; see Figure 4). This step checks, for each tile, whether a bounding box touches any of its edges; if so, a new tile centered on that bounding box is built and the neural network performs a new prediction. The new prediction is compared with the initial one and, if the area of the intersection of the two bounding boxes is greater than or equal to half the area of the initial prediction, the new prediction replaces the initial one. When several similar predictions are obtained from different tiles, only the first one is added. For instance, in Figure 5, two new crops (in green) are obtained for the two pieces of Varroa mite detected in the initial tiles; the resulting bounding boxes (in blue) are very similar, so only the first is included in the final output of the prediction.
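The geometric rules in the last step above can be sketched as follows. This is a minimal illustration, not the authors' released code: function names are hypothetical, boxes are assumed to be `(x1, y1, x2, y2)` tuples in image coordinates, and the detector call on the re-centered tile is abstracted away.

```python
def box_area(box):
    # A box is (x1, y1, x2, y2); degenerate boxes have zero area.
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def intersection_area(a, b):
    # Area of the overlap rectangle of two boxes (0 if they do not intersect).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def refine(initial_box, new_box):
    # Adopt the prediction from the re-centered tile only if it covers
    # at least half of the initial prediction's area.
    if intersection_area(initial_box, new_box) >= 0.5 * box_area(initial_box):
        return new_box
    return initial_box

def keep_first_of_similar(boxes, min_overlap=0.5):
    # When several tiles yield near-identical refined boxes, only the
    # first one encountered is added to the final output.
    kept = []
    for box in boxes:
        if all(intersection_area(box, k) < min_overlap * box_area(box)
               for k in kept):
            kept.append(box)
    return kept
```

The 0.5 overlap threshold in `refine` comes from the rule described above; the similarity criterion in `keep_first_of_similar` is an assumption for illustration.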
3.4. Neural Network Training
3.5. Hardware and Software
4. Results and Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Le Conte, Y.; Ellis, M.; Ritter, W. Varroa mites and honey bee health: Can Varroa explain part of the colony losses? Apidologie 2010, 41, 353–363.
- Genersch, E.; von der Ohe, W.; Kaatz, H.; Schroeder, A.; Otten, C.; Büchler, R.; Berg, S.; Ritter, W.; Mühlen, W.; Gisder, S.; et al. The German bee monitoring project: A long term study to understand periodically high winter losses of honey bee colonies. Apidologie 2010, 41, 332–352.
- Ramsey, S.D.; Ochoa, R.; Bauchan, G.; Gulbronson, C.; Mowery, J.D.; Cohen, A.; Lim, D.; Joklik, J.; Cicero, J.M.; Ellis, J.D.; et al. Varroa destructor feeds primarily on honey bee fat body tissue and not hemolymph. Proc. Natl. Acad. Sci. USA 2019, 116, 1792–1801.
- DeGrandi-Hoffman, G.; Curry, R. A mathematical model of Varroa mite (Varroa destructor Anderson and Trueman) and honeybee (Apis mellifera L.) population dynamics. Int. J. Acarol. 2004, 30, 259–274.
- Murilhas, A.M. Varroa destructor infestation impact on Apis mellifera carnica capped worker brood production, bee population and honey storage in a Mediterranean climate. Apidologie 2002, 33, 271–281.
- Posada-Florez, F.; Ryabov, E.V.; Heerman, M.C.; Chen, Y.; Evans, J.D.; Sonenshine, D.E.; Cook, S.C. Varroa destructor mites vector and transmit pathogenic honey bee viruses acquired from an artificial diet. PLoS ONE 2020, 15, e0242688.
- Vilarem, C.; Piou, V.; Vogelweith, F.; Vétillard, A. Varroa destructor from the Laboratory to the Field: Control, Biocontrol and IPM Perspectives—A Review. Insects 2021, 12, 800.
- Dainat, B.; Evans, J.D.; Chen, Y.P.; Gauthier, L.; Neumann, P. Dead or Alive: Deformed Wing Virus and Varroa destructor Reduce the Life Span of Winter Honeybees. Appl. Environ. Microbiol. 2012, 78, 981–987.
- Food and Agriculture Organization of the United Nations. Why Bees Matter. The Importance of Bees and Other Pollinators for Food and Agriculture; Technical Report; 2018. Available online: https://www.gov.si/assets/ministrstva/MKGP/PROJEKTI/SDC_WBD/TOOLKIT/General-Information/FAO_brosura_ENG_print.pdf (accessed on 18 March 2024).
- Rosenkranz, P.; Aumeier, P.; Ziegelmann, B. Biology and control of Varroa destructor. J. Invertebr. Pathol. 2010, 103, S96–S119.
- Dietemann, V.; Pflugfelder, J.; Anderson, D.; Charrière, J.D.; Chejanovsky, N.; Dainat, B.; de Miranda, J.; Delaplane, K.; Dillier, F.X.; Fuch, S.; et al. Varroa destructor: Research avenues towards sustainable control. J. Apic. Res. 2012, 51, 125–132.
- Oberreiter, H.; Brodschneider, R. Austrian COLOSS Survey of Honey Bee Colony Winter Losses 2018/19 and Analysis of Hive Management Practices. Diversity 2020, 12, 99.
- Mancuso, T.; Croce, L.; Vercelli, M. Total Brood Removal and Other Biotechniques for the Sustainable Control of Varroa Mites in Honey Bee Colonies: Economic Impact in Beekeeping Farm Case Studies in Northwestern Italy. Sustainability 2020, 12, 2302.
- Büchler, R.; Uzunov, A.; Kovačić, M.; Prešern, J.; Pietropaoli, M.; Hatjina, F.; Pavlov, B.; Charistos, L.; Formato, G.; Galarza, E.; et al. Summer brood interruption as integrated management strategy for effective Varroa control in Europe. J. Apic. Res. 2020, 59, 764–773.
- Ostiguy, N.; Sammataro, D. A simplified technique for counting Varroa jacobsoni Oud. on sticky boards. Apidologie 2000, 31, 707–716.
- Calderone, N.W.; Lin, S. Rapid determination of the numbers of Varroa destructor, a parasitic mite of the honey bee, Apis mellifera, on sticky-board collection devices. Apidologie 2003, 34, 11–17.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 10781–10790.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–15 December 2015; pp. 1440–1448.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28.
- Divasón, J.; Martinez-de Pison, F.J.; Romero, A.; Santolaria, P.; Yániz, J.L. Varroa Mite Detection Using Deep Learning Techniques. In Hybrid Artificial Intelligent Systems; García Bringas, P., Pérez García, H., Martínez de Pisón, F.J., Martínez Álvarez, F., Troncoso Lora, A., Herrero, Á., Calvo Rolle, J.L., Quintián, H., Corchado, E., Eds.; Springer: Cham, Switzerland, 2023; pp. 326–337.
- McAllister, E.; Payo, A.; Novellino, A.; Dolphin, T.; Medina-Lopez, E. Multispectral satellite imagery and machine learning for the extraction of shoreline indicators. Coast. Eng. 2022, 174, 104102.
- Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-sign detection and classification in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2110–2118.
- Gupta, H.; Verma, O.P. Monitoring and surveillance of urban road traffic using low altitude drone images: A deep learning approach. Multimed. Tools Appl. 2022, 81, 19683–19703.
- Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and tracking meet drones challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 7380–7399.
- Huang, H.; Tang, X.; Wen, F.; Jin, X. Small object detection method with shallow feature fusion network for chip surface defect detection. Sci. Rep. 2022, 12, 3914.
- Yu, X.; Gong, Y.; Jiang, N.; Ye, Q.; Han, Z. Scale match for tiny person detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 14–19 June 2020; pp. 1257–1265.
- Cheng, G.; Yuan, X.; Yao, X.; Yan, K.; Zeng, Q.; Han, J. Towards large-scale small object detection: Survey and benchmarks. arXiv 2022, arXiv:2207.14096.
- Chen, G.; Wang, H.; Chen, K.; Li, Z.; Song, Z.; Liu, Y.; Chen, W.; Knoll, A. A Survey of the Four Pillars for Small Object Detection: Multiscale Representation, Contextual Information, Super-Resolution, and Region Proposal. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 936–953.
- Ozge Unel, F.; Ozkalayci, B.O.; Cigla, C. The power of tiling for small object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 582–591.
- Kulyukin, V.; Mukherjee, S. On video analysis of omnidirectional bee traffic: Counting bee motions with motion detection and image classification. Appl. Sci. 2019, 9, 3743.
- Arias-Calluari, K.; Colin, T.; Latty, T.; Myerscough, M.; Altmann, E.G. Modelling daily weight variation in honey bee hives. PLoS Comput. Biol. 2023, 19, e1010880.
- Alves, T.S.; Pinto, M.A.; Ventura, P.; Neves, C.J.; Biron, D.G.; Junior, A.C.; De Paula Filho, P.L.; Rodrigues, P.J. Automatic detection and classification of honey bee comb cells using deep learning. Comput. Electron. Agric. 2020, 170, 105244.
- Rodriguez, I.F.; Megret, R.; Acuna, E.; Agosto-Rivera, J.L.; Giray, T. Recognition of pollen-bearing bees from video using convolutional neural network. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 314–322.
- Ngo, T.N.; Rustia, D.J.A.; Yang, E.C.; Lin, T.T. Automated monitoring and analyses of honey bee pollen foraging behavior using a deep learning-based imaging system. Comput. Electron. Agric. 2021, 187, 106239.
- Bilik, S.; Bostik, O.; Kratochvila, L.; Ligocki, A.; Poncak, M.; Zemcik, T.; Richter, M.; Janakova, I.; Honec, P.; Horak, K. Machine Learning and Computer Vision Techniques in Bee Monitoring Applications. arXiv 2022, arXiv:2208.00085.
- Bjerge, K.; Frigaard, C.E.; Mikkelsen, P.H.; Nielsen, T.H.; Misbih, M.; Kryger, P. A computer vision system to monitor the infestation level of Varroa destructor in a honeybee colony. Comput. Electron. Agric. 2019, 164, 104898.
- Liu, M.; Cui, M.; Xu, B.; Liu, Z.; Li, Z.; Chu, Z.; Zhang, X.; Liu, G.; Xu, X.; Yan, Y. Detection of Varroa destructor Infestation of Honeybees Based on Segmentation and Object Detection Convolutional Neural Networks. AgriEngineering 2023, 5, 1644–1662.
- Voudiotis, G.; Moraiti, A.; Kontogiannis, S. Deep Learning Beehive Monitoring System for Early Detection of the Varroa Mite. Signals 2022, 3, 506–523.
- Bilik, S.; Kratochvila, L.; Ligocki, A.; Bostik, O.; Zemcik, T.; Hybl, M.; Horak, K.; Zalud, L. Visual Diagnosis of the Varroa Destructor Parasitic Mite in Honeybees Using Object Detector Techniques. Sensors 2021, 21, 2764.
- Stefan, S.; Kampel, M. Varroa Dataset. 2020. Available online: https://zenodo.org/record/4085044 (accessed on 18 March 2024).
- Bugnon, A.; Viñals, R.; Abbet, C.; Chevassus, G.; Bohnenblust, M.; Rad, M.S.; Rieder, S.; Droz, B.; Charrière, J.D.; Thiran, J.P. Apiculture—Une application pour lutter contre le varroa. Recherche Agronomique Suisse 2021, 12, 102–108.
- Picek, L.; Novozamsky, A.; Frydrychova, R.C.; Zitova, B.; Mach, P. Monitoring of Varroa Infestation Rate in Beehives: A Simple AI Approach. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3341–3345.
- Padilla, R.; Netto, S.L.; da Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 1–3 July 2020; pp. 237–242.
- Padilla, R.; Passos, W.L.; Dias, T.L.B.; Netto, S.L.; da Silva, E.A.B. A Comparative Analysis of Object Detection Metrics with a Companion Open-Source Toolkit. Electronics 2021, 10, 279.
- Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 18–23 June 2018; pp. 8183–8192.
- Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8878–8887.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
- Li, Y.; Mao, H.; Girshick, R.; He, K. Exploring Plain Vision Transformer Backbones for Object Detection. In Proceedings of the Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Proceedings, Part IX. pp. 280–296.
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500.
- Radosavovic, I.; Kosaraju, R.P.; Girshick, R.; He, K.; Dollár, P. Designing Network Design Spaces. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10425–10433.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Li, Y.; Wei, C.; Ma, T. Towards explaining the regularization effect of initial large learning rate in training neural networks. Adv. Neural Inf. Process. Syst. 2019, 32.
- Gong, Y.; Yu, X.; Ding, Y.; Peng, X.; Zhao, J.; Han, Z. Effective fusion factor in FPN for tiny object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 1160–1168.
- Deng, C.; Wang, M.; Liu, L.; Liu, Y.; Jiang, Y. Extended feature pyramid network for small object detection. IEEE Trans. Multimed. 2021, 24, 1968–1979.
- Li, H.; Li, J.; Guan, X.; Liang, B.; Lai, Y.; Luo, X. Research on overfitting of deep learning. In Proceedings of the 2019 15th International Conference on Computational Intelligence and Security (CIS), Macao, China, 13–16 December 2019; pp. 78–81.
- Reina, G.A.; Panchumarthy, R.; Thakur, S.P.; Bastidas, A.; Bakas, S. Systematic evaluation of image tiling adverse effects on deep learning semantic segmentation. Front. Neurosci. 2020, 14, 65.
- Bates, K.; Le, K.N.; Lu, H. Deep learning for robust and flexible tracking in behavioral studies for C. elegans. PLoS Comput. Biol. 2022, 18, e1009942.
- Geldenhuys, D.S.; Josias, S.; Brink, W.; Makhubele, M.; Hui, C.; Landi, P.; Bingham, J.; Hargrove, J.; Hazelbag, M.C. Deep learning approaches to landmark detection in tsetse wing images. PLoS Comput. Biol. 2023, 19, e1011194.
- Zoph, B.; Cubuk, E.D.; Ghiasi, G.; Lin, T.Y.; Shlens, J.; Le, Q.V. Learning data augmentation strategies for object detection. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXVII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 566–583.
- Kisantal, M.; Wojna, Z.; Murawski, J.; Naruniec, J.; Cho, K. Augmentation for small object detection. arXiv 2019, arXiv:1902.07296.
- Chen, C.; Zhang, Y.; Lv, Q.; Wei, S.; Wang, X.; Sun, X.; Dong, J. RRNet: A hybrid detector for object detection in drone-captured images. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019; pp. 100–108.
Backbone | Confidence Threshold | mAP | mAR | Epochs
---|---|---|---|---
resnet18 | 0.90 | 0.662 | 0.714 | 210 |
resnet18 | 0.50 | 0.719 | 0.814 | 180 |
resnet34 | 0.90 | 0.727 | 0.788 | 166 |
resnet34 | 0.50 | 0.736 | 0.810 | 180 |
resnet50 | 0.90 | 0.657 | 0.699 | 294 |
resnet50 | 0.50 | 0.693 | 0.799 | 154 |
resnet101 | 0.90 | 0.710 | 0.803 | 505 |
resnet101 | 0.50 | 0.663 | 0.807 | 322 |
resnet152 | 0.90 | 0.718 | 0.784 | 425 |
resnet152 | 0.50 | 0.725 | 0.822 | 431 |
vitdet | 0.90 | 0.579 | 0.747 | 204 |
vitdet | 0.50 | 0.579 | 0.762 | 226 |
efficientnetb0 | 0.90 | 0.692 | 0.777 | 496 |
efficientnetb0 | 0.50 | 0.735 | 0.836 | 576 |
efficientnetb1 | 0.90 | 0.532 | 0.658 | 443 |
efficientnetb1 | 0.50 | 0.697 | 0.792 | 432 |
efficientnetb2 | 0.90 | 0.623 | 0.691 | 382 |
efficientnetb2 | 0.50 | 0.731 | 0.844 | 364 |
efficientnetb3 | 0.90 | 0.551 | 0.587 | 389 |
efficientnetb3 | 0.50 | 0.744 | 0.844 | 500 |
efficientnetb4 | 0.90 | 0.548 | 0.654 | 437 |
efficientnetb4 | 0.50 | 0.647 | 0.799 | 481 |
efficientnetb5 | 0.90 | 0.655 | 0.773 | 392 |
efficientnetb5 | 0.50 | 0.453 | 0.498 | 398 |
efficientnetb6 | 0.90 | 0.476 | 0.517 | 408 |
efficientnetb6 | 0.50 | 0.708 | 0.799 | 412 |
efficientnetb7 | 0.90 | 0.486 | 0.520 | 375 |
efficientnetb7 | 0.50 | 0.641 | 0.744 | 365 |
resnext101_32x8d | 0.90 | 0.677 | 0.796 | 437 |
resnext101_32x8d | 0.50 | 0.683 | 0.788 | 163 |
resnext_101_32x8d_fpn | 0.50 | 0.769 | 0.844 | 148 |
resnext_101_32x8d_fpn | 0.90 | 0.774 | 0.833 | 165 |
regnet_y_400mf | 0.90 | 0.633 | 0.714 | 294 |
regnet_y_400mf | 0.50 | 0.703 | 0.784 | 208 |
resnet18_fpn | 0.90 | 0.751 | 0.833 | 162 |
resnet18_fpn | 0.50 | 0.780 | 0.848 | 162 |
resnet34_fpn | 0.90 | 0.711 | 0.807 | 83 |
resnet34_fpn | 0.50 | 0.648 | 0.814 | 97 |
resnet50_fpn | 0.90 | 0.754 | 0.825 | 191 |
resnet50_fpn | 0.50 | 0.777 | 0.844 | 158 |
resnet101_fpn | 0.90 | 0.746 | 0.836 | 180 |
resnet101_fpn | 0.50 | 0.754 | 0.844 | 145 |
resnet152_fpn | 0.90 | 0.776 | 0.848 | 153 |
resnet152_fpn | 0.50 | 0.779 | 0.851 | 201 |
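The two confidence thresholds compared in the table above amount to filtering the detector's raw output by score before computing mAP and mAR. A minimal sketch of that filtering step (the function name and data layout are hypothetical, not part of the described software):

```python
def filter_by_confidence(boxes, scores, threshold=0.5):
    # Keep only the detections whose confidence score reaches the threshold.
    # boxes: list of (x1, y1, x2, y2); scores: parallel list of floats.
    return [box for box, score in zip(boxes, scores) if score >= threshold]
```

A lower threshold keeps more detections, which is consistent with the generally higher mAR reported for 0.50 than for 0.90 in the table.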
Backbone | Confidence Threshold | Refinement Step | deblurGAN | mAP | mAR |
---|---|---|---|---|---
resnet18_fpn | 0.50 | No | No | 0.780 | 0.847 |
resnet18_fpn | 0.50 | Yes | No | 0.849 | 0.948 |
resnet18_fpn | 0.50 | No | Yes | 0.775 | 0.851 |
resnet18_fpn | 0.50 | Yes | Yes | 0.883 | 0.955 |
resnet50_fpn | 0.50 | No | No | 0.777 | 0.844 |
resnet50_fpn | 0.50 | Yes | No | 0.858 | 0.963 |
resnet50_fpn | 0.50 | No | Yes | 0.780 | 0.844 |
resnet50_fpn | 0.50 | Yes | Yes | 0.907 | 0.967 |
resnet152_fpn | 0.50 | No | No | 0.779 | 0.851 |
resnet152_fpn | 0.50 | Yes | No | 0.894 | 0.963 |
resnet152_fpn | 0.50 | No | Yes | 0.709 | 0.807 |
resnet152_fpn | 0.50 | Yes | Yes | 0.851 | 0.941 |
Divasón, J.; Romero, A.; Martinez-de-Pison, F.J.; Casalongue, M.; Silvestre, M.A.; Santolaria, P.; Yániz, J.L. Analysis of Varroa Mite Colony Infestation Level Using New Open Software Based on Deep Learning Techniques. Sensors 2024, 24, 3828. https://doi.org/10.3390/s24123828