Search Results (33)

Search Parameters:
Keywords = Detectron2

15 pages, 5298 KiB  
Article
Development and Validation of an Artificial Intelligence Model for Detecting Rib Fractures on Chest Radiographs
by Kaehong Lee, Sunhee Lee, Ji Soo Kwak, Heechan Park, Hoonji Oh and Jae Chul Koh
J. Clin. Med. 2024, 13(13), 3850; https://doi.org/10.3390/jcm13133850 - 30 Jun 2024
Viewed by 869
Abstract
Background: Chest radiography is the standard method for detecting rib fractures. Our study aims to develop an artificial intelligence (AI) model that, with only a relatively small amount of training data, can identify rib fractures on chest radiographs and accurately mark their precise locations, thereby achieving a diagnostic accuracy comparable to that of medical professionals. Methods: For this retrospective study, we developed an AI model using 540 chest radiographs (270 normal and 270 with rib fractures) labeled for use with Detectron2, which incorporates a Faster Region-based Convolutional Neural Network (Faster R-CNN) enhanced with a Feature Pyramid Network (FPN). The model’s ability to classify radiographs and detect rib fractures was assessed. Furthermore, we compared the model’s performance to that of 12 physicians, including six board-certified anesthesiologists and six residents, through an observer performance test. Results: For radiographic classification, the AI model’s sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were 0.87, 0.83, and 0.89, respectively. For rib fracture detection, the sensitivity, false-positive rate, and jackknife alternative free-response receiver operating characteristic (JAFROC) figure of merit (FOM) were 0.62, 0.3, and 0.76, respectively. In the observer performance test, the AI model showed no statistically significant difference from 11 of 12 physicians on classification and 10 of 12 on detection. Conclusions: We developed an AI model trained on a limited dataset that demonstrated rib fracture classification and detection performance comparable to that of an experienced physician.
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
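For orientation, the sketch below shows how a Faster R-CNN + FPN detector of the kind the abstract names is typically configured from Detectron2's model zoo. The config choice, weights source, and single-class assumption are illustrative, not the authors' published settings.

```python
# Minimal Detectron2 configuration sketch for a Faster R-CNN + FPN detector,
# assuming a single "rib fracture" class; not the paper's actual settings.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
# Start from COCO-pretrained weights, a common choice for small datasets.
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # hypothetical: one fracture class
```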

12 pages, 7183 KiB  
Article
RETRACTED: Utilizing Generative Adversarial Networks for Acne Dataset Generation in Dermatology
by Aravinthan Sankar, Kunal Chaturvedi, Al-Akhir Nayan, Mohammad Hesam Hesamian, Ali Braytee and Mukesh Prasad
BioMedInformatics 2024, 4(2), 1059-1070; https://doi.org/10.3390/biomedinformatics4020059 - 9 Apr 2024
Cited by 2 | Viewed by 1451 | Retraction
Abstract
Background: In recent years, computer-aided diagnosis for skin conditions has made significant strides, primarily driven by artificial intelligence (AI) solutions. However, despite this progress, the efficiency of AI-enabled systems remains hindered by the scarcity of high-quality and large-scale datasets, primarily due to privacy concerns. Methods: This research circumvents the privacy issues associated with real-world acne datasets by creating a synthetic dataset of human faces with varying acne severity levels (mild, moderate, and severe) using Generative Adversarial Networks (GANs). Three object detection models—YOLOv5, YOLOv8, and Detectron2—are then used to evaluate the efficacy of the augmented dataset for detecting acne. Results: With StyleGAN-generated data added to training, the models achieved mean average precision (mAP) scores of 73.5% for YOLOv5, 73.6% for YOLOv8, and 37.7% for Detectron2, surpassing the mAP achieved without GANs. Conclusions: This study underscores the effectiveness of GANs in generating synthetic facial acne images and emphasizes the importance of utilizing GANs and convolutional neural network (CNN) models for accurate acne detection.
(This article belongs to the Special Issue Feature Papers in Applied Biomedical Data Science)
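As a rough illustration of how mAP scores like those above are computed for a Detectron2 model, here is a minimal evaluation sketch using Detectron2's COCO evaluator. The dataset name, weights file, and output directory are hypothetical.

```python
# Sketch: COCO-style mAP evaluation of a fine-tuned Detectron2 model.
# "acne_val" and "acne_model.pth" are hypothetical names.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import build_detection_test_loader
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "acne_model.pth"   # hypothetical fine-tuned weights
cfg.DATASETS.TEST = ("acne_val",)      # hypothetical registered dataset

predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("acne_val", output_dir="./eval")
val_loader = build_detection_test_loader(cfg, "acne_val")
print(inference_on_dataset(predictor.model, val_loader, evaluator))  # prints COCO mAP
```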

14 pages, 7617 KiB  
Article
Revolutionizing Cow Welfare Monitoring: A Novel Top-View Perspective with Depth Camera-Based Lameness Classification
by San Chain Tun, Tsubasa Onizuka, Pyke Tin, Masaru Aikawa, Ikuo Kobayashi and Thi Thi Zin
J. Imaging 2024, 10(3), 67; https://doi.org/10.3390/jimaging10030067 - 8 Mar 2024
Viewed by 1404
Abstract
This study advances livestock health management by combining a top-view 3D depth camera with deep learning for accurate cow lameness detection, classification, and precise segmentation, distinguishing it from 2D systems. It underscores the importance of early lameness detection in cattle and focuses on extracting depth data from the cow’s body, with a specific emphasis on the maximum value over the back region. Precise cow detection and tracking are achieved through the Detectron2 framework and Intersection over Union (IoU) techniques. Across a three-day testing period, with observations conducted twice daily and varying cow populations (56 to 64 cows per day), the study consistently achieves an average detection accuracy of 99.94%; tracking accuracy remains at 99.92% over the same period. The research then extracts the cow’s depth region using binary mask images derived from the detection results and the original depth images. Feature extraction generates a feature vector based on maximum height measurements from the cow’s backbone area. This feature vector is used for classification, evaluating three classifiers: Random Forest (RF), K-Nearest Neighbor (KNN), and Decision Tree (DT). The study highlights the potential of top-view depth video cameras for accurate cow lameness detection and classification, with significant implications for livestock health management.
(This article belongs to the Section Computer Vision and Pattern Recognition)
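The Intersection over Union measure mentioned in the abstract has a standard definition; a minimal reference implementation for axis-aligned boxes might look like this (the (x1, y1, x2, y2) box convention is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```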

23 pages, 3522 KiB  
Article
Cherry Tree Crown Extraction Using Machine Learning Based on Images from UAVs
by Vasileios Moysiadis, Ilias Siniosoglou, Georgios Kokkonis, Vasileios Argyriou, Thomas Lagkas, Sotirios K. Goudos and Panagiotis Sarigiannidis
Agriculture 2024, 14(2), 322; https://doi.org/10.3390/agriculture14020322 - 18 Feb 2024
Viewed by 1425
Abstract
Remote sensing is one of the most widely used operations in smart farming. In this research area, UAVs offer full coverage of large cultivation areas in a few minutes and provide orthomosaic images with valuable information from multispectral cameras. For orchards in particular, it is helpful to isolate each tree and then calculate the preferred vegetation indices separately; tree detection and crown extraction is therefore an important research area in the domain of Smart Farming. In this paper, we propose an innovative tree detection method based on machine learning, designed to isolate each individual tree in an orchard. First, we evaluate the effectiveness of the Detectron2 and YOLOv8 object detection algorithms in identifying individual trees and generating corresponding masks. Both algorithms yield satisfactory results in cherry tree detection, with an F1-Score of up to 94.85%. In the second stage, we apply a method based on Otsu thresholding to improve the provided masks and precisely cover the crowns of the detected trees. The proposed method achieves 85.30% IoU, compared with 79.83% for Detectron2 and 75.36% for YOLOv8. Our work uses cherry trees, but the approach is easy to apply to any other tree species. We believe that our approach will be a key factor in enabling health monitoring for each individual tree.
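As a sketch of the kind of Otsu-based mask refinement the abstract describes (the exact pipeline is not published here), one might tighten a detector's coarse mask against an Otsu threshold of the image crop:

```python
# Illustrative Otsu refinement of a detector mask; not the authors' exact method.
import cv2
import numpy as np

def refine_crown_mask(gray_crop, coarse_mask):
    """Tighten a detector mask around a tree crown using Otsu's threshold.
    gray_crop: 8-bit grayscale crop around one detected tree;
    coarse_mask: boolean mask from the detector, same shape."""
    # Otsu picks the threshold that best separates foliage from background.
    _, otsu = cv2.threshold(gray_crop, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Keep only pixels flagged by both the detector and the Otsu threshold.
    return cv2.bitwise_and(otsu, coarse_mask.astype(np.uint8) * 255)
```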

27 pages, 47076 KiB  
Article
Customized Tracking Algorithm for Robust Cattle Detection and Tracking in Occlusion Environments
by Wai Hnin Eaindrar Mg, Pyke Tin, Masaru Aikawa, Ikuo Kobayashi, Yoichiro Horii, Kazuyuki Honkawa and Thi Thi Zin
Sensors 2024, 24(4), 1181; https://doi.org/10.3390/s24041181 - 11 Feb 2024
Viewed by 993
Abstract
Precise calving-time prediction necessitates an automatic and highly accurate cattle tracking system. Cattle tracking can be challenging due to the complexity of the environment and the potential for missed or false detections, and most existing deep-learning tracking algorithms struggle with track-ID switches caused by cattle occlusion. To address these concerns, the proposed research creates an automatic cattle detection and tracking system by leveraging the capabilities of Detectron2 while embedding tailored modifications that make it more effective and efficient for a variety of applications. Additionally, the study conducts a comprehensive comparison of eight distinct deep-learning tracking algorithms, with the objective of identifying the most suitable algorithm for precise and efficient individual cattle tracking. The research focuses on handling occlusion conditions and the track-ID increments caused by missed detections. Through a comparison of various tracking algorithms, we found that Detectron2, coupled with our customized tracking algorithm (CTA), achieves 99% accuracy in detecting and tracking individual cows under occlusion. Our algorithm stands out by successfully overcoming missed detections and occlusion problems, making it highly reliable even during extended periods in a crowded calving pen.
(This article belongs to the Special Issue Machine Learning and Sensors Technology in Agriculture)
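The customized tracking algorithm itself is not spelled out in the abstract; the following greedy association sketch, reusing the iou() helper shown earlier in this list, illustrates the general idea of carrying track IDs across frames:

```python
# Greedy IoU matching of existing tracks to new detections (illustrative only).
# Assumes the iou(box_a, box_b) helper defined in the earlier sketch.
def associate(tracks, detections, iou_thresh=0.5):
    """tracks: dict of track_id -> last box; detections: list of boxes.
    Returns a dict mapping track_id -> index of the matched detection."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_j = iou_thresh, None
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score > best:
                best, best_j = score, j
        if best_j is not None:
            matches[tid] = best_j
            used.add(best_j)
    return matches  # unmatched detections would spawn new track IDs
```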

15 pages, 16300 KiB  
Article
A Novel Technique Based on Machine Learning for Detecting and Segmenting Trees in Very High Resolution Digital Images from Unmanned Aerial Vehicles
by Loukas Kouvaras and George P. Petropoulos
Cited by 2 | Viewed by 2053
Abstract
The present study proposes a technique for automated tree crown detection and segmentation in digital images derived from unmanned aerial vehicles (UAVs) using a machine learning (ML) algorithm named Detectron2. The technique, developed in the Python programming language, receives as input images with object boundary information; after training on such data, it can delineate object boundaries on its own. In the present study, the algorithm was trained for tree crown detection and segmentation. The test bed consisted of UAV imagery of an agricultural field of tangerine trees in the city of Palermo in Sicily, Italy. The algorithm’s output was the accurate boundary of each tree. This output was compared against tree boundary segmentation generated by the Support Vector Machine (SVM) supervised classifier, a very promising object segmentation method, and both were compared with the most accurate yet time-consuming method, direct digitization. For accuracy assessment, the detected area efficiency, skipped area rate, and false area rate were estimated for both methods. The results showed that the Detectron2 algorithm segments the relevant data more efficiently than the SVM model in two of the three indices: relative to digitization, Detectron2 exhibited a detected area efficiency of 0.959 and a skipped area rate of 0.041, against 0.902 and 0.097 for the SVM. On the other hand, the SVM generated a better false area rate, 0.035, compared to 0.056 for Detectron2. With an accurate estimation of the tree boundaries from the Detectron2 algorithm, tree health was assessed last: three vegetation indices were produced (NDVI, GLI, and VARI), all of which indicated average tree health. All in all, the results demonstrated the ability of the technique to detect and segment trees from UAV imagery.
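The three vegetation indices named in the abstract have standard definitions; a small NumPy sketch, assuming per-band reflectance arrays scaled to [0, 1]:

```python
import numpy as np

def vegetation_indices(red, green, blue, nir):
    """NDVI, GLI, and VARI from per-band float arrays of equal shape."""
    eps = 1e-9  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)                       # needs NIR band
    gli = (2 * green - red - blue) / (2 * green + red + blue + eps)
    vari = (green - red) / (green + red - blue + eps)            # RGB-only index
    return ndvi, gli, vari
```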

19 pages, 7710 KiB  
Article
Application of YOLOv8 and Detectron2 for Bullet Hole Detection and Score Calculation from Shooting Cards
by Marya Butt, Nick Glas, Jaimy Monsuur, Ruben Stoop and Ander de Keijzer
AI 2024, 5(1), 72-90; https://doi.org/10.3390/ai5010005 - 22 Dec 2023
Cited by 2 | Viewed by 5272
Abstract
Scoring targets in shooting sports is a crucial and time-consuming task that relies on manually counting bullet holes. This paper introduces an automatic score detection model using object detection techniques. The study contributes to the field of computer vision by comparing the performance of seven models (belonging to two different architectural setups) and by making the dataset publicly available. Another value-added aspect is the inclusion of three variants of the YOLOv8 object detection model, newly released in 2023 at the time of writing. Five of the models are single-shot detectors, while two belong to the two-shot detector category. The dataset was manually captured at a shooting range and expanded by generating more versatile data using Python code. Before training, the images were resized (640 × 640) and augmented using the Roboflow API. The trained models were then assessed on the test dataset, and their performance was compared using metrics such as mAP50, mAP50-95, precision, and recall. The results showed that the YOLOv8 models can detect multiple objects with good confidence scores. Among these models, YOLOv8m performed best, with the highest mAP50 value of 96.7%, followed by YOLOv8s with an mAP50 of 96.5%. If the system is to be implemented in a real-time environment, YOLOv8s is the better choice, since it took significantly less inference time (2.3 ms) than YOLOv8m (5.7 ms) and still achieved a competitive mAP50 of 96.5%.
(This article belongs to the Topic Advances in Artificial Neural Networks)
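For context, single-image inference and per-image timing with Ultralytics YOLOv8 look roughly like this; the weights file and image name are placeholders, not the authors' released artifacts:

```python
from ultralytics import YOLO

# "yolov8s.pt" is the generic pretrained checkpoint, standing in for the
# paper's fine-tuned bullet-hole model, which is not published here.
model = YOLO("yolov8s.pt")
results = model("shooting_card.jpg")        # single-image inference
for r in results:
    print(r.boxes.xyxy, r.boxes.conf)       # detected boxes and confidences
    print(f"inference: {r.speed['inference']:.1f} ms")  # per-image latency
```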

17 pages, 5322 KiB  
Article
Quantitative and Qualitative Analysis of Agricultural Fields Based on Aerial Multispectral Images Using Neural Networks
by Krzysztof Strzępek, Mateusz Salach, Bartosz Trybus, Karol Siwiec, Bartosz Pawłowicz and Andrzej Paszkiewicz
Sensors 2023, 23(22), 9251; https://doi.org/10.3390/s23229251 - 17 Nov 2023
Viewed by 1155
Abstract
This article presents an integrated system that uses the capabilities of unmanned aerial vehicles (UAVs) to perform a comprehensive crop analysis, combining qualitative and quantitative evaluations for efficient agricultural management. A convolutional neural network-based model, Detectron2, serves as the foundation for detecting and segmenting objects of interest in the acquired aerial images. The model was trained on a dataset prepared in the COCO format, which features a variety of annotated objects. The system architecture comprises a frontend and a backend component: the frontend facilitates user interaction and annotation of objects on multispectral images, while the backend handles image loading, project management, polygon handling, and multispectral image processing. For qualitative analysis, users can delineate regions of interest using polygons, which are then analyzed using the Normalized Difference Vegetation Index (NDVI) or Optimized Soil Adjusted Vegetation Index (OSAVI). For quantitative analysis, the system deploys a pre-trained model capable of object detection, allowing for the counting and localization of specific objects, with a focus on young lettuce crops. The prediction quality of the model was evaluated using the Average Precision (AP) metric. The trained neural network exhibited robust performance in detecting objects, even within small images.
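A minimal sketch of training Detectron2 on a COCO-format dataset like the one described; all names, paths, and the single-class assumption are illustrative:

```python
# Register a hypothetical COCO-format dataset and train with the default loop.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

register_coco_instances("crops_train", {},
                        "annotations/train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("crops_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # e.g. "lettuce" (assumption)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```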

14 pages, 1760 KiB  
Article
Assessing Acetabular Index Angle in Infants: A Deep Learning-Based Novel Approach
by Farmanullah Jan, Atta Rahman, Roaa Busaleh, Haya Alwarthan, Samar Aljaser, Sukainah Al-Towailib, Safiyah Alshammari, Khadeejah Rasheed Alhindi, Asrar Almogbil, Dalal A. Bubshait and Mohammed Imran Basheer Ahmed
J. Imaging 2023, 9(11), 242; https://doi.org/10.3390/jimaging9110242 - 6 Nov 2023
Cited by 5 | Viewed by 2938
Abstract
Developmental dysplasia of the hip (DDH) is a disorder characterized by abnormal hip development that frequently manifests in infancy and early childhood. Managing DDH effectively relies on a timely and accurate diagnosis, which requires careful assessment of early X-ray scans by medical specialists. However, this process can be challenging for medical personnel without proper training. To address this challenge, we propose a computational framework to detect DDH in pelvic X-ray imaging of infants that utilizes a pipelined deep learning-based technique consisting of two stages: an instance segmentation model and a keypoint detection model that together measure the acetabular index angle and assess DDH in the presented case. The main aim of this process is to provide an objective and unified approach to DDH diagnosis. The model achieved an average pixel error of 2.862 ± 2.392 px and an angular error of 2.402 ± 1.963° for the acetabular angle measurement relative to the ground truth annotation. Ultimately, the deep-learning model will be integrated into a mobile application so that medical specialists can easily test and evaluate it. This will reduce the burden on medical specialists while providing an accurate and explainable DDH diagnosis for infants, thereby increasing their chances of successful treatment and recovery.
(This article belongs to the Special Issue Advances in Image Analysis: Shapes, Textures and Multifractals)
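For illustration, once the keypoints are detected, the acetabular index reduces to the angle between two lines meeting at a landmark; a geometry sketch, with the landmark convention assumed rather than taken from the paper:

```python
import math

def angle_between(vertex, p1, p2):
    """Angle (degrees) at `vertex` between rays vertex->p1 and vertex->p2.
    With Hilgenreiner's line and the acetabular roof line sharing a common
    landmark as vertex, this yields the acetabular index; the exact landmark
    convention here is an assumption, not the paper's specification."""
    ax, ay = p1[0] - vertex[0], p1[1] - vertex[1]
    bx, by = p2[0] - vertex[0], p2[1] - vertex[1]
    # atan2 of cross and dot products gives the signed angle between vectors.
    return abs(math.degrees(math.atan2(ax * by - ay * bx, ax * bx + ay * by)))
```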

16 pages, 3870 KiB  
Article
Segmentation and Phenotype Calculation of Rapeseed Pods Based on YOLO v8 and Mask R-Convolution Neural Networks
by Nan Wang, Hongbo Liu, Yicheng Li, Weijun Zhou and Mingquan Ding
Plants 2023, 12(18), 3328; https://doi.org/10.3390/plants12183328 - 20 Sep 2023
Cited by 12 | Viewed by 4193
Abstract
Rapeseed is a significant oil crop, and the size and length of its pods affect its productivity. However, manually counting rapeseed pods and measuring the length, width, and area of each pod takes time and effort, especially when there are hundreds of rapeseed resources to be assessed. This work created two state-of-the-art deep learning-based methods to identify rapeseed pods and related pod attributes, which were then applied to rapeseed pods to improve the accuracy of yield estimates. One of these methods is YOLO v8, and the other is the two-stage model Mask R-CNN built on the Detectron2 framework. The YOLO v8n model and the Mask R-CNN model with a ResNet101 backbone in Detectron2 both achieve precision rates exceeding 90%, and the recognition results demonstrate that both models perform well when segmenting images of rapeseed pods. In light of this, we developed a coin-based approach for estimating the size of rapeseed pods and tested it on a test dataset made up of nine varieties of Brassica napus and one of Brassica campestris L. The correlation coefficients between manual measurement and machine-vision measurement were 0.991 for length and 0.989 for width for both methods. In conclusion, for the first time, we utilized deep learning techniques to identify the characteristics of rapeseed pods while concurrently establishing a dataset for rapeseed pods. Our suggested approaches succeeded in segmenting and counting rapeseed pods precisely, and they offer breeders an effective strategy for digitally analyzing phenotypes and automating the identification and screening process, not only in rapeseed germplasm resources but also in leguminous plants, like soybeans, that possess pods.
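The coin-based size estimation can be sketched as a simple pixel-to-millimetre calibration; the coin diameter and the circular-area assumption are illustrative, not the paper's stated values:

```python
import numpy as np

COIN_DIAMETER_MM = 25.0  # assumed reference coin; substitute the coin actually used

def pixels_per_mm(coin_mask):
    """Image scale from a segmented reference coin (boolean pixel mask).
    A circle of diameter d has area pi*d^2/4, so d = 2*sqrt(area/pi)."""
    diameter_px = 2.0 * np.sqrt(float(coin_mask.sum()) / np.pi)
    return diameter_px / COIN_DIAMETER_MM

# Usage: pod_length_mm = pod_length_px / pixels_per_mm(coin_mask)
```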

10 pages, 12278 KiB  
Article
Application of Deep Learning Techniques for Detection of Pneumothorax in Chest Radiographs
by Lawrence Y. Deng, Xiang-Yann Lim, Tang-Yun Luo, Ming-Hsun Lee and Tzu-Ching Lin
Sensors 2023, 23(17), 7369; https://doi.org/10.3390/s23177369 - 24 Aug 2023
Cited by 1 | Viewed by 1438
Abstract
With the advent of Artificial Intelligence (AI), and more recently Machine Learning (ML), there has been rapid progress across the field. One prominent example is image recognition in medical imaging, such as X-ray, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI), which has the potential to alleviate a doctor’s heavy workload of sifting through large quantities of images. Due to the rising attention to lung-related diseases, such as pneumothorax and nodules, ML is being incorporated into the field in the hope of relieving the already strained medical resources. In this study, we proposed a system that can reliably detect pneumothorax. By comparing multiple models and hyperparameter configurations, we recommend a model for hospitals whose focus on minimizing false positives aligns with the precision required by medical professionals. Through our cooperation with Poh-Ai Hospital, we acquired over 8000 X-ray images, more than 1000 of them from pneumothorax patients. We hope that by integrating AI systems into the automated process of scanning chest X-ray images for various diseases, more resources will be available in the already strained medical systems. Within our proposed system, the best transfer-learning model trained on our dataset achieved an AP of 51.57 and an AP75 of 61.40, with an accuracy of 93.89%, a false-positive rate of 1.12%, and a false-negative rate of 4.99%. Based on feedback from practicing doctors, who are most wary of false positives, we recommend another model for their use case due to its lower false-positive rate (0.88%) and higher accuracy (95.68%) compared with the other models, demonstrating the feasibility of the research. This promising result suggests that the approach could be applied to other diseases and expanded to more hospitals and medical organizations, potentially benefitting more people.
(This article belongs to the Special Issue Electronic Materials and Sensors Innovation and Application)
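To relate the reported percentages to raw counts, here is one common convention for accuracy and false-positive/false-negative rates over a confusion matrix; the abstract does not state whether rates are normalized per class or by total, so treat this as an assumption:

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy plus false-positive and false-negative rates from raw counts.
    Rates are normalized per class here (one common convention; an assumption)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fp_rate = fp / (fp + tn)  # healthy images flagged as pneumothorax
    fn_rate = fn / (fn + tp)  # pneumothorax images that were missed
    return accuracy, fp_rate, fn_rate
```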

32 pages, 14470 KiB  
Article
Data-Driven Air Quality and Environmental Evaluation for Cattle Farms
by Jennifer Hu, Rushikesh Jagtap, Rishikumar Ravichandran, Chitra Priyaa Sathya Moorthy, Nataliya Sobol, Jane Wu and Jerry Gao
Atmosphere 2023, 14(5), 771; https://doi.org/10.3390/atmos14050771 - 23 Apr 2023
Cited by 3 | Viewed by 2364
Abstract
The expansion of agricultural practices and the raising of animals are key contributors to air pollution. Cattle farms contain hazardous gases, so we developed a cattle farm air pollution analyzer to count the number of cattle and provide comprehensive statistics on air pollutant concentrations, graded by severity, over various time periods. The modeling was performed in two parts: the first stage focused on object detection using satellite images of farms to identify and count cattle; the second stage predicted the next-hour concentration of each of the seven cattle-farm air pollutants considered. The output from the second stage was then visualized by severity, and analytics were performed on the historical data. The visualization illustrates the relationship between cattle count and air pollutants, an important factor for analyzing pollutant concentration trends. We evaluated Detectron2, YOLOv4, RetinaNet, and YOLOv5 for the first stage, and LSTM (single/multi-lag), CNN-LSTM, and Bi-LSTM for the second stage. YOLOv5 performed best in stage one with an average precision of 0.916 and a recall of 0.912, with the average precision and recall for all models being above 0.87. For stage two, CNN-LSTM performed well with an MAE of 3.511 and a MAPE of 0.016, while a stacked model had an MAE of 5.010 and a MAPE of 0.023.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
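A minimal Keras sketch of the next-hour prediction stage; the lag window, layer width, and multi-output head are assumptions based only on the abstract:

```python
import tensorflow as tf

n_lags, n_pollutants = 24, 7  # assumed: 24 hourly lags, 7 pollutants
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_lags, n_pollutants)),
    tf.keras.layers.LSTM(64),             # sequence encoder over the lag window
    tf.keras.layers.Dense(n_pollutants),  # next-hour value per pollutant
])
model.compile(optimizer="adam", loss="mae")  # MAE matches the reported metric
```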

30 pages, 11564 KiB  
Article
Lettuce Production in Intelligent Greenhouses—3D Imaging and Computer Vision for Plant Spacing Decisions
by Anna Selini Petropoulou, Bart van Marrewijk, Feije de Zwart, Anne Elings, Monique Bijlaard, Tim van Daalen, Guido Jansen and Silke Hemming
Sensors 2023, 23(6), 2929; https://doi.org/10.3390/s23062929 - 8 Mar 2023
Cited by 13 | Viewed by 5371
Abstract
Recent studies indicate that food demand will increase by 35–56% over the period 2010–2050 due to population increase, economic development, and urbanization. Greenhouse systems allow for the sustainable intensification of food production with demonstrated high crop production per cultivation area. Breakthroughs in resource-efficient fresh food production merging horticultural and AI expertise take place within the international competition “Autonomous Greenhouse Challenge”. This paper describes and analyzes the results of the third edition of this competition, whose goal is the realization of the highest net profit in fully autonomous lettuce production. Two cultivation cycles were conducted in six high-tech greenhouse compartments, with operational greenhouse decision-making realized at a distance and individually by the algorithms of the international participating teams. Algorithms were developed based on time-series sensor data of the greenhouse climate and crop images. High crop yield and quality, short growing cycles, and low use of resources such as energy for heating, electricity for artificial light, and CO2 were decisive in realizing the competition’s goal. The results highlight the importance of plant spacing and moment-of-harvest decisions in promoting high crop growth rates while optimizing greenhouse occupation and resource use. Images taken with depth cameras (RealSense) in each greenhouse were used by computer vision algorithms (DeepLabv3+ implemented in detectron2 v0.6) to decide optimum plant spacing and the moment of harvest. The resulting plant height and coverage could be accurately estimated, with an R2 of 0.976 and an mIoU of 98.2%, respectively. These two traits were used to develop light-loss and harvest indicators to support remote decision-making. The light-loss indicator can be used as a decision tool for timely spacing, while several traits were combined for the harvest indicator, ultimately resulting in a fresh-weight estimation with a mean absolute error of 22 g. The proposed non-invasively estimated indicators are promising traits to be used towards full automation of a dynamic commercial lettuce-growing environment. Computer vision algorithms act as a catalyst in the remote and non-invasive sensing of crop parameters, decisive for automated, objective, standardized, and data-driven decision making. However, spectral indices describing lettuce growth, and larger datasets than those currently accessible, are crucial to address the shortcomings between academic and industrial production systems encountered in this work.
(This article belongs to the Section Smart Agriculture)
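A rough sketch of how coverage and plant height can be read from a segmentation mask and an aligned top-view depth image; the bed-distance calibration and robust-percentile choice are assumptions, not the paper's method:

```python
import numpy as np

def coverage_and_height(plant_mask, depth_mm, camera_to_bed_mm):
    """Crop coverage fraction and plant height from a boolean mask and an
    aligned depth image in millimetres (RealSense-style, top view)."""
    coverage = plant_mask.mean()        # fraction of pixels that are crop
    plant_depth = depth_mm[plant_mask]  # camera distances to crop pixels
    # Highest point = smallest camera distance; 5th percentile resists noise.
    height_mm = camera_to_bed_mm - np.percentile(plant_depth, 5)
    return coverage, height_mm
```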

13 pages, 26313 KiB  
Article
Detectron2 for Lesion Detection in Diabetic Retinopathy
by Farheen Chincholi and Harald Koestler
Algorithms 2023, 16(3), 147; https://doi.org/10.3390/a16030147 - 7 Mar 2023
Cited by 4 | Viewed by 2952
Abstract
Hemorrhages in the retinal fundus are a common symptom of both diabetic retinopathy and diabetic macular edema, making their detection crucial for early diagnosis and treatment. For this task, we evaluate the performance of two pre-trained and additionally fine-tuned models from the Detectron2 model zoo, Faster R-CNN (R50-FPN) and Mask R-CNN (R50-FPN). Experiments show that the Mask R-CNN (R50-FPN) model provides highly accurate segmentation masks for each detected hemorrhage, with an accuracy of 99.34%, while the Faster R-CNN (R50-FPN) model detects hemorrhages with an accuracy of 99.22%. The results of both models were compared on a publicly available image database with ground truth marked by experts. Overall, this study demonstrates that current models are valuable tools for the early diagnosis and treatment of diabetic retinopathy and diabetic macular edema.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
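For orientation, running a fine-tuned Mask R-CNN (R50-FPN) from the Detectron2 model zoo over a fundus image would look roughly like this; the weights file, image name, and threshold are placeholders:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "hemorrhage_model.pth"   # hypothetical fine-tuned weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # illustrative confidence cutoff

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("fundus.jpg"))  # BGR image in, instances out
masks = outputs["instances"].pred_masks       # one mask per detected hemorrhage
```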

12 pages, 2168 KiB  
Article
Artificial Intelligence of Object Detection in Skeletal Scintigraphy for Automatic Detection and Annotation of Bone Metastases
by Chiung-Wei Liao, Te-Chun Hsieh, Yung-Chi Lai, Yu-Ju Hsu, Zong-Kai Hsu, Pak-Ki Chan and Chia-Hung Kao
Diagnostics 2023, 13(4), 685; https://doi.org/10.3390/diagnostics13040685 - 12 Feb 2023
Cited by 3 | Viewed by 1870
Abstract
Background: When cancer has metastasized to bone, doctors must identify the sites of the metastases for treatment. In radiation therapy, damage to healthy areas and missed areas requiring treatment should both be avoided, so it is necessary to locate the precise bone metastasis area. The bone scan is a commonly applied diagnostic tool for this purpose, but its accuracy is limited by the nonspecific character of radiopharmaceutical accumulation. This study evaluated object detection techniques to improve the efficacy of bone metastasis detection on bone scans. Methods: We retrospectively examined the data of 920 patients, aged 23 to 95 years, who underwent bone scans between May 2009 and December 2019. After reviewing the image reports written by physicians, nursing staff members annotated the bone metastasis sites as ground truths for training. Each set of bone scans contained anterior and posterior images with resolutions of 1024 × 256 pixels, and the images were examined using an object detection algorithm. Results: The optimal Dice similarity coefficient (DSC) in our study was 0.6640, which is 0.04 lower than the physicians’ optimal DSC (0.7040). Conclusions: Object detection can help physicians notice bone metastases efficiently, decrease physician workload, and improve patient care.
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)
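The Dice similarity coefficient used to score the detections has a compact definition; a minimal sketch over boolean masks:

```python
import numpy as np

def dice(pred_mask, gt_mask):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    total = pred_mask.sum() + gt_mask.sum()
    return 2.0 * inter / total if total else 1.0  # empty masks count as perfect
```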
