
Search Results (208)

Search Parameters:
Keywords = AutoML

26 pages, 5826 KiB  
Article
An Efficient Task Implementation Modeling Framework with Multi-Stage Feature Selection and AutoML: A Case Study in Forest Fire Risk Prediction
by Ye Su, Longlong Zhao, Hongzhong Li, Xiaoli Li, Jinsong Chen and Yuankai Ge
Remote Sens. 2024, 16(17), 3190; https://doi.org/10.3390/rs16173190 - 29 Aug 2024
Abstract
As data science advances, automated machine learning (AutoML) gains attention for lowering barriers, saving time, and enhancing efficiency. However, with increasing data dimensionality, AutoML struggles with large-scale feature sets. Effective feature selection is crucial for efficient AutoML in multi-task applications. This study proposes an efficient modeling framework combining a multi-stage feature selection (MSFS) algorithm and AutoSklearn, a robust and efficient AutoML framework, to address high-dimensional data challenges. The MSFS algorithm includes three stages: mutual information gain (MIG), recursive feature elimination with cross-validation (RFECV), and a voting aggregation mechanism, ensuring comprehensive consideration of feature correlation, importance, and stability. Based on multi-source and time series remote sensing data, this study pioneers the application of AutoSklearn for forest fire risk prediction. Using this case study, we compare MSFS with five other feature selection (FS) algorithms, including three single FS algorithms and two hybrid FS algorithms. Results show that MSFS selects half of the original features (12/24), effectively handling collinearity (eliminating 11 out of 13 collinear feature groups) and increasing AutoSklearn’s success rate by 15%, outperforming two FS algorithms with the same number of features by 7% and 5%. Among the six FS algorithms and non-FS, MSFS demonstrates the highest prediction performance and stability with minimal variance (0.09%) across five evaluation metrics. MSFS efficiently filters redundant features, enhancing AutoSklearn’s operational efficiency and generalization ability in high-dimensional tasks. The MSFS–AutoSklearn framework significantly improves AutoML’s production efficiency and prediction accuracy, facilitating the efficient implementation of various real-world tasks and the wider application of AutoML. Full article
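The three MSFS stages described above (mutual information gain, RFECV, and a voting aggregation) can be sketched with scikit-learn primitives. This is an illustrative approximation, not the authors' implementation: the synthetic data, stage sizes, bootstrap count, and majority-vote rule are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, RFECV
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=300, n_features=24, n_informative=8,
                           random_state=0)

# Stage 1: mutual information gain -- keep the top half of features.
mig = mutual_info_classif(X, y, random_state=0)
stage1 = np.argsort(mig)[-12:]

# Stage 2: recursive feature elimination with cross-validation.
rfecv = RFECV(LogisticRegression(max_iter=1000), cv=3).fit(X[:, stage1], y)
stage2 = stage1[rfecv.support_]

# Stage 3: voting aggregation -- keep only features that also win a
# majority vote across bootstrap resamples (a simplified stability check).
votes = np.zeros(X.shape[1])
for _ in range(5):
    idx = rng.choice(len(y), len(y), replace=True)
    m = mutual_info_classif(X[idx], y[idx], random_state=0)
    votes[np.argsort(m)[-12:]] += 1
selected = [f for f in stage2 if votes[f] >= 3]
print(sorted(selected))
```

The three stages address correlation, importance, and stability in turn, mirroring the roles the abstract assigns them.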

24 pages, 8399 KiB  
Article
Research on Fatigue Crack Propagation Prediction for Marine Structures Based on Automated Machine Learning
by Ping Li, Yuefu Yang and Chaohe Chen
J. Mar. Sci. Eng. 2024, 12(9), 1492; https://doi.org/10.3390/jmse12091492 - 29 Aug 2024
Abstract
In the field of offshore engineering, the prediction of the crack propagation behavior of metals is crucial for assessing the residual strength of structures. In this study, fatigue experiments were conducted on large-scale T-pipe joints of Q235 steel, and the automated machine learning (AutoML) technique was used to predict crack propagation. T-pipe specimens without initial cracks were designed for the study, and fatigue experiments were conducted at a load ratio of 0.067. Data such as strain and crack size were monitored by strain gauges and Alternating Current Potential Drop (ACPD) to construct a dataset for AutoML. Using the AutoML technique, the crack propagation rate and size were predicted, and the root mean square error (RMSE) was calculated. The prediction accuracies of the AutoML ensemble learning approach and of single machine learning models were evaluated. It was found that when the strain decreases by more than 3% compared to the initial value, crack initiation may occur in the vicinity of the monitoring point, at which point targeted measurements are required. In addition, the AutoML model utilizes ensemble learning techniques to show higher accuracy than a single machine learning model in the identification of crack initiation points and the prediction of crack propagation behavior. In the crack size prediction in this paper, the ensemble learning approach achieves an accuracy improvement of 5.65% over the traditional machine learning model. This result significantly enhances the reliability of crack prediction and provides a new technical approach for the next step of fatigue crack monitoring of large-scale T-pipe joint structures in corrosive environments. Full article
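The study's crack-initiation criterion (a strain drop of more than 3% from the initial value near a monitoring point) reduces to a simple threshold check over a strain time series. A minimal sketch; the readings and units below are invented for illustration:

```python
def crack_initiation_flag(strain_series, threshold=0.03):
    """Return the index of the first reading whose strain has dropped
    more than `threshold` (3%) below the initial value -- the criterion
    the study associates with possible crack initiation near the gauge."""
    initial = strain_series[0]
    for i, s in enumerate(strain_series):
        if (initial - s) / initial > threshold:
            return i
    return None

readings = [1000, 998, 995, 990, 965, 940]  # microstrain, illustrative
print(crack_initiation_flag(readings))  # first reading past the 3% drop
```

When the flag fires, the study recommends targeted measurements around that monitoring point.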

18 pages, 1089 KiB  
Article
Forecasting Lattice and Point Spatial Data: Comparison of Unilateral and Multilateral SAR Models
by Carlo Grillenzoni
Forecasting 2024, 6(3), 700-717; https://doi.org/10.3390/forecast6030036 - 23 Aug 2024
Abstract
Spatial auto-regressive (SAR) models are widely used in geosciences for data analysis; their main feature is the presence of weight (W) matrices, which define the neighboring relationships between the spatial units. The statistical properties of parameter and forecast estimates strongly depend on the structure of such matrices. The least squares (LS) method is the most flexible and can estimate systems of large dimensions; however, it is biased in the presence of multilateral (sparse) matrices. Instead, the unilateral specification of SAR models provides triangular weight matrices that allow consistent LS estimates and sequential prediction functions. These two properties are strictly related and depend on the linear and recursive nature of the system. In this paper, we show the better performance in out-of-sample forecasting of unilateral SAR (estimated with LS), compared to multilateral SAR (estimated with maximum likelihood, ML). This conclusion is supported by numerical simulations and applications to real geological data, both on regular lattices and irregularly distributed points. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2024)

28 pages, 4455 KiB  
Article
Leveraging ChatGPT and Long Short-Term Memory in Recommender Algorithm for Self-Management of Cardiovascular Risk Factors
by Tatiana V. Afanasieva, Pavel V. Platov, Andrey V. Komolov and Andrey V. Kuzlyakin
Mathematics 2024, 12(16), 2582; https://doi.org/10.3390/math12162582 - 21 Aug 2024
Abstract
One of the new trends in the development of recommendation algorithms is the dissemination of their capabilities to support the population in managing their health, in particular cardiovascular health. Cardiovascular diseases (CVDs) affect people in their prime years and remain the main cause of morbidity and mortality worldwide, and their clinical treatment is expensive and time consuming. At the same time, about 80% of them can be prevented, according to the World Federation of Cardiology. The aim of this study is to develop and investigate a knowledge-based recommender algorithm for the self-management of CVD risk factors in adults at home. The proposed algorithm is based on the original user profile, which includes a predictive assessment of the presence of CVD. To obtain a predictive score for CVD presence, AutoML and LSTM models were studied on the Kaggle dataset, and it was shown that the LSTM model, with an accuracy of 0.88, outperformed the AutoML model. The recommendations generated by the algorithm contain items of three types: targeted, informational, and explanatory. For the first time, large language models, namely ChatGPT-3.5, ChatGPT-4, and ChatGPT-4o, were leveraged and studied in creating explanations of the recommendations. The experiments show the following: (1) In explaining recommendations, ChatGPT-3.5, ChatGPT-4, and ChatGPT-4o demonstrate a high accuracy of 71% to 91% and coherence with modern official guidelines of 84% to 92%. (2) The safety properties of ChatGPT-generated explanations estimated by doctors received the highest score of almost 100%. (3) On average, the stability and correctness of the GPT-4o responses were more acceptable than those of other models for creating explanations. (4) The degree of user satisfaction with the recommendations obtained using the proposed algorithm was 88%, and the rating of the usefulness of the recommendations was 92%. Full article
(This article belongs to the Special Issue Advances in Recommender Systems and Intelligent Agents)

16 pages, 1331 KiB  
Article
Machine Learning in Medical Triage: A Predictive Model for Emergency Department Disposition
by Georgios Feretzakis, Aikaterini Sakagianni, Athanasios Anastasiou, Ioanna Kapogianni, Rozita Tsoni, Christina Koufopoulou, Dimitrios Karapiperis, Vasileios Kaldis, Dimitris Kalles and Vassilios S. Verykios
Appl. Sci. 2024, 14(15), 6623; https://doi.org/10.3390/app14156623 - 29 Jul 2024
Abstract
The study explores the application of automated machine learning (AutoML) using the MIMIC-IV-ED database to enhance decision-making in emergency department (ED) triage. We developed a predictive model that utilizes triage data to forecast hospital admissions, aiming to support medical staff by providing an advanced decision-support system. The model, powered by H2O.ai’s AutoML platform, was trained on approximately 280,000 preprocessed records from the Beth Israel Deaconess Medical Center collected between 2011 and 2019. The selected Gradient Boosting Machine (GBM) model demonstrated an AUC ROC of 0.8256, indicating its efficacy in predicting patient dispositions. Key variables such as acuity and waiting hours were identified as significant predictors, emphasizing the model’s capability to integrate critical triage metrics into its predictions. However, challenges related to the complexity and heterogeneity of medical data, privacy concerns, and the need for model interpretability were addressed through the incorporation of Explainable AI (XAI) techniques. These techniques ensure the transparency of the predictive processes, fostering trust and facilitating ethical AI use in clinical settings. Future work will focus on external validation and expanding the model to include a broader array of variables from diverse healthcare environments, enhancing the model’s utility and applicability in global emergency care contexts. Full article
(This article belongs to the Section Biomedical Engineering)

30 pages, 641 KiB  
Article
Strategies of Automated Machine Learning for Energy Sustainability in Green Artificial Intelligence
by Dagoberto Castellanos-Nieves and Luis García-Forte
Appl. Sci. 2024, 14(14), 6196; https://doi.org/10.3390/app14146196 - 16 Jul 2024
Abstract
Automated machine learning (AutoML) is recognized for its efficiency in facilitating model development due to its ability to perform tasks autonomously, without constant human intervention. AutoML automates the development and optimization of machine learning models, leading to high energy consumption due to the large volume of computation involved. Hyperparameter optimization algorithms, central to AutoML, can significantly impact its carbon footprint. This work introduces and investigates energy efficiency metrics for advanced hyperparameter optimization algorithms within AutoML. These metrics enable the evaluation and optimization of an algorithm’s energy consumption, considering accuracy, sustainability, and reduced environmental impact. The experimentation demonstrates the application of Green AI principles to AutoML hyperparameter optimization algorithms. It assesses the current sustainability of AutoML practices and proposes strategies to make them more environmentally friendly. The findings indicate a reduction of 28.7% in CO2e emissions when implementing the Green AI strategy, compared to the Red AI strategy. This improvement in sustainability is achieved with a minimal decrease of 0.51% in validation accuracy. This study emphasizes the importance of continuing to investigate sustainability throughout the life cycle of AI, aligning with the three fundamental pillars of sustainable development. Full article
(This article belongs to the Section Ecology Science and Engineering)
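The trade-off the abstract reports (28.7% less CO2e for 0.51% less accuracy) can be expressed with a simple energy-efficiency metric such as accuracy per kg CO2e. The baseline numbers below are hypothetical; only the two percentage changes come from the abstract:

```python
# Hypothetical Red AI baseline run; only the 28.7% CO2e reduction and
# the 0.51% accuracy decrease are taken from the abstract.
red = {"co2e_kg": 10.0, "accuracy": 0.9500}
green = {"co2e_kg": red["co2e_kg"] * (1 - 0.287),
         "accuracy": red["accuracy"] - 0.0051}

def efficiency(run):
    """A simple Green-AI style metric: accuracy delivered per kg CO2e."""
    return run["accuracy"] / run["co2e_kg"]

print(round(efficiency(red), 4), round(efficiency(green), 4))
```

Under this metric the Green strategy dominates: the small accuracy loss is far outweighed by the emissions saved.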

22 pages, 1858 KiB  
Article
Rule-Based DSL for Continuous Features and ML Models Selection in Multiple Sclerosis Research
by Wanqi Zhao, Karsten Wendt, Tjalf Ziemssen and Uwe Aßmann
Appl. Sci. 2024, 14(14), 6193; https://doi.org/10.3390/app14146193 - 16 Jul 2024
Abstract
Machine learning (ML) has emerged as a powerful tool in multiple sclerosis (MS) research, enabling more accurate diagnosis, prognosis prediction, and treatment optimization. However, the complexity of developing and deploying ML models poses challenges for domain experts without extensive programming knowledge. We propose a novel domain-specific language (DSL) that simplifies the process of selecting features, choosing appropriate ML models, and defining training rules for MS research. The DSL offers three approaches: AutoML for automated model and feature selection, manual selection for expert-guided customization, and a customizable mode allowing for fine-grained control. The DSL was implemented and evaluated using real-world MS data. By establishing task-specific DSLs, we have successfully identified workflows that enhance the filtering of ML models and features. This method is crucial in determining the T2-related MRI features that accurately predict both process speed time and walk speed. We assess the effectiveness of using our DSL to enhance ML models and identify feature importance within our private data, aiming to reveal the relationships between features. The proposed DSL empowers domain experts to leverage ML in MS research without extensive programming knowledge. By integrating MLOps practices, it streamlines the ML lifecycle, promoting trustworthy AI through explainability, interpretability, and collaboration. This work demonstrates the potential of DSLs in democratizing ML in MS and paves the way for future research in adaptive and evolving DSL architectures. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

19 pages, 1554 KiB  
Article
An Automated Machine Learning Framework for Adaptive and Optimized Hyperspectral-Based Land Cover and Land-Use Segmentation
by Ava Vali, Sara Comai and Matteo Matteucci
Remote Sens. 2024, 16(14), 2561; https://doi.org/10.3390/rs16142561 - 12 Jul 2024
Abstract
Hyperspectral imaging holds significant promise in remote sensing applications, particularly for land cover and land-use classification, thanks to its ability to capture rich spectral information. However, leveraging hyperspectral data for accurate segmentation poses critical challenges, including the curse of dimensionality and the scarcity of ground truth data, that hinder the accuracy and efficiency of machine learning approaches. This paper presents a holistic approach for adaptive optimized hyperspectral-based land cover and land-use segmentation using automated machine learning (AutoML). We address the challenges of high-dimensional hyperspectral data through a revamped machine learning pipeline, thus emphasizing feature engineering tailored to hyperspectral classification tasks. We propose a framework that dissects feature engineering into distinct steps, thus allowing for comprehensive model generation and optimization. This framework incorporates AutoML techniques to streamline model selection, hyperparameter tuning, and data versioning, thus ensuring robust and reliable segmentation results. Our empirical investigation demonstrates the efficacy of our approach in automating feature engineering and optimizing model performance, even without extensive ground truth data. By integrating automatic optimization strategies into the segmentation workflow, our approach offers a systematic, efficient, and scalable solution for hyperspectral-based land cover and land-use classification. Full article
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)

14 pages, 1801 KiB  
Article
Auto-Machine-Learning Models for Standardized Precipitation Index Prediction in North–Central Mexico
by Rafael Magallanes-Quintanar, Carlos E. Galván-Tejada, Jorge Isaac Galván-Tejada, Hamurabi Gamboa-Rosales, Santiago de Jesús Méndez-Gallegos and Antonio García-Domínguez
Climate 2024, 12(7), 102; https://doi.org/10.3390/cli12070102 - 12 Jul 2024
Cited by 1
Abstract
Certain impacts of climate change could potentially be linked to alterations in rainfall patterns, including shifts in rainfall intensity or drought occurrences. Hence, predicting droughts can provide valuable assistance in mitigating the detrimental consequences associated with water scarcity, particularly in agricultural areas or densely populated urban regions. Employing predictive models to calculate drought indices can be a useful method for the effective characterization of drought conditions. This study applied an Auto-Machine-Learning approach to deploy Artificial Neural Network models, aiming to predict the Standardized Precipitation Index in four regions of Zacatecas, Mexico. Climatological time-series data spanning from 1979 to 2020 were utilized as predictive variables. The best models were found using performance metrics that yielded a Mean Squared Error, Mean Absolute Error, and Coefficient of Determination ranging from 0.0296 to 0.0388, 0.1214 to 0.1355, and 0.9342 to 0.9584, respectively, for the regions under study. As a result, the Auto-Machine-Learning approach successfully developed and tested Artificial Neural Network models that exhibited notable predictive capabilities when estimating the monthly Standardized Precipitation Index within the study region. Full article
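The three performance metrics the abstract reports (MSE, MAE, and the coefficient of determination) can be computed directly from observed and predicted index values. A small self-contained helper; the SPI values below are invented for illustration:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Mean squared error, mean absolute error, and R^2 -- the three
    metrics the study reports for its SPI prediction models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return mse, mae, 1.0 - ss_res / ss_tot

spi_obs = [-1.2, -0.4, 0.3, 1.1, 0.8, -0.9]   # invented observations
spi_hat = [-1.0, -0.5, 0.2, 1.0, 0.9, -0.7]   # invented predictions
mse, mae, r2 = regression_metrics(spi_obs, spi_hat)
print(mse, mae, r2)
```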

21 pages, 4759 KiB  
Article
Transfer Learning Video Classification of Preserved, Mid-Range, and Reduced Left Ventricular Ejection Fraction in Echocardiography
by Pierre Decoodt, Daniel Sierra-Sosa, Laura Anghel, Giovanni Cuminetti, Eva De Keyzer and Marielle Morissens
Diagnostics 2024, 14(13), 1439; https://doi.org/10.3390/diagnostics14131439 - 5 Jul 2024
Abstract
Identifying patients with left ventricular ejection fraction (EF), either reduced [EF < 40% (rEF)], mid-range [EF 40–50% (mEF)], or preserved [EF > 50% (pEF)], is considered of primary clinical importance. An end-to-end video classification using AutoML in Google Vertex AI was applied to echocardiographic recordings. Datasets balanced by majority undersampling, each corresponding to one out of three possible classifications, were obtained from the Stanford EchoNet-Dynamic repository. A train–test split of 75/25 was applied. A binary video classification of rEF vs. not rEF demonstrated good performance (test dataset: ROC AUC score 0.939, accuracy 0.863, sensitivity 0.894, specificity 0.831, positive predictive value 0.842). A second binary classification of not pEF vs. pEF performed slightly less well (test dataset: ROC AUC score 0.917, accuracy 0.829, sensitivity 0.761, specificity 0.891, positive predictive value 0.888). A ternary classification was also explored, and lower performance was observed, mainly for the mEF class. A non-AutoML PyTorch implementation in open access confirmed the feasibility of our approach. With this proof of concept, end-to-end video classification based on transfer learning to categorize EF merits consideration for further evaluation in prospective clinical studies. Full article
(This article belongs to the Special Issue New Progress in Diagnosis and Management of Cardiovascular Diseases)
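The four threshold-dependent metrics the abstract reports alongside ROC AUC all follow from a 2x2 confusion matrix. A small helper; the counts below are invented for illustration, not the paper's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and positive predictive value
    from confusion-matrix counts, as reported for the rEF classifiers."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),      # recall on the positive class
        "specificity": tn / (tn + fp),      # recall on the negative class
        "ppv": tp / (tp + fp),              # positive predictive value
    }

# Invented confusion counts for illustration.
m = binary_metrics(tp=160, fp=30, tn=150, fn=20)
print(m)
```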

27 pages, 2005 KiB  
Article
Vertebral Column Pathology Diagnosis Using Ensemble Strategies Based on Supervised Machine Learning Techniques
by Alam Gabriel Rojas-López, Alejandro Rodríguez-Molina, Abril Valeria Uriarte-Arcia and Miguel Gabriel Villarreal-Cervantes
Healthcare 2024, 12(13), 1324; https://doi.org/10.3390/healthcare12131324 - 2 Jul 2024
Abstract
One expanding area of bioinformatics is medical diagnosis through the categorization of biomedical characteristics. Automated medical strategies to boost diagnosis through machine learning (ML) methods are challenging. They require a formal examination of their performance to identify the best conditions that enhance the ML method. This work proposes variants of the Voting and Stacking (VC and SC) ensemble strategies based on diverse auto-tuning supervised machine learning techniques to increase the efficacy of traditional baseline classifiers for the automatic diagnosis of vertebral column orthopedic illnesses. The ensemble strategies are created by first combining a complete set of auto-tuned baseline classifiers based on different processes, such as geometric, probabilistic, logic, and optimization. Next, the three most promising classifiers are selected among k-Nearest Neighbors (kNN), Naïve Bayes (NB), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Machine (SVM), Artificial Neural Networks (ANN), and Decision Tree (DT). The grid-search K-Fold cross-validation strategy is applied to auto-tune the baseline classifier hyperparameters. The performances of the proposed ensemble strategies are independently compared with the auto-tuned baseline classifiers. A concise analysis evaluates accuracy, precision, recall, F1-score, and ROC-AUC metrics. The analysis also examines the misclassified disease elements to find the most and least reliable classifiers for this specific medical problem. The results show that the VC ensemble strategy provides an improvement comparable to that of the best baseline classifier (the kNN). Meanwhile, when all baseline classifiers are included in the SC ensemble, this strategy surpasses 95% in all the evaluated metrics, standing out as the most suitable option for classifying vertebral column diseases. Full article
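The overall recipe (auto-tune each baseline with grid-search K-fold cross-validation, then combine them by voting or stacking) can be sketched with scikit-learn. A minimal illustration with three of the named baselines on synthetic data; the grids, folds, and data are assumptions, not the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Auto-tune each baseline with grid-search K-fold CV, as in the study.
knn = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}, cv=5)
dt = GridSearchCV(DecisionTreeClassifier(random_state=0),
                  {"max_depth": [3, 5]}, cv=5)
lr = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1.0]}, cv=5)
base = [("knn", knn), ("dt", dt), ("lr", lr)]

# VC combines hard votes; SC learns a meta-classifier over the bases.
vc = VotingClassifier(base, voting="hard").fit(X_tr, y_tr)
sc = StackingClassifier(base, final_estimator=LogisticRegression()).fit(X_tr, y_tr)
print(round(vc.score(X_te, y_te), 3), round(sc.score(X_te, y_te), 3))
```

Stacking can outperform voting because the meta-classifier learns how much to trust each base model, which matches the paper's finding that the full SC ensemble was strongest.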

21 pages, 8221 KiB  
Article
Improving Short-Term Prediction of Ocean Fog Using Numerical Weather Forecasts and Geostationary Satellite-Derived Ocean Fog Data Based on AutoML
by Seongmun Sim, Jungho Im, Sihun Jung and Daehyeon Han
Remote Sens. 2024, 16(13), 2348; https://doi.org/10.3390/rs16132348 - 27 Jun 2024
Abstract
Ocean fog, a meteorological phenomenon characterized by reduced visibility due to tiny water droplets or ice particles, poses significant safety risks for maritime activities and coastal regions. Accurate prediction of ocean fog is crucial but challenging due to its complex formation mechanisms and variability. This study proposes an advanced ocean fog prediction model for the Yellow Sea region, leveraging satellite-based detection and high-performance data-driven methods. We used Himawari-8 satellite data to obtain a large volume of spatiotemporal ocean fog reference data and employed AutoML to integrate numerical weather prediction (NWP) outputs and sea surface temperature (SST)-related variables. The model demonstrated superior performance compared to traditional NWP-based methods, achieving high performance in both quantitative evaluations (probability of detection of 81.6%, false alarm ratio of 24.4%, F1 score of 75%, and proportion correct of 79.8%) and qualitative evaluations for 1 to 6 h lead times. Key contributing variables included relative humidity, accumulated shortwave radiation, and atmospheric pressure, indicating the importance of integrating diverse data sources. The study emphasizes the potential of using satellite-derived data to improve ocean fog prediction, while also addressing the challenges of overfitting and the need for more comprehensive reference data. Full article
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing)
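The four verification scores the abstract reports are standard contingency-table statistics from forecast verification. A small helper; the event counts below are invented for illustration:

```python
def fog_verification(hits, false_alarms, misses, correct_negatives):
    """Contingency-table scores of the kind reported in the study:
    probability of detection (POD), false alarm ratio (FAR), F1 score,
    and proportion correct (PC)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    precision = 1.0 - far
    f1 = 2 * precision * pod / (precision + pod)
    total = hits + false_alarms + misses + correct_negatives
    pc = (hits + correct_negatives) / total
    return pod, far, f1, pc

# Invented counts for illustration.
pod, far, f1, pc = fog_verification(hits=80, false_alarms=26,
                                    misses=18, correct_negatives=120)
print(pod, far, f1, pc)
```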

21 pages, 2475 KiB  
Article
Application of a Machine Learning-Based Classification Approach for Developing Host Protein Diagnostic Models for Infectious Disease
by Thomas F. Scherr, Christina E. Douglas, Kurt E. Schaecher, Randal J. Schoepp, Keersten M. Ricks and Charles J. Shoemaker
Diagnostics 2024, 14(12), 1290; https://doi.org/10.3390/diagnostics14121290 - 18 Jun 2024
Abstract
In recent years, infectious disease diagnosis has increasingly turned to host-centered approaches as a complement to pathogen-directed ones. The former, however, typically requires the interpretation of complex multiple biomarker datasets to arrive at an informative diagnostic outcome. This report describes a machine learning (ML)-based classification workflow that is intended as a template for researchers seeking to apply ML approaches for developing host-based infectious disease biomarker classifiers. As an example, we built a classification model that could accurately distinguish between three disease etiology classes: bacterial, viral, and normal in human sera using host protein biomarkers of known diagnostic utility. After collecting protein data from known disease samples, we trained a series of increasingly complex Auto-ML models until arriving at an optimized classifier that could differentiate viral, bacterial, and non-disease samples. Even when limited to a relatively small training set size, the model had robust diagnostic characteristics and performed well when faced with a blinded sample set. We present here a flexible approach for applying an Auto-ML-based workflow for the identification of host biomarker classifiers with diagnostic utility for infectious disease, and which can readily be adapted for multiple biomarker classes and disease states. Full article

18 pages, 4595 KiB  
Article
Design and Implementation of an Intensive Care Unit Command Center for Medical Data Fusion
by Wen-Sheng Feng, Wei-Cheng Chen, Jiun-Yi Lin, How-Yang Tseng, Chieh-Lung Chen, Ching-Yao Chou, Der-Yang Cho and Yi-Bing Lin
Sensors 2024, 24(12), 3929; https://doi.org/10.3390/s24123929 - 17 Jun 2024
Abstract
The rapid advancements in Artificial Intelligence of Things (AIoT) are pivotal for the healthcare sector, especially as the world approaches becoming an aged society by 2050. This paper presents an innovative AIoT-enabled data fusion system implemented at the CMUH Respiratory Intensive Care Unit (RICU) to address the high incidence of medical errors in ICUs, which are among the top three causes of mortality in healthcare facilities. ICU patients are particularly vulnerable to medical errors due to the complexity of their conditions and the critical nature of their care. We introduce a four-layer AIoT architecture designed to manage and deliver both real-time and non-real-time medical data within the CMUH-RICU. Our system demonstrates the capability to handle 22 TB of medical data annually with an average delay of 1.72 ms and a bandwidth of 65.66 Mbps. Additionally, we ensure the uninterrupted operation of the CMUH-RICU with a three-node Kafka streaming cluster, provided a failed node is repaired within 9 h, assuming a one-year node lifespan. A case study is presented where an AI application for acute respiratory distress syndrome (ARDS), leveraging our AIoT data fusion approach, significantly improved the medical diagnosis rate from 52.2% to 93.3% and reduced mortality from 56.5% to 39.5%. The results underscore the potential of AIoT in enhancing patient outcomes and operational efficiency in the ICU setting. Full article
(This article belongs to the Section Internet of Things)
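The abstract's claim of uninterrupted operation, given a one-year mean node lifespan and a 9 h repair window, can be sanity-checked with a back-of-envelope exponential-failure sketch. This is our own illustrative model, not the paper's analysis: it assumes full replication across the three Kafka brokers, so the cluster only goes down if a second node fails while the first is under repair.

```python
import math

# Parameters taken from the abstract (illustrative assumptions):
HOURS_PER_YEAR = 365 * 24            # 8760
mean_lifetime_h = HOURS_PER_YEAR     # assumed mean time between failures per node
repair_h = 9                         # worst-case repair window

failure_rate = 1.0 / mean_lifetime_h  # per-node failures per hour

# With replication across all 3 brokers, the cluster keeps serving as long
# as neither of the 2 surviving nodes fails during the 9 h repair window.
# Probability of at least one such second failure (exponential model):
p_second_failure = 1 - math.exp(-2 * failure_rate * repair_h)

# Each node fails roughly once a year, so ~3 repair windows per year,
# each of which risks a concurrent second failure:
outages_per_year = 3 * p_second_failure

print(f"P(second failure during repair) = {p_second_failure:.5f}")
print(f"Expected double-failure events/year = {outages_per_year:.4f}")
```

Under these assumptions a double failure is expected far less than once per century, which is consistent with the paper's uninterrupted-operation claim.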
11 pages, 1195 KiB  
Article
Gastric Emptying Scintigraphy Protocol Optimization Using Machine Learning for the Detection of Delayed Gastric Emptying
by Michalis F. Georgiou, Efrosyni Sfakianaki, Monica N. Diaz-Kanelidis and Baha Moshiree
Diagnostics 2024, 14(12), 1240; https://doi.org/10.3390/diagnostics14121240 - 13 Jun 2024
Viewed by 724
Abstract
Purpose: The purpose of this study is to examine the feasibility of a machine learning (ML) system for optimizing a gastric emptying scintigraphy (GES) protocol for the detection of delayed gastric emptying (GE), which is considered a primary indication for the diagnosis of gastroparesis. Methods: An ML model was developed using the JADBio AutoML artificial intelligence (AI) platform. This model employs the percent GE at various imaging time points following the ingestion of a standardized radiolabeled meal to predict normal versus delayed GE at the conclusion of the 4 h GES study. The model was trained and tested on a cohort of 1002 patients who underwent GES, using a 70/30 stratified split ratio for training vs. testing. The ML software automated the generation of optimal predictive models by combining data preprocessing, appropriate feature selection, and predictive modeling analysis algorithms. Results: The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was employed to evaluate the predictive modeling performance. Several models were developed using different combinations of imaging time points as input features and methodologies to achieve optimal output. Using GE values at time points 0.5 h, 1 h, 1.5 h, 2 h, and 2.5 h as input predictors of the 4 h outcome, the analysis produced an AUC of 90.7% and a balanced accuracy (BA) of 80.0% on the test set. This performance was comparable to the training set results (AUC = 91.5%, BA = 84.7%) within the 95% confidence interval (CI), demonstrating a robust predictive capability. Through feature selection, the 2.5 h GE value alone was found to be a statistically significant independent predictor of the 4 h outcome, with slightly higher test set performance (AUC = 92.4%, BA = 83.3%), emphasizing its dominance as the primary predictor for delayed GE.
ROC analysis was also performed for single time imaging points at 1 h and 2 h to assess their independent predictiveness of the 4 h outcome. Furthermore, the ML model was tested for its ability to predict “flipping” cases with normal GE at 1 h and 2 h that became abnormal with delayed GE at 4 h. Conclusions: An AI/ML model was designed and trained for predicting delayed GE using a limited number of imaging time points in a 4 h GES clinical protocol. This study demonstrates the feasibility of employing ML for GES optimization in the detection of delayed GE and potentially shortening the protocol’s time length without compromising diagnostic power. Full article
(This article belongs to the Special Issue Intelligent Imaging in Nuclear Medicine—2nd Edition)
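The study's modeling setup, a 70/30 stratified split over early-time-point %GE features and AUC/balanced-accuracy evaluation, can be sketched as follows. This is a hedged illustration only: JADBio AutoML is proprietary, so scikit-learn stands in for it here, and the patient data are synthetic with a hypothetical trajectory and prevalence.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

rng = np.random.default_rng(0)
n = 1002  # cohort size reported in the abstract

# Synthetic percent-GE features at 0.5, 1, 1.5, 2, 2.5 h; later time
# points separate the classes more strongly (assumed, for illustration).
delayed = rng.random(n) < 0.25                 # hypothetical prevalence
base = np.linspace(20, 70, 5)                  # hypothetical %GE trajectory
X = base + rng.normal(0, 10, (n, 5))
X[delayed] -= np.linspace(5, 25, 5)            # delayed emptiers lag behind
y = delayed.astype(int)                        # 1 = delayed GE at 4 h

# 70/30 stratified train/test split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

print(f"AUC = {roc_auc_score(y_te, proba):.3f}")
print(f"Balanced accuracy = {balanced_accuracy_score(y_te, proba > 0.5):.3f}")
```

On real data, per-feature analysis of this kind is what would reveal whether a single time point (here, 2.5 h) suffices to predict the 4 h outcome.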