Search Results (2,780)

Search Parameters:
Keywords = gaussian processes

12 pages, 2072 KiB  
Article
Evaluation of the Changes in the Strength of Clay Reinforced with Basalt Fiber Using Artificial Neural Network Model
by Yasemin Aslan Topçuoğlu, Zeynep Bala Duranay and Zülfü Gürocak
Appl. Sci. 2024, 14(22), 10362; https://doi.org/10.3390/app142210362 - 11 Nov 2024
Abstract
In this research, the impact of basalt fiber reinforcement on the unconfined compressive strength of clay soils was experimentally analyzed, and the collected data were utilized in an artificial neural network (ANN) to predict the unconfined compressive strength based on the basalt fiber reinforcement ratio and length. For this purpose, two different lengths of basalt fiber (6 mm and 12 mm) were added to unreinforced bentonite clay at ratios of 0%, 1%, 2%, 3%, 4%, and 5%, and unconfined compressive tests were performed on the prepared reinforced clay samples to determine the unconfined compressive strength (qu) values. The evaluation of the obtained experimental results was carried out by creating ANN models. To validate the prediction capabilities of the ANN, a comparative analysis was performed using linear regression, support vector machines, and Gaussian process regression models. Ultimately, a five-fold cross-validation technique was employed to objectively evaluate the overall performance of the model. The evaluations revealed that the ANN model predictions using data obtained from experimental studies showed the highest accuracy and were in close agreement with the experimental results.
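For readers new to the comparison described above, a minimal sketch of a five-fold cross-validated comparison of linear regression, SVR, and Gaussian process regression in scikit-learn. The data and the [fiber ratio, fiber length] feature layout are synthetic assumptions for illustration, not the paper's dataset:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in: [fiber ratio (%), fiber length (mm)] -> strength qu
X = rng.uniform([0.0, 6.0], [5.0, 12.0], size=(60, 2))
y = 100 + 8 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 3, 60)

models = {
    "linear": LinearRegression(),
    "svr": SVR(),
    "gpr": GaussianProcessRegressor(normalize_y=True),
}
for name, model in models.items():
    # Five-fold cross-validation, as used in the paper's evaluation
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```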

19 pages, 5049 KiB  
Article
Low-Carbon Dispatch Method for Active Distribution Network Based on Carbon Emission Flow Theory
by Jiang Bian, Yang Wang, Zhaoshuai Dang, Tianchun Xiang, Zhiyong Gan and Ting Yang
Energies 2024, 17(22), 5610; https://doi.org/10.3390/en17225610 - 9 Nov 2024
Viewed by 299
Abstract
In the context of integrating renewable energy sources such as wind and solar energy sources into distribution networks, this paper proposes a proactive low-carbon dispatch model for active distribution networks based on carbon flow calculation theory. This model aims to achieve accurate carbon measurement across all operational aspects of distribution networks, reduce their carbon emissions through controlling unit operations, and ensure stable and safe operation. First, we propose a method for measuring carbon emission intensity on the source and network sides of active distribution networks with network losses, allowing for the calculation of total carbon emissions throughout the operation of networks and their equipment. Next, based on the carbon flow distribution of distribution networks, we construct a low-carbon dispatch model and formulate its optimization problem within a Markov Decision Process framework. We improve the Soft Actor–Critic (SAC) algorithm by adopting a Gaussian-distribution-based reward function to train and deploy agents for optimal low-carbon dispatch. Finally, the effectiveness of the proposed model and the superiority of the improved algorithm are demonstrated using a modified IEEE 33-bus distribution network test case.
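The Gaussian-distribution-based reward mentioned above can be illustrated with a toy shaping function that peaks when a measured quantity (e.g., carbon emission) hits its target and decays smoothly with deviation. The target and width values below are illustrative assumptions, not from the paper:

```python
import math

def gaussian_reward(value: float, target: float, width: float) -> float:
    """Reward in (0, 1], maximal when value == target, decaying smoothly."""
    return math.exp(-((value - target) ** 2) / (2 * width ** 2))

print(gaussian_reward(10.0, 10.0, 2.0))  # maximal reward of 1.0 at the target
print(gaussian_reward(12.0, 10.0, 2.0))  # smaller reward away from the target
```

Compared with a hard threshold, this shape gives the RL agent a smooth gradient toward the dispatch target.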
(This article belongs to the Section B: Energy and Environment)

20 pages, 10237 KiB  
Article
A Leaf Chlorophyll Content Estimation Method for Populus deltoides (Populus deltoides Marshall) Using Ensembled Feature Selection Framework and Unmanned Aerial Vehicle Hyperspectral Data
by Zhulin Chen, Xuefeng Wang, Shijiao Qiao, Hao Liu, Mengmeng Shi, Xingjing Chen, Haiying Jiang and Huimin Zou
Forests 2024, 15(11), 1971; https://doi.org/10.3390/f15111971 - 8 Nov 2024
Viewed by 252
Abstract
Leaf chlorophyll content (LCC) is a key indicator in representing the photosynthetic capacity of Populus deltoides (Populus deltoides Marshall). Unmanned aerial vehicle (UAV) hyperspectral imagery provides an effective approach for LCC estimation, but the issue of band redundancy significantly impacts model accuracy and computational efficiency. Commonly used single feature selection algorithms not only fail to balance computational efficiency with optimal set search but also struggle to combine different regression algorithms under dynamic set conditions. This study proposes an ensemble feature selection framework to enhance LCC estimation accuracy using UAV hyperspectral data. Firstly, the embedded algorithm was improved by introducing the SHapley Additive exPlanations (SHAP) algorithm into the ranking system. A dynamic ranking strategy was then employed to remove bands in steps of 10, with LCC models developed at each step to identify the initial band subset based on estimation accuracy. Finally, the wrapper algorithm was applied using the initial band subset to search for the optimal band subset and develop the corresponding model. Three regression algorithms including gradient boosting regression trees (GBRT), support vector regression (SVR), and Gaussian process regression (GPR) were combined with this framework for LCC estimation. The results indicated that the GBRT-Optimal model developed using 28 bands achieved the best performance with R2 of 0.848, RMSE of 1.454 μg/cm2 and MAE of 1.121 μg/cm2. Compared with the model that used all bands as inputs, this optimal model reduced the RMSE by 24.37%. In addition to estimating biophysical and biochemical parameters, this method is also applicable to other hyperspectral imaging tasks.
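The dynamic ranking strategy (rank bands, drop the 10 lowest-ranked, refit, and keep the best-scoring subset) can be sketched roughly as follows. Plain impurity importances stand in for the paper's SHAP-based ranking, and the data are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 50))                  # 50 synthetic "bands"
y = X[:, 0] * 2 + X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 120)

bands = list(range(X.shape[1]))
best_bands, best_score = bands[:], -np.inf
while len(bands) > 10:
    model = GradientBoostingRegressor(random_state=0).fit(X[:, bands], y)
    score = cross_val_score(model, X[:, bands], y, cv=3, scoring="r2").mean()
    if score > best_score:                       # remember the best subset so far
        best_bands, best_score = bands[:], score
    order = np.argsort(model.feature_importances_)  # lowest-ranked bands first
    bands = [bands[i] for i in sorted(order[10:])]  # drop the 10 weakest bands
print(len(best_bands), round(best_score, 2))
```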
(This article belongs to the Special Issue Panoptic Segmentation of Tree Scenes from Mobile LiDAR Data)

19 pages, 4171 KiB  
Article
FastSLAM-MO-PSO: A Robust Method for Simultaneous Localization and Mapping in Mobile Robots Navigating Unknown Environments
by Xu Bian, Wanqiu Zhao, Ling Tang, Hong Zhao and Xuesong Mei
Appl. Sci. 2024, 14(22), 10268; https://doi.org/10.3390/app142210268 - 8 Nov 2024
Viewed by 328
Abstract
In the realm of mobile robotics, the capability to navigate and map uncharted territories is paramount, and Simultaneous Localization and Mapping (SLAM) stands as a cornerstone technology enabling this capability. While traditional SLAM methods like Extended Kalman Filter (EKF) and FastSLAM have made strides, they often struggle with the complexities of non-linear dynamics and non-Gaussian noise, particularly in dynamic settings. Moreover, these methods can be computationally intensive, limiting their applicability in real-world scenarios. This paper introduces an innovative enhancement to the FastSLAM framework by integrating Multi-Objective Particle Swarm Optimization (MO-PSO), aiming to bolster the robustness and accuracy of SLAM in mobile robots. We outline the theoretical underpinnings of FastSLAM and underscore its significance in robotic autonomy for mapping and exploration. Our approach innovates by crafting a specialized fitness function within the MO-PSO paradigm, which is instrumental in optimizing the particle distribution and addressing the challenges inherent in traditional particle filtering methods. This strategic fusion of MO-PSO with FastSLAM not only circumvents the pitfalls of particle degeneration, but also enhances the overall robustness and precision of the SLAM process across a spectrum of operational environments. Our empirical evaluation involves testing the proposed method on three distinct simulation benchmarks, comparing its performance against four other algorithms. The results indicate that our MO-PSO-enhanced FastSLAM method outperforms the traditional particle filtering approach by significantly reducing particle degeneration and ensuring more reliable and precise SLAM performance in challenging environments. This research demonstrates that the integration of MO-PSO with FastSLAM is a promising direction for improving SLAM in mobile robots, providing a robust solution for accurate mapping and localization even in complex and unknown settings.
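Particle degeneration, the failure mode this method targets, is commonly monitored via the effective sample size of the normalized particle weights; a small self-contained illustration (not from the paper):

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; near N is healthy,
    near 1 means one particle carries almost all the weight (degeneration)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))  # 4.0: healthy
print(effective_sample_size([0.97, 0.01, 0.01, 0.01]))  # ~1.06: degenerate
```

A filter typically resamples (or, here, redistributes particles via MO-PSO) when N_eff falls below a threshold such as N/2.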

16 pages, 4330 KiB  
Article
Multiobjective Optimisation of Flotation Variables Using Controlled-NSGA-II and Paretosearch
by Bismark Amankwaa-Kyeremeh, Conor McCamley, Kathy Ehrig and Richmond K. Asamoah
Resources 2024, 13(11), 157; https://doi.org/10.3390/resources13110157 - 7 Nov 2024
Viewed by 295
Abstract
Finding the optimum operating points for the maximisation of flotation recovery and concentrate grade can be a very difficult task, owing to the inverse relationship that exists between these two key performance indicators. For this reason, techniques that can accurately find the trade-off are critical for flotation process optimisation. This work extracted well-assessed Gaussian process predictive functions as objective functions for a comparative multiobjective optimisation study using the paretosearch algorithm (PA) and the controlled elitist non-dominated sorting genetic algorithm (controlled-NSGA-II). The main aim was the concomitant maximisation of the copper recovery and the concentrate grade. Comparison of the two applied techniques revealed that the PA discovered the best set of Pareto-optimal solutions for both recovery (93.4%) and concentrate-grade (17.4 wt.%) maximisation.
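What both algorithms ultimately return is a Pareto front; a tiny dominance filter for the two maximization objectives makes the concept concrete (the point values below are illustrative):

```python
def pareto_front(points):
    """Keep (recovery, grade) points not dominated by any other point,
    where q dominates p if q is >= p in both objectives and q != p."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

points = [(93.4, 17.4), (90.0, 16.0), (95.0, 15.0)]
print(pareto_front(points))  # -> [(93.4, 17.4), (95.0, 15.0)]
```

The dominated point (90.0, 16.0) is dropped; the remaining two points express the recovery-versus-grade trade-off.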

23 pages, 5276 KiB  
Article
Generalized Gaussian Distribution Improved Permutation Entropy: A New Measure for Complex Time Series Analysis
by Kun Zheng, Hong-Seng Gan, Jun Kit Chaw, Sze-Hong Teh and Zhe Chen
Entropy 2024, 26(11), 960; https://doi.org/10.3390/e26110960 - 7 Nov 2024
Viewed by 376
Abstract
To enhance the performance of entropy algorithms in analyzing complex time series, generalized Gaussian distribution improved permutation entropy (GGDIPE) and its multiscale variant (MGGDIPE) are proposed in this paper. First, the generalized Gaussian distribution cumulative distribution function is employed for data normalization to enhance the algorithm’s applicability across time series with diverse distributions. The algorithm further processes the normalized data using improved permutation entropy, which maintains both the absolute magnitude and temporal correlations of the signals, overcoming the equal value issue found in traditional permutation entropy (PE). Simulation results indicate that GGDIPE is less sensitive to parameter variations, exhibits strong noise resistance, accurately reveals the dynamic behavior of chaotic systems, and operates significantly faster than PE. Real-world data analysis shows that MGGDIPE provides markedly better separability for RR interval signals, EEG signals, bearing fault signals, and underwater acoustic signals compared to multiscale PE (MPE) and multiscale dispersion entropy (MDE). Notably, in underwater target recognition tasks, MGGDIPE achieves a classification accuracy of 97.5% across four types of acoustic signals, substantially surpassing the performance of MDE (70.5%) and MPE (62.5%). Thus, the proposed method demonstrates exceptional capability in processing complex time series.
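The normalization step described above can be sketched with SciPy's generalized-normal family: fit the distribution to the series, then map values through its CDF so they land in (0, 1) before the permutation-entropy stage. The heavy-tailed example series is synthetic, and this is not the paper's GGDIPE implementation:

```python
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=1000)          # heavy-tailed example series
beta, loc, scale = gennorm.fit(x)            # fit shape, location, scale by MLE
u = gennorm.cdf(x, beta, loc=loc, scale=scale)
print(u.min(), u.max())                      # normalized into (0, 1)
```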
(This article belongs to the Special Issue Ordinal Pattern-Based Entropies: New Ideas and Challenges)

13 pages, 1302 KiB  
Article
Confidence Intervals for the Coefficient of Variation in Delta Inverse Gaussian Distributions
by Wasurat Khumpasee, Sa-Aat Niwitpong and Suparat Niwitpong
Symmetry 2024, 16(11), 1488; https://doi.org/10.3390/sym16111488 - 7 Nov 2024
Viewed by 352
Abstract
The inverse Gaussian distribution is characterized by its asymmetry and right-skewed shape, indicating a longer tail on the right side. This distribution represents extreme values in one direction, such as waiting times, stochastic processes, and accident counts. Moreover, when such count data may contain zero values, the Delta Inverse Gaussian (Delta-IG) distribution is more suitable. The confidence interval (CI) for the coefficient of variation (CV) of the Delta-IG distribution in accident counts is essential for risk assessment, resource allocation, and the creation of transportation safety policies. Our objective is to establish CIs of the CV for the Delta-IG population using various methods. We considered seven CI construction methods, namely the Generalized Confidence Interval (GCI), Adjusted Generalized Confidence Interval (AGCI), Parametric Bootstrap Percentile Confidence Interval (PBPCI), Fiducial Confidence Interval (FCI), Fiducial Highest Posterior Density Confidence Interval (F-HPDCI), Bayesian Credible Interval (BCI), and Bayesian Highest Posterior Density Credible Interval (B-HPDCI). We utilized Monte Carlo simulations to assess the proposed CI techniques in terms of average width (AW) and coverage probability (CP). Our findings revealed that F-HPDCI and AGCI exhibited the best coverage probability and average widths. We applied these methods to generate CIs of the CV for accident counts in India.
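Of the seven methods, the parametric bootstrap percentile CI (PBPCI) is the simplest to illustrate: fit the model, resample from the fitted model, and take percentiles of the resampled CV estimates. Normal data stand in for the paper's Delta-IG model in this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.normal(50, 10, size=200)        # observed data (synthetic; true CV = 0.2)
mu_hat, sd_hat = sample.mean(), sample.std(ddof=1)

boot_cv = []
for _ in range(2000):
    # Parametric bootstrap: resample from the fitted model, recompute CV
    resample = rng.normal(mu_hat, sd_hat, size=sample.size)
    boot_cv.append(resample.std(ddof=1) / resample.mean())
lo, hi = np.percentile(boot_cv, [2.5, 97.5])
print(f"95% CI for CV: ({lo:.3f}, {hi:.3f})")
```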

19 pages, 3514 KiB  
Article
Measurement Model of Full-Width Roughness Considering Longitudinal Profile Weighting
by Yingchao Luo, Huazhen An, Xiaobing Li, Jinjin Cao, Na Miao and Rui Wang
Appl. Sci. 2024, 14(22), 10213; https://doi.org/10.3390/app142210213 - 7 Nov 2024
Viewed by 374
Abstract
This study proposes and establishes a roadway longitudinal profile weighting model and innovatively develops a process and method for evaluating road surface roughness. Initially, the Gaussian model is employed to accurately fit the distribution frequency of vehicle centerlines recorded in British Standard BS 5400-10, and a generalized lateral distribution model of wheel trajectories is further derived. Corresponding model parameters are suggested for different types of lanes in this study. Subsequently, based on the proposed distribution model, a longitudinal profile weighting model for lanes is constructed. After adjusting the elevation of the cross-section, the equivalent longitudinal elevation of the roadway is calculated. Furthermore, this study presents a new indicator and method for assessing the roughness of the entire road surface, which comprehensively considers the elevations of all longitudinal profiles within the lane. To validate the effectiveness of the proposed new method and indicator, a comparative test was conducted using a vehicle-mounted profiler and a three-dimensional measurement system. The experimental results demonstrate significant improvements in measurement repeatability and scientific rigor, offering a new perspective and evaluation strategy for road performance assessment.
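The first step, fitting a Gaussian to the lateral distribution of vehicle centerlines, can be sketched with SciPy. The offsets and frequencies below are made-up illustrative values, not the BS 5400-10 data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    """Unnormalized Gaussian: amplitude a, center mu, width sigma."""
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Illustrative lateral offsets from lane center (m) vs. observed frequency
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
freq = np.array([5.0, 20.0, 40.0, 22.0, 6.0])
(a, mu, sigma), _ = curve_fit(gauss, x, freq, p0=[40.0, 0.0, 0.5])
print(round(mu, 2), round(sigma, 2))  # fitted center and spread of wheel paths
```

The fitted `mu` and `sigma` would then weight each longitudinal profile by how often wheels actually track over it.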

21 pages, 3277 KiB  
Article
LiDAR-Based Modeling of Individual Tree Height to Crown Base in Picea crassifolia Kom. in Northern China: Comparing Bayesian, Gaussian Process, and Random Forest Approaches
by Zhaohui Yang, Hao Yang, Zeyu Zhou, Xiangxing Wan, Huiru Zhang and Guangshuang Duan
Forests 2024, 15(11), 1940; https://doi.org/10.3390/f15111940 - 4 Nov 2024
Viewed by 437
Abstract
This study compared hierarchical Bayesian, mixed-effects Gaussian process regression, and random forest models for predicting height to crown base (HCB) in Qinghai spruce (Picea crassifolia Kom.) forests using LiDAR-derived data. All three modeling approaches were applied to a dataset of 510 trees from 16 plots in northern China. The models incorporated tree-level variables (height, diameter at breast height, crown projection area) and plot-level spatial competition indices. Model performance was evaluated using leave-one-plot-out cross-validation. The mixed-effects Gaussian process model (with an RMSE of 1.59 and MAE of 1.25) slightly outperformed the hierarchical Bayesian model and the random forest model. All models identified LiDAR-derived tree height, DBH, and LiDAR-derived crown projection area as primary factors influencing HCB. The spatial competition index (SCI) emerged as the most effective random effect, with the lowest AIC and BIC values, highlighting the importance of local competition dynamics in HCB formation. Uncertainty analysis revealed consistent patterns across the predicted values, with an average relative uncertainty of 33.89% for the Gaussian process model. These findings provide valuable insights for forest management and suggest that incorporating spatial competition indices can enhance HCB predictions.
(This article belongs to the Special Issue Estimation and Monitoring of Forest Biomass and Fuel Load Components)

19 pages, 9100 KiB  
Article
Deep Ultraviolet Excitation Photoluminescence Characteristics and Correlative Investigation of Al-Rich AlGaN Films on Sapphire
by Zhe Chuan Feng, Ming Tian, Xiong Zhang, Manika Tun Nafisa, Yao Liu, Jeffrey Yiin, Benjamin Klein and Ian Ferguson
Nanomaterials 2024, 14(21), 1769; https://doi.org/10.3390/nano14211769 - 4 Nov 2024
Viewed by 434
Abstract
AlGaN is attractive for fabricating deep ultraviolet (DUV) optoelectronic and electronic devices of light-emitting diodes (LEDs), photodetectors, high-electron-mobility field-effect transistors (HEMTs), etc. We investigated the quality and optical properties of AlxGa1−xN films with high Al fractions (60–87%) grown on sapphire substrates, including AlN nucleation and buffer layers, by metal–organic chemical vapor deposition (MOCVD). They were initially investigated by high-resolution X-ray diffraction (HR-XRD) and Raman scattering (RS). A set of formulas was deduced to precisely determine x(Al) from HR-XRD data. Screw dislocation densities in AlGaN and AlN layers were deduced. DUV (266 nm) excitation RS clearly exhibits AlGaN Raman features far superior to visible RS. The simulation on the AlGaN longitudinal optical (LO) phonon modes determined the carrier concentrations in the AlGaN layers. The spatial correlation model (SCM) analyses on E2(high) modes examined the AlGaN and AlN layer properties. These high-x(Al) AlxGa1−xN films possess large energy gaps Eg in the range of 5.0–5.6 eV and are excited by a DUV 213 nm (5.8 eV) laser for room temperature (RT) photoluminescence (PL) and temperature-dependent photoluminescence (TDPL) studies. The obtained RTPL bands were deconvoluted with two Gaussian bands, indicating cross-bandgap emission, phonon replicas, and variation with x(Al). TDPL spectra at 20–300 K of Al0.87Ga0.13N exhibit the T-dependences of the band-edge luminescence near 5.6 eV and the phonon replicas. According to the Arrhenius fitting diagram of the TDPL spectra, the activation energy (19.6 meV) associated with the luminescence process is acquired. In addition, the combined PL and time-resolved photoluminescence (TRPL) spectroscopic system with DUV 213 nm pulse excitation was applied to measure a typical AlGaN multiple-quantum well (MQW). The RT TRPL decay spectra were obtained at four wavelengths and fitted by two exponentials with fast and slow decay times of ~0.2 ns and 1–2 ns, respectively. Comprehensive studies on these Al-rich AlGaN epi-films and a typical AlGaN MQW are achieved with unique and significant results, which are useful to researchers in the field.

23 pages, 21043 KiB  
Article
Advanced Cotton Boll Segmentation, Detection, and Counting Using Multi-Level Thresholding Optimized with an Anchor-Free Compact Central Attention Network Model
by Arathi Bairi and Uma N. Dulhare
Eng 2024, 5(4), 2839-2861; https://doi.org/10.3390/eng5040148 - 1 Nov 2024
Viewed by 312
Abstract
Nowadays, cotton boll detection techniques are becoming essential for weaving and textile industries based on the production of cotton. There are limited techniques developed to segment, detect, and count cotton bolls precisely. This analysis identified several limitations and issues with these techniques, including their complex structure, low performance, time complexity, poor quality data, and so on. A proposed technique was developed to overcome these issues and enhance the performance of the detection and counting of cotton bolls. Initially, data were gathered from the dataset, and a pre-processing stage was performed to enhance image quality. An adaptive Gaussian–Wiener filter (AGWF) was utilized to remove noise from the acquired images. Then, an improved Harris Hawks arithmetic optimization algorithm (IH2AOA) was used for segmentation. Finally, an anchor-free compact central attention cotton boll detection network (A-frC2AcbdN) was utilized for cotton boll detection and counting. The proposed technique utilized an annotated dataset extracted from weakly supervised cotton boll detection and counting, aiming to enhance the accuracy and efficiency in identifying and quantifying cotton bolls in the agricultural domain. The accuracy of the proposed technique was 94%, which is higher than that of other related techniques. Similarly, the precision, recall, F1-score, and specificity of the proposed technique were 93.8%, 92.99%, 93.48%, and 92.99%, respectively.
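The denoising stage can be loosely illustrated with a fixed (non-adaptive) Gaussian blur followed by SciPy's Wiener filter on a synthetic image; this is a stand-in for, not a reproduction of, the paper's adaptive Gaussian–Wiener filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener

rng = np.random.default_rng(7)
image = np.zeros((64, 64))
image[20:40, 20:40] = 1.0                        # bright square stands in for a boll
noisy = image + rng.normal(0, 0.2, image.shape)  # additive Gaussian noise

# Gaussian smoothing, then Wiener filtering over a 5x5 neighborhood
denoised = wiener(gaussian_filter(noisy, sigma=1), mysize=5)
print(denoised.shape)
```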
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications)

21 pages, 3521 KiB  
Article
Assessment of Line Outage Prediction Using Ensemble Learning and Gaussian Processes During Extreme Meteorological Events
by Altan Unlu and Malaquias Peña
Wind 2024, 4(4), 342-362; https://doi.org/10.3390/wind4040017 - 1 Nov 2024
Viewed by 340
Abstract
Climate change is increasing the occurrence of extreme weather events, such as intense windstorms, a trend expected to worsen due to global warming. The growing intensity and frequency of these events are causing a significant number of failures in power distribution grids. However, understanding the nature of extreme wind events and predicting their impact on distribution grids can help prevent these issues, potentially mitigating their adverse effects. This study analyzes a structured method to predict distribution grid disruptions caused by extreme wind events. The method utilizes Machine Learning (ML) models, including K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Decision Trees (DTs), Gradient Boosting Machine (GBM), Gaussian Process (GP), Deep Neural Network (DNN), and Ensemble Learning, which combines RF, SVM, and GP, to analyze synthetic failure data and predict power grid outages. The study utilized meteorological information, physical fragility curves, and scenario generation for distribution systems. The approach is validated using five-fold cross-validation on the dataset, demonstrating its effectiveness in enhancing predictive capabilities against extreme wind events. Experimental results showed that the Ensemble Learning, GP, and SVM models outperformed other predictive models in the binary classification task of identifying failures or non-failures, achieving the highest performance metrics.
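The ensemble described above (combining RF, SVM, and GP) can be sketched as a soft-voting classifier in scikit-learn. The failure labels and the two features here are synthetic assumptions, not derived from real fragility curves or grid data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.uniform(0, 40, size=(200, 2))                 # wind speed (m/s), exposure
y = (X[:, 0] + 5 * (X[:, 1] > 30) > 30).astype(int)   # synthetic failure label

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("gp", GaussianProcessClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
acc = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean()
print(f"5-fold accuracy = {acc:.2f}")
```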

18 pages, 4085 KiB  
Article
Population-Level Cell Trajectory Inference Based on Gaussian Distributions
by Xiang Chen, Yibing Ma, Yongle Shi, Yuhan Fu, Mengdi Nan, Qing Ren and Jie Gao
Biomolecules 2024, 14(11), 1396; https://doi.org/10.3390/biom14111396 - 1 Nov 2024
Viewed by 620
Abstract
In the past decade, inferring developmental trajectories from single-cell data has become a significant challenge in bioinformatics. RNA velocity, with its incorporation of directional dynamics, has significantly advanced the study of single-cell trajectories. However, as single-cell RNA sequencing technology evolves, it generates complex, high-dimensional data with high noise levels. Existing trajectory inference methods, which overlook cell distribution characteristics, may perform inadequately under such conditions. To address this, we introduce CPvGTI, a Gaussian distribution-based trajectory inference method. CPvGTI utilizes a Gaussian mixture model, optimized by the Expectation–Maximization algorithm, to construct new cell populations in the original data space. By integrating RNA velocity, CPvGTI employs Gaussian Process Regression to analyze the differentiation trajectories of these cell populations. We assess CPvGTI’s performance against several state-of-the-art methods using four structurally diverse simulated datasets and four real datasets. The simulation studies indicate that CPvGTI excels in pseudo-time prediction and structural reconstruction compared to existing methods. Furthermore, the discovery of new branch trajectories in human forebrain and mouse hematopoiesis datasets confirms CPvGTI’s superior performance.
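The first stage, grouping cells into populations with an EM-fitted Gaussian mixture, can be sketched as follows. Synthetic 2-D points stand in for real expression profiles, and this is only the clustering step, not the full CPvGTI pipeline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
cells = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(100, 2)),   # population A
    rng.normal([4.0, 4.0], 0.5, size=(100, 2)),   # population B
])

# GaussianMixture runs the EM algorithm internally to fit means/covariances
gmm = GaussianMixture(n_components=2, random_state=0).fit(cells)
labels = gmm.predict(cells)
print(gmm.means_.round(1))   # component centers, near (0, 0) and (4, 4)
```

Each fitted component then serves as one "cell population" whose trajectory the downstream Gaussian process regression would model.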
(This article belongs to the Section Bioinformatics and Systems Biology)

20 pages, 5672 KiB  
Article
A Novel Approach to Enhancing the Accuracy of Prediction in Ship Fuel Consumption
by Tianrui Zhou, Jinggai Wang, Qinyou Hu and Zhihui Hu
J. Mar. Sci. Eng. 2024, 12(11), 1954; https://doi.org/10.3390/jmse12111954 - 31 Oct 2024
Viewed by 533
Abstract
Ship fuel consumption plays a crucial role not only in understanding ships’ energy efficiency but also in gaining insights into their emissions. However, enhancing the accuracy of these predictions poses significant challenges due to data limitations and the methods employed. Owing to factors such as data variability and equipment characteristics, ship fuel consumption exhibits fluctuations under specific conditions. Previous fuel consumption prediction methods primarily generate a single specific value, making it difficult to capture the volatility and variability of fuel consumption. To overcome this limitation, this paper proposes a novel method that integrates Gaussian process prediction with quantile regression theory to perform interval predictions of ship fuel consumption, providing a range of possible outcomes. Comparative analyses with traditional methods verify the feasibility of the proposed method and validate its results. The results indicate the following: (1) at a 95% confidence level, the proposed method achieves a prediction interval coverage probability of 0.98 and a prediction interval normalized average width of 0.123, which are significantly better than those of the existing backpropagation neural network (BPNN) and gradient boosting decision tree (GBDT) quantile regression models; (2) the prediction accuracy of the proposed method is 92% for point forecasts; and (3) the proposed method is applicable to the main dataset types, including both noon report and sensor datasets. These findings provide valuable insights into interval predictions of ship fuel consumption and highlight their potential applications in related fields, emphasizing the importance of accurate interval predictions in intelligent energy efficiency optimization.
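A minimal sketch of Gaussian process interval prediction in the spirit of the method above: predict a mean and a standard deviation, then form a 95% band as mean ± 1.96·std. The data and the RBF-plus-noise kernel are assumptions, and the paper's quantile-regression integration is not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(10, 20, size=(80, 1))               # e.g., ship speed (knots)
y = 0.1 * X[:, 0] ** 2 + rng.normal(0, 1.0, 80)     # fuel consumption proxy

# WhiteKernel lets the GP estimate observation noise, widening the interval
gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
mean, std = gp.predict(X, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std
coverage = np.mean((y >= lower) & (y <= upper))     # empirical interval coverage
print(f"coverage = {coverage:.2f}")
```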

19 pages, 19678 KiB  
Article
Optimizing Thermoplastic Starch Film with Heteroscedastic Gaussian Processes in Bayesian Experimental Design Framework
by Gracie M. White, Amanda P. Siegel and Andres Tovar
Materials 2024, 17(21), 5345; https://doi.org/10.3390/ma17215345 - 31 Oct 2024
Viewed by 888
Abstract
The development of thermoplastic starch (TPS) films is crucial for fabricating sustainable and compostable plastics with desirable mechanical properties. However, traditional design of experiments (DOE) methods used in TPS development are often inefficient. They require extensive time and resources while frequently failing to identify optimal material formulations. As an alternative, adaptive experimental design methods based on Bayesian optimization (BO) principles have been recently proposed to streamline material development by iteratively refining experiments based on prior results. However, most implementations are not suited to manage the heteroscedastic noise inherently present in physical experiments. This work introduces a heteroscedastic Gaussian process (HGP) model within the BO framework to account for varying levels of uncertainty in the data, improve the accuracy of the predictions, and increase the overall experimental efficiency. The aim is to find the optimal TPS film composition that maximizes its elongation at break and tensile strength. To demonstrate the effectiveness of this approach, TPS films were prepared by mixing potato starch, distilled water, glycerol as a plasticizer, and acetic acid as a catalyst. After gelation, the mixture was degassed via centrifugation and molded into films, which were dried at room temperature. Tensile tests were conducted according to ASTM D638 standards. After five iterations and 30 experiments, the films containing 4.5 wt% plasticizer and 2.0 wt% starch exhibited the highest elongation at break (M = 96.7%, SD = 5.6%), while the films with 0.5 wt% plasticizer and 7.0 wt% starch demonstrated the highest tensile strength (M = 2.77 MPa, SD = 1.54 MPa). These results demonstrate the potential of the HGP model within a BO framework to improve material development efficiency and performance in TPS film and other potential material formulations.
