Search Results (193)

Search Parameters:
Keywords = Markov Random Field

24 pages, 534 KiB  
Article
Inference for Two-Parameter Birnbaum–Saunders Distribution Based on Type-II Censored Data with Application to the Fatigue Life of Aluminum Coupon Cuts
by Omar M. Bdair
Mathematics 2025, 13(4), 590; https://doi.org/10.3390/math13040590 - 11 Feb 2025
Viewed by 295
Abstract
This study addresses the problem of parameter estimation and prediction for type-II censored data from the two-parameter Birnbaum–Saunders (BS) distribution. The BS distribution is commonly used in reliability analysis, particularly in modeling fatigue life. Accurate estimation and prediction are crucial in many fields where censored data frequently appear, such as material science, medical studies and industrial applications. This paper presents both frequentist and Bayesian approaches to estimate the shape and scale parameters of the BS distribution, along with the prediction of unobserved failure times. Random data are generated from the BS distribution under type-II censoring, where a pre-specified number of failures (m) is observed. The generated data are used to calculate the Maximum Likelihood Estimation (MLE) and Bayesian inference and evaluate their performances. The Bayesian method employs Markov Chain Monte Carlo (MCMC) sampling for point predictions and credible intervals. We apply the methods to both datasets generated under type-II censoring and real-world data on the fatigue life of 6061-T6 aluminum coupons. Although the results show that the two methods yield similar parameter estimates, the Bayesian approach offers more flexible and reliable prediction intervals. Extensive R codes are used to explain the practical application of these methods. Our findings confirm the advantages of Bayesian inference in handling censored data, especially when prior information is available for estimation. This work not only supports the theoretical understanding of the BS distribution under type-II censoring but also provides practical tools for analyzing real data in reliability and survival studies. Future research will discuss extensions of these methods to the multi-sample progressive censoring model with larger datasets and the integration of degradation models commonly encountered in industrial applications. Full article
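The abstract above outlines a concrete pipeline: simulate BS data, censor at the m-th failure, then estimate. As a rough illustration of that pipeline, not the author's R code, the following Python sketch fits the two BS parameters by maximum likelihood under type-II censoring; all numerical values are made up.

```python
# Minimal sketch (not the author's R code): simulate BS(alpha, beta) data,
# apply type-II censoring (keep the m smallest failure times), and fit the
# two parameters by maximizing the censored log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha_true, beta_true, n, m = 0.5, 2.0, 50, 35   # illustrative values

# Stochastic representation: T = beta * (a*Z/2 + sqrt((a*Z/2)**2 + 1))**2
z = rng.standard_normal(n)
t = beta_true * (alpha_true * z / 2 + np.sqrt((alpha_true * z / 2) ** 2 + 1)) ** 2
observed = np.sort(t)[:m]                         # type-II censored sample

def neg_loglik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    xi = (np.sqrt(observed / b) - np.sqrt(b / observed)) / a   # standardizing transform
    log_pdf = (norm.logpdf(xi)
               + np.log((np.sqrt(b / observed) + (b / observed) ** 1.5) / (2 * a * b)))
    # (n - m) items survive beyond the m-th observed failure time.
    log_surv = norm.logsf((np.sqrt(observed[-1] / b) - np.sqrt(b / observed[-1])) / a)
    return -(log_pdf.sum() + (n - m) * log_surv)

mle = minimize(neg_loglik, x0=[1.0, np.median(observed)], method="Nelder-Mead")
print("MLE (alpha, beta):", mle.x)
```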

23 pages, 29165 KiB  
Article
Parallax-Tolerant Weakly-Supervised Pixel-Wise Deep Color Correction for Image Stitching of Pinhole Camera Arrays
by Yanzheng Zhang, Kun Gao, Zhijia Yang, Chenrui Li, Mingfeng Cai, Yuexin Tian, Haobo Cheng and Zhenyu Zhu
Sensors 2025, 25(3), 732; https://doi.org/10.3390/s25030732 - 25 Jan 2025
Viewed by 427
Abstract
Camera arrays typically use image-stitching algorithms to generate wide field-of-view panoramas, but parallax and color differences caused by varying viewing angles often result in noticeable artifacts in the stitching result. However, existing solutions can only address specific color difference issues and are ineffective for pinhole images with parallax. To overcome these limitations, we propose a parallax-tolerant weakly supervised pixel-wise deep color correction framework for the image stitching of pinhole camera arrays. The total framework consists of two stages. In the first stage, based on the differences between high-dimensional feature vectors extracted by a convolutional module, a parallax-tolerant color correction network with dynamic loss weights is utilized to adaptively compensate for color differences in overlapping regions. In the second stage, we introduce a gradient-based Markov Random Field inference strategy for correction coefficients of non-overlapping regions to harmonize non-overlapping regions with overlapping regions. Additionally, we innovatively propose an evaluation metric called Color Differences Across the Seam to quantitatively measure the naturalness of transitions across the composition seam. Comparative experiments conducted on popular datasets and authentic images demonstrate that our approach outperforms existing solutions in both qualitative and quantitative evaluations, effectively eliminating visible artifacts and producing natural-looking composite images. Full article
(This article belongs to the Section Sensing and Imaging)

23 pages, 4903 KiB  
Article
Multiple Unmanned Aerial Vehicle Collaborative Target Search by DRL: A DQN-Based Multi-Agent Partially Observable Method
by Heng Xu and Dayong Zhu
Viewed by 576
Abstract
As Unmanned Aerial Vehicle (UAV) technology advances, UAVs have attracted widespread attention across military and civilian fields due to their low cost and flexibility. In unknown environments, UAVs can significantly reduce the risk of casualties and improve safety and covertness when performing missions. Reinforcement Learning allows agents to learn optimal policies through trial and error in the environment, enabling UAVs to respond autonomously to real-time conditions. Because of the limited observation range of UAV sensors, UAV target search missions face the challenge of partial observation. To address this, the Partially Observable Deep Q-Network (PODQN), a DQN-based algorithm, is proposed. The PODQN algorithm utilizes a Gated Recurrent Unit (GRU) to remember past observation information. It integrates a target network and decomposes the action value for better evaluation. In addition, an artificial potential field is introduced to address the potential collision problem. The simulation environment for UAV target search is constructed as a custom Markov Decision Process. Comparisons of the PODQN algorithm with a random strategy, DQN, Double DQN, Dueling DQN, VDN, and QMIX demonstrate that the proposed PODQN algorithm has the best performance under different agent configurations. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
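As a rough, hypothetical sketch of two ingredients named in the abstract, a GRU memory over partial observations and a decomposed (dueling) action value, the following PyTorch snippet defines a recurrent Q-network; dimensions and layer sizes are illustrative, and the target network, artificial potential field, and training loop are omitted.

```python
# A minimal sketch (not the paper's implementation) of a recurrent, dueling
# Q-network: a GRU summarizes the history of partial observations, and the
# action value is decomposed into a state value plus advantages.
import torch
import torch.nn as nn

class RecurrentDuelingQNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)  # remembers past observations
        self.value = nn.Linear(hidden, 1)                     # V(s)
        self.advantage = nn.Linear(hidden, n_actions)         # A(s, a)

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, obs_dim) sequence of partial observations
        x = torch.relu(self.encoder(obs_seq))
        out, h = self.gru(x, h0)
        v = self.value(out)
        a = self.advantage(out)
        q = v + a - a.mean(dim=-1, keepdim=True)              # dueling combination
        return q, h

# Illustrative usage: 4 UAVs, observation dimension 8, 5 candidate actions.
net = RecurrentDuelingQNet(obs_dim=8, n_actions=5)
q_values, hidden = net(torch.randn(4, 10, 8))
print(q_values.shape)  # torch.Size([4, 10, 5])
```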

8 pages, 1425 KiB  
Proceeding Paper
Enhanced Skin Lesion Classification Using Deep Learning, Integrating with Sequential Data Analysis: A Multiclass Approach
by Azmath Mubeen and Uma N. Dulhare
Eng. Proc. 2024, 78(1), 6; https://doi.org/10.3390/engproc2024078006 - 7 Jan 2025
Cited by 1 | Viewed by 383
Abstract
In dermatological research, accurately identifying different types of skin lesions, such as nodules, is essential for early diagnosis and effective treatment. This study introduces a novel method for classifying skin lesions, including nodules, by combining a unified attention (UA) network with deep convolutional neural networks (DCNNs) for feature extraction. The UA network processes sequential data, such as patient histories, while long short-term memory (LSTM) networks track nodule progression. Additionally, Markov random fields (MRFs) enhance pattern recognition. The integrated system classifies lesions and evaluates whether they are responding to treatment or worsening, achieving 93% accuracy in distinguishing nodules, melanoma, and basal cell carcinoma. This system outperforms existing methods in precision and sensitivity, offering advancements in dermatological diagnostics. Full article

28 pages, 3873 KiB  
Article
Bayesian Inference for Long Memory Stochastic Volatility Models
by Pedro Chaim and Márcio Poletti Laurini
Econometrics 2024, 12(4), 35; https://doi.org/10.3390/econometrics12040035 - 27 Nov 2024
Viewed by 827
Abstract
We explore the application of integrated nested Laplace approximations for the Bayesian estimation of stochastic volatility models characterized by long memory. The logarithmic variance persistence in these models is represented by a Fractional Gaussian Noise process, which we approximate as a linear combination of independent first-order autoregressive processes, lending itself to a Gaussian Markov Random Field representation. Our results from Monte Carlo experiments indicate that this approach exhibits small sample properties akin to those of Markov Chain Monte Carlo estimators. Additionally, it offers the advantages of reduced computational complexity and the mitigation of posterior convergence issues. We employ this methodology to estimate volatility dependency patterns for both the S&P 500 index and major cryptocurrencies. We thoroughly assess the in-sample fit and extend our analysis to the construction of out-of-sample forecasts. Furthermore, we propose multi-factor extensions and apply this method to estimate volatility measurements from high-frequency data, underscoring its exceptional computational efficiency. Our simulation results demonstrate that the INLA methodology achieves comparable accuracy to traditional MCMC methods for estimating latent parameters and volatilities in LMSV models. The proposed model extensions show strong in-sample fit and out-of-sample forecast performance, highlighting the versatility of the INLA approach. This method is particularly advantageous in high-frequency contexts, where the computational demands of traditional posterior simulations are often prohibitive. Full article
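The key approximation described above, representing long memory as a weighted sum of independent AR(1) components, can be illustrated with a toy simulation; the weights and autoregressive coefficients below are invented for illustration and are not the values produced by the paper's formal approximation.

```python
# Illustrative sketch: a long-memory-like process built as a weighted sum of a
# few independent AR(1) components with very different persistence levels.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
phis = np.array([0.30, 0.90, 0.99])      # short-, medium-, long-range components
weights = np.array([0.5, 0.3, 0.2])      # illustrative mixing weights

components = np.zeros((len(phis), T))
for j, phi in enumerate(phis):
    e = rng.standard_normal(T) * np.sqrt(1 - phi**2)   # unit marginal variance
    for t in range(1, T):
        components[j, t] = phi * components[j, t - 1] + e[t]

h = weights @ components                  # approximate long-memory log-volatility

# Empirical autocorrelation decays slowly, mimicking long memory.
def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print([round(acf(h, L), 3) for L in (1, 10, 50, 200)])
```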

30 pages, 10109 KiB  
Article
AI-Powered Approaches for Hypersurface Reconstruction in Multidimensional Spaces
by Kostadin Yotov, Emil Hadzhikolev, Stanka Hadzhikoleva and Mariyan Milev
Mathematics 2024, 12(20), 3285; https://doi.org/10.3390/math12203285 - 19 Oct 2024
Viewed by 1047
Abstract
The present article explores the possibilities of using artificial neural networks to solve problems related to reconstructing complex geometric surfaces in Euclidean and pseudo-Euclidean spaces, examining various approaches and techniques for training the networks. The main focus is on the possibility of training a set of neural networks with information about the available surface points, which can then be used to predict and complete missing parts. A method is proposed for using separate neural networks that reconstruct surfaces in different spatial directions, employing various types of architectures, such as multilayer perceptrons, recursive networks, and feedforward networks. Experimental results show that artificial neural networks can successfully approximate both smooth surfaces and those containing singular points. The article presents the results with the smallest error, showcasing networks of different types, along with a technique for reconstructing geographic relief. A comparison is made between the results achieved by neural networks and those obtained using traditional surface approximation methods such as Bézier curves, k-nearest neighbors, principal component analysis, Markov random fields, conditional random fields, and convolutional neural networks. Full article
(This article belongs to the Special Issue Machine Learning and Evolutionary Algorithms: Theory and Applications)
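As a minimal, hypothetical illustration of the reconstruction idea, training on the available surface points and predicting the missing part, the following scikit-learn sketch fits a small MLP to a synthetic surface with a masked patch; the surface, network size, and mask are all made up.

```python
# A minimal sketch (illustrative, not the authors' setup): fit a small MLP to
# the known points of a surface z = f(x, y) and use it to fill a missing patch.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x, y = rng.uniform(-2, 2, (2, 4000))
z = np.sin(x) * np.cos(y) + 0.1 * x * y          # synthetic "surface"

# Pretend a rectangular patch of the surface was never observed.
known = ~((np.abs(x) < 0.5) & (np.abs(y) < 0.5))
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(np.column_stack([x[known], y[known]]), z[known])

# Predict the missing patch and check the reconstruction error there.
missing = ~known
z_hat = net.predict(np.column_stack([x[missing], y[missing]]))
print("RMSE on missing patch:", np.sqrt(np.mean((z_hat - z[missing]) ** 2)))
```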

11 pages, 1942 KiB  
Article
Environmental Quality, Extreme Heat, and Healthcare Expenditures
by Douglas A. Becker
Int. J. Environ. Res. Public Health 2024, 21(10), 1322; https://doi.org/10.3390/ijerph21101322 - 5 Oct 2024
Viewed by 1180
Abstract
Although the effects of the environment on human health are well-established, the literature on the relationship between the quality of the environment and expenditures on healthcare is relatively sparse and disjointed. In this study, the Environmental Quality Index developed by the Environmental Protection Agency and heatwave days were compared against per capita Medicare spending at the county level. A generalized additive model with a Markov Random Field smoothing term was used for the analysis to ensure that spatial dependence did not undermine model results. The Environmental Quality Index was found to hold a statistically significant (p < 0.05), multifaceted nonlinear association with spending, as was the average seasonal maximum heat index. The same was not true of heatwave days, however. In a secondary analysis on the individual domains of the index, the social and built environment components were significantly related to spending, but the air, water, and land domains were not. These results provide initial support for potential benefits to healthcare financing systems from mitigating some dimensions of poor environmental quality and consistently high air temperatures. Full article
(This article belongs to the Special Issue Health Geography’s Contribution to Environmental Health Research)
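The MRF smoothing term mentioned above penalizes differences between neighbouring counties. A toy numpy sketch of that penalty, an identity-link fit with a graph Laplacian penalty on an invented five-county adjacency, is shown below; it illustrates the idea, not the study's model.

```python
# Toy illustration (not the study's model): an MRF smooth over areal units
# penalizes squared differences between neighbouring regions. For a simple
# identity-link model this reduces to minimizing ||y - f||^2 + lam * f' L f.
import numpy as np

# Hypothetical 5-county adjacency (1 = shares a border).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian of the county graph

y = np.array([10.0, 12.0, 11.0, 30.0, 9.0])   # per-capita spending, made up
lam = 2.0                                      # smoothing parameter

f_hat = np.linalg.solve(np.eye(5) + lam * L, y)  # penalized (spatially smoothed) fit
print(np.round(f_hat, 2))
```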

29 pages, 9774 KiB  
Article
High-Resolution Spatiotemporal Forecasting with Missing Observations Including an Application to Daily Particulate Matter 2.5 Concentrations in Jakarta Province, Indonesia
by I Gede Nyoman Mindra Jaya and Henk Folmer
Mathematics 2024, 12(18), 2899; https://doi.org/10.3390/math12182899 - 17 Sep 2024
Viewed by 1196
Abstract
Accurate forecasting of high-resolution particulate matter 2.5 (PM2.5) levels is essential for the development of public health policy. However, datasets used for this purpose often contain missing observations. This study presents a two-stage approach to handle this problem. The first stage is a multivariate spatial time series (MSTS) model, used to generate forecasts for the sampled spatial units and to impute missing observations. The MSTS model utilizes the similarities between the temporal patterns of the time series of the spatial units to impute the missing data across space. The second stage is the high-resolution prediction model, which generates predictions that cover the entire study domain. The second stage faces the big N problem, which gives rise to complex memory and computational problems. As a solution to the big N problem, we propose a Gaussian Markov random field (GMRF) for innovations with the Matérn covariance matrix obtained from the corresponding Gaussian field (GF) matrix by means of the stochastic partial differential equation (SPDE) method and the finite element method (FEM). For inference, we propose Bayesian statistics and integrated nested Laplace approximation (INLA) in the R-INLA package. The above approach is demonstrated using daily data collected from 13 PM2.5 monitoring stations in Jakarta Province, Indonesia, for 1 January–31 December 2022. The first stage of the model generates PM2.5 forecasts for the 13 monitoring stations for the period 1–31 January 2023, imputing missing data by means of the MSTS model. To capture temporal trends in the PM2.5 concentrations, the model applies a first-order autoregressive process and a seasonal process. The second stage involves creating a high-resolution map for the period 1–31 January 2023, for sampled and non-sampled spatiotemporal units. It uses the MSTS-generated PM2.5 predictions for the sampled spatiotemporal units and observations of the covariates altitude, population density, and rainfall for sampled and non-sampled spatiotemporal units. For the spatially correlated random effects, we apply a first-order random walk process. The validation of out-of-sample forecasts indicates a strong model fit with low mean squared error (0.001), mean absolute error (0.037), and mean absolute percentage error (0.041), and a high R² value (0.855). The analysis reveals that altitude and precipitation negatively impact PM2.5 concentrations, while population density has a positive effect. Specifically, a one-meter increase in altitude is linked to a 7.8% decrease in PM2.5, while a one-person increase in population density leads to a 7.0% rise in PM2.5. Additionally, a one-millimeter increase in rainfall corresponds to a 3.9% decrease in PM2.5. The paper makes a valuable contribution to the field of forecasting high-resolution PM2.5 levels, which is essential for providing detailed, accurate information for public health policy. The approach presents a new and innovative method for addressing the problem of missing data and high-resolution forecasting. Full article
(This article belongs to the Special Issue Advanced Statistical Application for Realistic Problems)
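One reason a GMRF helps with the big N problem is that its precision matrix is sparse. The following sketch, which is not the R-INLA/SPDE implementation used in the paper, builds the sparse precision of a first-order random walk and samples from it via its Cholesky factor; the length and precision parameter are illustrative.

```python
# Minimal sketch (not the R-INLA/SPDE implementation): a first-order random
# walk is a Gaussian Markov random field, so its precision matrix is sparse
# (tridiagonal). Sampling uses the Cholesky factor of the precision directly.
import numpy as np

n, tau = 200, 10.0                              # length and precision (illustrative)
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)    # first-difference operator
Q = tau * D.T @ D + 1e-6 * np.eye(n)            # RW1 precision, small ridge to make it proper

# Draw x ~ N(0, Q^{-1}) via the Cholesky factor of Q (Q = L L^T).
rng = np.random.default_rng(0)
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, rng.standard_normal(n))

print("average non-zeros per row of Q (GMRF sparsity):",
      (np.abs(Q) > 1e-9).sum(axis=1).mean())
```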

26 pages, 2887 KiB  
Article
Implicit Is Not Enough: Explicitly Enforcing Anatomical Priors inside Landmark Localization Models
by Simon Johannes Joham, Arnela Hadzic and Martin Urschler
Bioengineering 2024, 11(9), 932; https://doi.org/10.3390/bioengineering11090932 - 17 Sep 2024
Viewed by 1558
Abstract
The task of localizing distinct anatomical structures in medical image data is an essential prerequisite for several medical applications, such as treatment planning in orthodontics, bone-age estimation, or initialization of segmentation methods in automated image analysis tools. Currently, Anatomical Landmark Localization (ALL) is mainly solved by deep-learning methods, which cannot guarantee robust ALL predictions; there may always be outlier predictions that are far from their ground truth locations due to out-of-distribution inputs. However, these localization outliers are detrimental to the performance of subsequent medical applications that rely on ALL results. The current ALL literature relies heavily on implicit anatomical constraints built into the loss function and network architecture to reduce the risk of anatomically infeasible predictions. However, we argue that in medical imaging, where images are generally acquired in a controlled environment, we should use stronger explicit anatomical constraints to reduce the number of outliers as much as possible. Therefore, we propose the end-to-end trainable Global Anatomical Feasibility Filter and Analysis (GAFFA) method, which uses prior anatomical knowledge estimated from data to explicitly enforce anatomical constraints. GAFFA refines the initial localization results of a U-Net by approximately solving a Markov Random Field (MRF) with a single iteration of the sum-product algorithm in a differentiable manner. Our experiments demonstrate that GAFFA outperforms all other landmark refinement methods investigated in our framework. Moreover, we show that GAFFA is more robust to large outliers than state-of-the-art methods on the studied X-ray hand dataset. We further motivate this claim by visualizing the anatomical constraints used in GAFFA as spatial energy heatmaps, which allowed us to find an annotation error in the hand dataset not previously discussed in the literature. Full article
(This article belongs to the Special Issue Machine Learning-Aided Medical Image Analysis)
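As a toy, one-dimensional illustration of how a single sum-product pass can pull an outlier prediction back toward an anatomically plausible location, the following sketch combines two synthetic unary heatmaps with a pairwise offset term; it is a simplification for intuition only, not the GAFFA implementation.

```python
# Toy sketch (1-D, two landmarks) of the idea behind MRF-based refinement:
# noisy unary heatmaps are combined with a pairwise term that encodes the
# expected spatial offset between landmarks, using one sum-product message.
import numpy as np

positions = np.arange(50)                      # candidate locations along one axis

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Unary "heatmaps" (e.g., from a U-Net); landmark B's strongest peak is an outlier.
unary_a = gaussian(positions, 20, 2.0)
unary_b = 0.6 * gaussian(positions, 45, 2.0) + 0.4 * gaussian(positions, 30, 2.0)

# Pairwise compatibility: landmark B is expected ~10 units to the right of A.
offset, sigma_pair = 10.0, 3.0
pairwise = gaussian(positions[None, :] - positions[:, None], offset, sigma_pair)

# One sum-product message from A to B, then the refined belief for B.
msg_a_to_b = unary_a @ pairwise                # sum over A's candidate positions
belief_b = unary_b * msg_a_to_b

print("argmax before refinement:", unary_b.argmax())   # outlier location (45)
print("argmax after refinement:", belief_b.argmax())   # pulled toward ~30
```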

17 pages, 4687 KiB  
Article
Research on LSTM-Based Maneuvering Motion Prediction for USVs
by Rong Guo, Yunsheng Mao, Zuquan Xiang, Le Hao, Dingkun Wu and Lifei Song
J. Mar. Sci. Eng. 2024, 12(9), 1661; https://doi.org/10.3390/jmse12091661 - 16 Sep 2024
Viewed by 964
Abstract
Maneuvering motion prediction is central to the control and operation of ships, and the application of machine learning algorithms in this field is increasingly prevalent. However, challenges such as extensive training time, complex parameter tuning processes, and heavy reliance on mathematical models pose substantial obstacles to their application. To address these challenges, this paper proposes an LSTM-based modeling algorithm. First, a maneuvering motion model based on a real USV model was constructed, and typical operating conditions were simulated to obtain data. The Ornstein–Uhlenbeck process and the Hidden Markov Model were applied to the simulation data to generate noise and random data loss, respectively, thereby constructing a sample set that reflects real experiment characteristics. The sample data were then pre-processed for training, employing the MaxAbsScaler strategy for data normalization, Kalman filtering and RRF for data smoothing and noise reduction, and Lagrange interpolation for data resampling to enhance the robustness of the training data. Subsequently, based on the USV maneuvering motion model, an LSTM-based black-box motion prediction model was established. An in-depth comparative analysis and discussion of the model’s network structure and parameters were conducted, followed by the training of the ship maneuvering motion model using the optimized LSTM model. Generalization tests were then performed on a generalization set under Zigzag and turning conditions to validate the accuracy and generalization performance of the prediction model. Full article
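The data-corruption step described above, Ornstein–Uhlenbeck noise plus randomly missing samples, can be sketched in a few lines of Python; the two-state Markov chain below is a simple stand-in for the paper's Hidden Markov Model, and all parameter values are illustrative.

```python
# Illustrative sketch of the data-corruption step: Ornstein-Uhlenbeck noise is
# added to a clean simulated signal, and a two-state Markov chain (a simple
# stand-in for the paper's HMM) decides which samples are randomly lost.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.1, 500
t = np.arange(n) * dt
clean = np.sin(0.2 * t)                         # stand-in for a simulated yaw-rate signal

# Euler-Maruyama simulation of dX = -theta*X dt + sigma dW (OU noise).
theta, sigma = 1.0, 0.05
ou = np.zeros(n)
for k in range(1, n):
    ou[k] = ou[k - 1] - theta * ou[k - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

noisy = clean + ou

# Two-state Markov chain over {kept, lost}: losses arrive in short bursts.
p_keep_to_lost, p_lost_to_lost = 0.05, 0.5
lost = np.zeros(n, dtype=bool)
for k in range(1, n):
    p = p_lost_to_lost if lost[k - 1] else p_keep_to_lost
    lost[k] = rng.random() < p
noisy[lost] = np.nan                            # mark randomly lost samples

print(f"samples lost: {lost.sum()} of {n}")
```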

22 pages, 13810 KiB  
Article
An Underwater Stereo Matching Method: Exploiting Segment-Based Method Traits without Specific Segment Operations
by Xinlin Xu, Huiping Xu, Lianjiang Ma, Kelin Sun and Jingchuan Yang
J. Mar. Sci. Eng. 2024, 12(9), 1599; https://doi.org/10.3390/jmse12091599 - 10 Sep 2024
Viewed by 1098
Abstract
Stereo matching technology, enabling the acquisition of three-dimensional data, holds profound implications for marine engineering. In underwater images, irregular object surfaces and the absence of texture information make it difficult for stereo matching algorithms that rely on discrete disparity values to accurately capture the 3D details of underwater targets. This paper proposes a stereo method based on an energy function of Markov random field (MRF) with 3D labels to fit the inclined plane of underwater objects. Through the integration of a cross-based patch alignment approach with two label optimization stages, the proposed method demonstrates features akin to segment-based stereo matching methods, enabling it to handle images with sparse textures effectively. Through experiments conducted on both simulated UW-Middlebury datasets and real deteriorated underwater images, our method demonstrates superiority compared to classical or state-of-the-art methods by analyzing the acquired disparity maps and observing the three-dimensional reconstruction of the underwater target. Full article
(This article belongs to the Special Issue Underwater Observation Technology in Marine Environment)

25 pages, 94594 KiB  
Article
Harbor Detection in Polarimetric SAR Images Based on Context Features and Reflection Symmetry
by Chun Liu, Jie Gao, Shichong Liu, Chao Li, Yongchao Cheng, Yi Luo and Jian Yang
Remote Sens. 2024, 16(16), 3079; https://doi.org/10.3390/rs16163079 - 21 Aug 2024
Viewed by 848
Abstract
The detection of harbors presents difficulties related to their diverse sizes, varying morphology and scattering, and complex backgrounds. To avoid the extraction of unstable geometric features, in this paper, we propose an unsupervised harbor detection method for polarimetric SAR images using context features and polarimetric reflection symmetry. First, the image is segmented into three region types, i.e., water low-scattering regions, strong-scattering urban regions, and other regions, based on a multi-region Markov random field (MRF) segmentation method. Second, by leveraging the fact that harbors are surrounded by water on one side and a large number of buildings on the other, the coastal narrow-band area is extracted from the low-scattering regions, and the harbor regions of interest (ROIs) are determined by extracting the strong-scattering regions from the narrow-band area. Finally, by using the scattering reflection asymmetry of harbor buildings, harbors are identified based on the global threshold segmentation of the horizontal, vertical, and circular co- and cross-polarization correlation powers of the extracted ROIs. The effectiveness of the proposed method was validated with experiments on RADARSAT-2 quad-polarization images of Zhanjiang, Fuzhou, Lingshui, and Dalian, China; San Francisco, USA; and Singapore. The proposed method had high detection rates and low false detection rates in the complex coastal environment scenarios studied, far outperforming the traditional spatial harbor detection method considered for comparison. Full article

27 pages, 3403 KiB  
Review
Trajectory Analysis in Single-Particle Tracking: From Mean Squared Displacement to Machine Learning Approaches
by Chiara Schirripa Spagnolo and Stefano Luin
Int. J. Mol. Sci. 2024, 25(16), 8660; https://doi.org/10.3390/ijms25168660 - 8 Aug 2024
Viewed by 2653
Abstract
Single-particle tracking is a powerful technique to investigate the motion of molecules or particles. Here, we review the methods for analyzing the reconstructed trajectories, a fundamental step for deciphering the underlying mechanisms driving the motion. First, we review the traditional analysis based on the mean squared displacement (MSD), highlighting the sometimes-neglected factors potentially affecting the accuracy of the results. We then report methods that exploit the distribution of parameters other than displacements, e.g., angles, velocities, and times and probabilities of reaching a target, discussing how they are more sensitive in characterizing heterogeneities and transient behaviors masked in the MSD analysis. Hidden Markov Models are also used for this purpose, and these allow for the identification of different states, their populations and the switching kinetics. Finally, we discuss a rapidly expanding field—trajectory analysis based on machine learning. Various approaches, from random forest to deep learning, are used to classify trajectory motions, which can be identified by motion models or by model-free sets of trajectory features, either previously defined or automatically identified by the algorithms. We also review free software available for some of the analysis methods. We emphasize that approaches based on a combination of the different methods, including classical statistics and machine learning, may be the way to obtain the most informative and accurate results. Full article
(This article belongs to the Special Issue Single Molecule Tracking and Dynamics)
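The starting point of most of the reviewed analyses, the time-averaged mean squared displacement, is straightforward to compute; the following numpy sketch does so for a simulated Brownian trajectory, with all values illustrative.

```python
# Minimal sketch: time-averaged mean squared displacement of a single 2-D
# trajectory, the starting point of most of the analyses reviewed here.
import numpy as np

rng = np.random.default_rng(0)
steps = rng.standard_normal((1000, 2)) * 0.1   # simulated Brownian steps
traj = np.cumsum(steps, axis=0)                # x, y positions over time

def time_averaged_msd(xy, max_lag):
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = xy[lag:] - xy[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

msd = time_averaged_msd(traj, max_lag=100)
# For free diffusion in 2-D, MSD(lag) ~ 4*D*lag*dt; the slope estimates D.
print("MSD at lags 1, 10, 100:", msd[[0, 9, 99]].round(3))
```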

12 pages, 1657 KiB  
Article
Developing Theoretical Models for Atherosclerotic Lesions: A Methodological Approach Using Interdisciplinary Insights
by Amun G. Hofmann
Viewed by 963
Abstract
Atherosclerosis, a leading cause of cardiovascular disease, necessitates advanced and innovative modeling techniques to better understand and predict plaque dynamics. The present work presents two distinct hypothetical models inspired by different research fields: the logistic map from chaos theory and Markov models from stochastic processes. The logistic map effectively models the nonlinear progression and sudden changes in plaque stability, reflecting the chaotic nature of atherosclerotic events. In contrast, Markov models, including traditional Markov chains, spatial Markov models, and Markov random fields, provide a probabilistic framework to assess plaque stability and transitions. Spatial Markov models, visualized through heatmaps, highlight the spatial distribution of transition probabilities, emphasizing local interactions and dependencies. Markov random fields incorporate complex spatial interactions, inspired by advances in physics and computational biology, but present challenges in parameter estimation and computational complexity. While these hypothetical models offer promising insights, they require rigorous validation with real-world data to confirm their accuracy and applicability. This study underscores the importance of interdisciplinary approaches in developing theoretical models for atherosclerotic plaques. Full article
(This article belongs to the Special Issue Microvascular Dynamics: Insights and Applications)
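Both hypothetical model classes in the abstract are simple to prototype. The sketch below iterates a logistic map in its chaotic regime and simulates a three-state plaque-stability Markov chain; the growth rate and transition probabilities are invented for illustration and are not taken from the paper.

```python
# Toy sketch of the two hypothetical model classes discussed in the abstract:
# a logistic map for nonlinear plaque-burden dynamics and a three-state Markov
# chain for stability transitions. All parameter values are illustrative.
import numpy as np

# Logistic map: x_{n+1} = r * x_n * (1 - x_n); large r gives chaotic dynamics.
r, x = 3.9, 0.2
trajectory = []
for _ in range(20):
    x = r * x * (1 - x)
    trajectory.append(round(x, 3))
print("logistic-map plaque burden:", trajectory[:10])

# Markov chain over plaque states: stable -> vulnerable -> ruptured.
states = ["stable", "vulnerable", "ruptured"]
P = np.array([[0.90, 0.09, 0.01],     # transition probabilities (made up)
              [0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00]])    # rupture treated as absorbing
rng = np.random.default_rng(0)
s, path = 0, ["stable"]
for _ in range(15):
    s = rng.choice(3, p=P[s])
    path.append(states[s])
print("simulated state path:", path)
```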

20 pages, 16972 KiB  
Article
Sideband Vibro-Acoustics Suppression and Numerical Prediction of Permanent Magnet Synchronous Motor Based on Markov Chain Random Carrier Frequency Modulation
by Yong Chen, Bingxiao Yan, Liming Zhang, Kefu Yao and Xue Jiang
Appl. Sci. 2024, 14(11), 4808; https://doi.org/10.3390/app14114808 - 2 Jun 2024
Cited by 1 | Viewed by 783
Abstract
This paper presents a Markov chain random carrier frequency modulation (MRCFM) technique for suppressing sideband vibro-acoustic responses caused by discontinuous pulse-width modulation (DPWM) in permanent magnet synchronous motors (PMSMs) for new energy vehicles. Firstly, the spectral and order distributions of the sideband current harmonics and radial electromagnetic forces introduced by DPWM are characterized and identified. Then, the principle and implementation method of three-state Markov chain random number generation are proposed, and the particle swarm optimization (PSO) algorithm is chosen to quickly find the key parameters of transition probability and random gain. A Simulink and JMAG multi-physics field co-simulation model is built to simulate and predict the suppression effect of the MRCFM method on the sideband vibro-acoustic response. Finally, a 12-slot-10-pole PMSM test platform is built for experimental testing. The results show that the sideband current harmonics and vibro-acoustic response are effectively suppressed after optimization by the Markov chain algorithm. The constructed multi-physics field co-simulation model can accurately predict the amplitude characteristics of the sideband current harmonics and vibro-acoustic response. Full article
(This article belongs to the Section Acoustics and Vibrations)
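The core of the MRCFM idea, drawing the carrier frequency each switching period from a three-state Markov chain, can be sketched as follows; the nominal frequency, spread, and transition matrix are assumptions for illustration, not the PSO-optimized values reported in the paper.

```python
# Illustrative sketch of the random-carrier-frequency idea: a three-state
# Markov chain picks the next carrier-frequency offset each PWM period, which
# spreads the sideband energy. Transition probabilities and the frequency
# spread below are made up, not the PSO-optimized values from the paper.
import numpy as np

f_nominal = 10_000.0                     # nominal carrier frequency in Hz (assumed)
spread = 1_000.0                         # random gain: +/- offset of the two outer states
offsets = np.array([-spread, 0.0, +spread])

P = np.array([[0.2, 0.4, 0.4],           # three-state transition matrix (illustrative)
              [0.4, 0.2, 0.4],
              [0.4, 0.4, 0.2]])

rng = np.random.default_rng(0)
state, freqs = 1, []
for _ in range(10_000):
    state = rng.choice(3, p=P[state])
    freqs.append(f_nominal + offsets[state])

freqs = np.array(freqs)
# A fixed carrier concentrates switching energy at one frequency; the Markov
# chain spreads it across the three carrier values.
values, counts = np.unique(freqs, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))
```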
