Search Results (2,069)

Search Parameters:
Keywords = IMU

21 pages, 28562 KiB  
Article
Deep-Learning-Based Approach in Cancer-Region Assessment from HER2-SISH Breast Histopathology Whole Slide Images
by Zaka Ur Rehman, Mohammad Faizal Ahmad Fauzi, Wan Siti Halimatul Munirah Wan Ahmad, Fazly Salleh Abas, Phaik-Leng Cheah, Seow-Fan Chiew and Lai-Meng Looi
Cancers 2024, 16(22), 3794; https://doi.org/10.3390/cancers16223794 - 11 Nov 2024
Abstract
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and issues related to signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) serves as an automated alternative, employing permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between “Amplified” and “Non-Amplified” regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze compared to hematoxylin and eosin (H&E)-stained slides. Our proposed approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for regional identification and localization. Subsequently, pseudo-color maps representing each class are overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. The robustness of the model was further evaluated through k-fold cross-validation, yielding an average performance accuracy of 98%, with metrics reported alongside 95% confidence intervals to ensure statistical reliability. This method shows significant promise for clinical applications, particularly in assessing HER2 expression status in HER2-SISH histopathology images. 
It provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thus enhancing diagnostic outcomes for breast cancer treatment. Full article
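The two-stage pipeline summarized above (patch-level classification, then pseudo-color overlay of the mapped regions) can be sketched roughly as follows; the 4×4 label grid, the class-to-color assignment, and the patch size are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical per-patch labels: 0 = Normal, 1 = Amplified, 2 = Non-Amplified.
CLASS_COLORS = np.array([[0, 255, 0],    # Normal        -> green
                         [255, 0, 0],    # Amplified     -> red
                         [0, 0, 255]])   # Non-Amplified -> blue

def pseudo_color_map(patch_labels, patch_size):
    """Expand a grid of per-patch class labels into a full-resolution RGB overlay."""
    rgb = CLASS_COLORS[patch_labels]                      # (H, W, 3)
    return np.kron(rgb, np.ones((patch_size, patch_size, 1), dtype=int))

labels = np.array([[0, 0, 1, 1],
                   [0, 2, 2, 1],
                   [0, 0, 2, 2],
                   [0, 0, 0, 2]])
overlay = pseudo_color_map(labels, patch_size=8)          # (32, 32, 3) image
```

In a real pipeline the label grid would come from the ViT classifier's per-patch predictions, and the overlay would be alpha-blended onto the reconstructed WSI.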
(This article belongs to the Special Issue Feature Papers in Section "Cancer Biomarkers" in 2023–2024)

26 pages, 9809 KiB  
Article
Tightly Coupled LIDAR/IMU/UWB Fusion via Resilient Factor Graph for Quadruped Robot Positioning
by Yujin Kuang, Tongfei Hu, Mujiao Ouyang, Yuan Yang and Xiaoguo Zhang
Remote Sens. 2024, 16(22), 4171; https://doi.org/10.3390/rs16224171 - 8 Nov 2024
Viewed by 360
Abstract
Continuous accurate positioning in global navigation satellite system (GNSS)-denied environments is essential for robot navigation. Significant advances have been made with light detection and ranging (LiDAR)-inertial measurement unit (IMU) techniques, especially in challenging environments with varying lighting and other complexities. However, the LiDAR/IMU method relies on a recursive positioning principle, resulting in the gradual accumulation and dispersion of errors over time. To address these challenges, this study proposes a tightly coupled LiDAR/IMU/UWB fusion approach that integrates an ultra-wideband (UWB) positioning technique. First, a lightweight point cloud segmentation and constraint algorithm is designed to minimize elevation errors and reduce computational demands. Second, a multi-decision non-line-of-sight (NLOS) recognition module using information entropy is employed to mitigate NLOS errors. Finally, a tightly coupled framework via a resilient mechanism is proposed to achieve reliable position estimation for quadruped robots. Experimental results demonstrate that our system provides robust positioning results even in LiDAR-limited and NLOS conditions, maintaining low time costs. Full article
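One way to read the entropy-based NLOS test described above: over a window of UWB ranges, line-of-sight measurements cluster tightly while NLOS reflections scatter, so the Shannon entropy of a fixed-bin histogram separates the two. A minimal sketch, with bin edges, window length, and threshold all illustrative assumptions rather than the authors' parameters:

```python
import numpy as np

BIN_EDGES = np.linspace(4.0, 9.0, 26)   # fixed 0.2 m bins over an assumed range span

def shannon_entropy(samples):
    """Shannon entropy (bits) of a window of range measurements."""
    hist, _ = np.histogram(samples, bins=BIN_EDGES)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def is_nlos(ranges, threshold=2.0):
    """Flag a UWB range window as NLOS when its entropy is high."""
    return shannon_entropy(ranges) > threshold

rng = np.random.default_rng(0)
los_ranges = 5.0 + 0.02 * rng.standard_normal(200)   # tight cluster -> low entropy
nlos_ranges = 5.0 + rng.uniform(0.0, 3.0, 200)       # scattered     -> high entropy
```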

17 pages, 2382 KiB  
Review
A Review on the Inertial Measurement Unit Array of Microelectromechanical Systems
by Jiawei Xuan, Ting Zhu, Gao Peng, Fayou Sun and Dawei Dong
Sensors 2024, 24(22), 7140; https://doi.org/10.3390/s24227140 - 6 Nov 2024
Viewed by 322
Abstract
In recent years, microelectromechanical systems (MEMS) technology has developed rapidly, and low-precision inertial devices have achieved small volume, light weight, and mass production. Against this background, array technology has emerged to achieve high-precision inertial measurement at low cost. This paper reviews the development of MEMS inertial measurement unit (IMU) array technology. First, the common types of inertial measurement unit arrays are introduced and their basic principles explained. Second, the development status of IMU arrays is summarized by analyzing research results over the years. Then, the key technologies of IMU arrays and their development status are described, including error analysis, modeling, and calibration; data fusion; and fault detection and isolation. Finally, the characteristics and shortcomings of past research are summarized, future research directions are discussed, and some thoughts are put forward on further improving the accuracy of IMU arrays. Full article
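The basic payoff of an IMU array — averaging N nominally identical low-cost sensors reduces white noise roughly by 1/√N — can be demonstrated in a few lines. This is a simulated "virtual gyro" average under pure white noise; real arrays additionally need the error modeling, calibration, and fusion techniques the review covers:

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.5                 # deg/s, constant angular rate seen by every gyro
n_gyros, n_samples = 16, 5000
noise_std = 0.05                # per-gyro white noise level (deg/s)

# Each row simulates one low-cost gyro's noisy reading of the same rate.
readings = true_rate + noise_std * rng.standard_normal((n_gyros, n_samples))

single_std = readings[0].std()            # noise of one gyro
fused_std = readings.mean(axis=0).std()   # noise of the averaged "virtual gyro"
# White-noise theory predicts fused_std ≈ single_std / sqrt(n_gyros) = single_std / 4
```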
(This article belongs to the Section Physical Sensors)

17 pages, 13227 KiB  
Article
Robot Localization Method Based on Multi-Sensor Fusion in Low-Light Environment
by Mengqi Wang, Zengzeng Lian, María Amparo Núñez-Andrés, Penghui Wang, Yalin Tian, Zhe Yue and Lingxiao Gu
Electronics 2024, 13(22), 4346; https://doi.org/10.3390/electronics13224346 - 6 Nov 2024
Viewed by 273
Abstract
When robots perform localization in indoor low-light environments, factors such as weak and uneven lighting can degrade image quality. This degradation results in a reduced number of feature extractions by the visual odometry front end and may even cause tracking loss, thereby impacting the algorithm’s positioning accuracy. To enhance the localization accuracy of mobile robots in indoor low-light environments, this paper proposes a visual inertial odometry method (L-MSCKF) based on the multi-state constraint Kalman filter. Addressing the challenges of low-light conditions, we integrated Inertial Measurement Unit (IMU) data with stereo vision odometry. The algorithm includes an image enhancement module and a gyroscope zero-bias correction mechanism to facilitate feature matching in stereo vision odometry. We conducted tests on the EuRoC dataset and compared our method with other similar algorithms, thereby validating the effectiveness and accuracy of L-MSCKF. Full article
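The paper's image enhancement module is not specified here, but the simplest form such a step can take for under-exposed frames is power-law (gamma) correction, sketched below as an illustrative stand-in rather than the L-MSCKF module itself:

```python
import numpy as np

def gamma_enhance(img, gamma=0.5):
    """Brighten a dark uint8 image with power-law (gamma) correction."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

dark = np.full((4, 4), 25, dtype=np.uint8)   # uniformly under-exposed patch
bright = gamma_enhance(dark, gamma=0.5)      # mid-tones lifted toward ~80
```

Lifting dark pixels this way typically recovers enough contrast for the front end to extract more features before tracking is lost.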

9 pages, 1043 KiB  
Article
Construct Validity of a Wearable Inertial Measurement Unit (IMU) in Measuring Postural Sway and the Effect of Visual Deprivation in Healthy Older Adults
by Luca Ferrari, Gianluca Bochicchio, Alberto Bottari, Alessandra Scarton, Francesco Lucertini and Silvia Pogliaghi
Biosensors 2024, 14(11), 529; https://doi.org/10.3390/bios14110529 - 1 Nov 2024
Viewed by 508
Abstract
Inertial measurement units (IMUs) are valid instruments for measuring postural sway, but their ability to detect changes caused by visual deprivation in healthy older adults requires further investigation. We examined the validity of IMU-derived postural sway measures against force plates under different eye conditions in healthy older adults (32 females, 33 males). We compared the center of mass and center of pressure (CoM and CoP)-derived total length, root mean square (RMS) distance, mean velocity, and 95% confidence interval ellipse area (95% CI ellipse area). In addition, we examined the IMU sensor's ability to discriminate between open- (EO) and closed-eye (EC) conditions relative to the force plate. A significant effect of instrument and eye condition was found for almost all variables. Overall, EO and EC variables within (force plate r from 0.38 to 0.78; IMU sensor r from 0.36 to 0.69) as well as between (r from 0.50 to 0.88) instruments were moderately to strongly correlated. The EC:EO ratios of RMS distance and 95% CI ellipse area were not different between instruments, nor were those of total length (p = 0.973) and mean velocity (p = 0.703). The ratios' correlation coefficients between instruments ranged from moderate (r = 0.65) to strong (r = 0.87). The IMU sensor offers an affordable, valid alternative to a force plate for objective postural sway assessment. Full article
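The sway measures compared in the study — total path length, RMS distance, and mean velocity of the CoP (or CoM) trace — are standard posturography quantities; a minimal sketch on a toy trajectory, with an assumed 100 Hz sampling rate:

```python
import numpy as np

def sway_metrics(cop_xy, fs=100.0):
    """Total path length, RMS distance, and mean velocity of a CoP trace.

    cop_xy: (N, 2) positions (e.g. medio-lateral, antero-posterior), fs in Hz.
    """
    centered = cop_xy - cop_xy.mean(axis=0)
    seg_len = np.linalg.norm(np.diff(cop_xy, axis=0), axis=1)
    total_length = float(seg_len.sum())
    rms_distance = float(np.sqrt((np.linalg.norm(centered, axis=1) ** 2).mean()))
    mean_velocity = total_length / ((len(cop_xy) - 1) / fs)
    return total_length, rms_distance, mean_velocity

# Toy trace: a unit square traversed in four samples at 100 Hz.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
tl, rms_d, vel = sway_metrics(square)
```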

14 pages, 10386 KiB  
Article
Utilizing Inertial Measurement Units for Detecting Dynamic Stability Variations in a Multi-Condition Gait Experiment
by Yasuhiro Akiyama, Kyogo Kazumura, Shogo Okamoto and Yoji Yamada
Sensors 2024, 24(21), 7044; https://doi.org/10.3390/s24217044 - 31 Oct 2024
Viewed by 578
Abstract
This study proposes a wearable gait assessment method using inertial measurement units (IMUs) to evaluate gait ability in daily environments. By focusing on the estimation of the margin of stability (MoS), a key kinematic stability parameter, a method using a convolutional neural network was developed to estimate the MoS from IMU acceleration time-series data. The relationship between MoS and other stability indices, such as the Lyapunov exponent and the multi-site time-series (MSTS) index, was also examined using data from five IMU sensors placed on various body parts. To simulate diverse gait conditions, treadmill speed was varied, and a knee–ankle–foot orthosis was used to restrict left knee extension, inducing gait asymmetry. The model achieved over 90% accuracy in classifying MoS in both forward and lateral directions using three-axis acceleration data from the IMUs. However, the correlation between MoS and the Lyapunov exponent or MSTS index was weak, suggesting that these indices may capture different aspects of gait stability. Full article
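For context, the margin of stability the network is trained to estimate is conventionally defined (after Hof) as the distance from the extrapolated centre of mass, XcoM = x + v/ω₀ with ω₀ = √(g/l), to the base-of-support boundary. A direct sketch of that definition, with an assumed leg length:

```python
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length=0.9, g=9.81):
    """Hof's margin of stability: base-of-support edge minus the
    extrapolated CoM, XcoM = x + v / omega0, omega0 = sqrt(g / l)."""
    omega0 = math.sqrt(g / leg_length)
    return bos_edge - (com_pos + com_vel / omega0)

# CoM slightly ahead of the ankle, moving forward; toe edge 0.15 m ahead.
stable_mos = margin_of_stability(com_pos=0.02, com_vel=0.1, bos_edge=0.15)
# Faster, further-forward CoM: XcoM passes the toe edge, MoS goes negative.
unstable_mos = margin_of_stability(com_pos=0.20, com_vel=0.5, bos_edge=0.15)
```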
(This article belongs to the Special Issue Wearable Sensors for Postural Stability and Fall Risk Analyses)

21 pages, 2764 KiB  
Article
Dual Foot-Mounted Localisation Scheme Employing a Minimum-Distance-Constraint Kalman Filter Under Coloured Measurement Noise
by Yuan Xu, Jingwen Yu, Xiangpeng Wang, Teng Li and Mingxu Sun
Micromachines 2024, 15(11), 1346; https://doi.org/10.3390/mi15111346 - 31 Oct 2024
Viewed by 434
Abstract
This study proposes a dual foot-mounted localisation scheme with a minimum-distance-constraint (MDC) Kalman filter (KF) for human localisation under coloured measurement noise (CMN). The dual foot-mounted localisation employs inertial measurement units (IMUs), one on each foot, and is intended for human navigation. The KF under CMN (cKF) is then derived from the data-fusion model of the proposed navigation scheme. Finally, the MDC condition is designed and an MDC–cKF model is proposed to reduce the error in the IMUs. Empirical results showed that the proposed method effectively improves the navigation accuracy over that of MDC–KF, which neglects the effect of CMN. Full article
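The paper's MDC condition is embedded in the filter itself; purely as a geometric illustration, one can sketch the idea of constraining the two per-foot position estimates so they never collapse below a minimum separation (the symmetric projection used here is an assumption for illustration, not the authors' formulation):

```python
import numpy as np

def apply_min_distance_constraint(p_left, p_right, d_min=0.2):
    """If the two foot-position estimates drift closer than d_min metres,
    push them apart symmetrically along their connecting line."""
    diff = p_right - p_left
    d = float(np.linalg.norm(diff))
    if d == 0.0 or d >= d_min:
        return p_left, p_right
    correction = (d_min - d) / 2.0 * diff / d
    return p_left - correction, p_right + correction

# Estimates 0.1 m apart get pushed back out to the 0.2 m minimum.
left, right = apply_min_distance_constraint(np.array([0.0, 0.0]),
                                            np.array([0.1, 0.0]))
```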
(This article belongs to the Special Issue MEMS Inertial Device, 2nd Edition)

26 pages, 1397 KiB  
Article
Inertial Measurement Unit Self-Calibration by Quantization-Aware and Memory-Parsimonious Neural Networks
by Matteo Cardoni, Danilo Pietro Pau, Kiarash Rezaei and Camilla Mura
Electronics 2024, 13(21), 4278; https://doi.org/10.3390/electronics13214278 - 31 Oct 2024
Viewed by 433
Abstract
This paper introduces a methodology to compensate for inertial Micro-Electro-Mechanical System (IMU-MEMS) time-varying calibration loss induced by stress and aging. The approach relies on a periodic assessment of the sensor through specific stimuli, producing outputs which are compared with the response of a high-precision sensor used as ground truth. At each re-calibration iteration, differences with respect to the ground truth are approximated by quantization-aware trained tiny neural networks, allowing calibration-loss compensation. Due to the unavailability of aging IMU-MEMS datasets, a synthetic dataset was produced, taking into account aging effects with both linear and nonlinear calibration loss. Field-collected data under thermal stress were also used. A model relying on Dense and 1D Convolution layers was devised, compensating with an average of 1.97 g and a variance of 1.07 g², using only 903 parameters, each represented with 16 bits. The proposed model can be executed on an intelligent signal processing inertial sensor in 126.4 ms. This work represents a step forward toward in-sensor machine learning computing, integrating the computing capabilities into the sensor package that hosts the accelerometer and gyroscope sensing elements. Full article
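Quantization-aware training simulates the low-precision arithmetic in the forward pass so the network learns weights that survive rounding. A minimal "fake-quantization" sketch for 16-bit symmetric uniform quantization (the scheme is assumed for illustration; the abstract only states 16-bit parameters):

```python
import numpy as np

def fake_quantize(weights, num_bits=16):
    """Round weights to a symmetric num_bits integer grid, then map back
    to floats -- the simulation step used during quantization-aware training."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = float(np.abs(weights).max()) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale

w = np.linspace(-1.0, 1.0, 903)        # stand-in for the model's 903 parameters
wq = fake_quantize(w)
max_err = float(np.abs(w - wq).max())  # bounded by half a quantization step
```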

18 pages, 7461 KiB  
Article
The Problems and Design of a Neck Dummy
by Christopher René Torres San Miguel, José Antonio Perez Valdez, Marco Ceccarelli and Matteo Russo
Biomimetics 2024, 9(11), 661; https://doi.org/10.3390/biomimetics9110661 - 31 Oct 2024
Viewed by 336
Abstract
This paper addresses the biomechanical requirements and design of a neck dummy for assessing neck injury risks. The need for an accurate biomechanical representation of the human neck in crash tests is highlighted, emphasizing the importance of replicating the neck’s response to impacts. Existing neck dummies are reviewed to assess their similarity to human neck biomechanics, revealing several limitations. To address these gaps, a novel prototype is proposed to mimic the joint between two vertebrae using elastic elements to replicate the behavior of the intervertebral disc. The performance of the neck dummy is evaluated through experimental testing, using IMU and force sensors to monitor its response to perturbations from impacts. The reported results demonstrate that the prototype effectively simulates the intervertebral movement, offering an approach for more accurate injury assessments in crash testing. Concluding remarks suggest the potential of this design to improve the reliability of neck injury assessments in automotive safety research. Full article
(This article belongs to the Section Biomimetic Design, Constructions and Devices)

46 pages, 13038 KiB  
Review
A Review on Deep Learning for UAV Absolute Visual Localization
by Andy Couturier and Moulay A. Akhloufi
Drones 2024, 8(11), 622; https://doi.org/10.3390/drones8110622 - 29 Oct 2024
Viewed by 827
Abstract
In the past few years, the use of Unmanned Aerial Vehicles (UAVs) has expanded and now reached mainstream levels for applications such as infrastructure inspection, agriculture, transport, security, entertainment, real estate, environmental conservation, search and rescue, and even insurance. This surge in adoption can be attributed to the UAV ecosystem’s maturation, which has not only made these devices more accessible and cost effective but has also significantly enhanced their operational capabilities in terms of flight duration and embedded computing power. In conjunction with these developments, the research on Absolute Visual Localization (AVL) has seen a resurgence driven by the introduction of deep learning to the field. These new approaches have significantly improved localization solutions in comparison to the previous generation of approaches based on traditional computer vision feature extractors. This paper conducts an extensive review of the literature on deep learning-based methods for UAV AVL, covering significant advancements since 2019. It retraces key developments that have led to the rise in learning-based approaches and provides an in-depth analysis of related localization sources such as Inertial Measurement Units (IMUs) and Global Navigation Satellite Systems (GNSSs), highlighting their limitations and advantages for more effective integration with AVL. The paper concludes with an analysis of current challenges and proposes future research directions to guide further work in the field. Full article

12 pages, 1699 KiB  
Article
Multi-Activity Step Counting Algorithm Using Deep Learning Foot Flat Detection with an IMU Inside the Sole of a Shoe
by Quentin Lucot, Erwan Beurienne and Michel Behr
Sensors 2024, 24(21), 6927; https://doi.org/10.3390/s24216927 - 29 Oct 2024
Viewed by 419
Abstract
Step counting devices have previously been shown to be efficient in a variety of applications such as athletic training or patient care programs. Various sensor placements and algorithms have been experimented with, with a best mean absolute percentage error (MAPE) close to 1% in simple mono-activity walking conditions. In this study, an existing running shoe was instrumented with an inertial measurement unit (IMU) and used in multi-activity trials, at various speeds and including several transition phases. A total of 21 participants with diverse profiles (gender, age, BMI, activity style) completed the trial. The recorded data were used to develop a step counting algorithm based on a deep learning approach, further validated through a k-fold cross-validation process. The results revealed that step counts were highly correlated with the gyroscope and accelerometer norms, and secondarily with vertical acceleration. Reducing the input data to only those three vectors caused a very small decrease in prediction performance. After fine-tuning the algorithm, a MAPE of 0.75% was obtained. Our results show that such high performance can be expected even in multi-activity conditions and with low computational resource needs, making this approach suitable for embedded devices. Full article
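The paper's counter is a learned foot-flat detector; the classical baseline such approaches improve on can be as simple as counting upward threshold crossings of the gyroscope norm. A sketch on a synthetic swing signal (threshold, frequency, and amplitude are illustrative assumptions):

```python
import numpy as np

def count_steps(gyro_norm, threshold=1.0):
    """Count steps as upward threshold crossings of the gyroscope norm."""
    above = gyro_norm > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Synthetic signal: four 1 Hz swing lobes over 4 s, sampled at ~100 Hz.
t = np.linspace(0.0, 4.0, 400)
gyro_norm = 1.5 * np.clip(np.sin(2 * np.pi * t), 0.0, None)
steps = count_steps(gyro_norm)
```

Threshold crossing works in mono-activity walking but degrades at transitions and varied speeds, which is precisely what motivates the learned detector.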
(This article belongs to the Section Wearables)

20 pages, 15485 KiB  
Article
Integrated Navigation Method for Orchard-Dosing Robot Based on LiDAR/IMU/GNSS
by Wang Wang, Jifeng Qin, Dezhao Huang, Furui Zhang, Zhijie Liu, Zheng Wang and Fuzeng Yang
Agronomy 2024, 14(11), 2541; https://doi.org/10.3390/agronomy14112541 - 28 Oct 2024
Viewed by 408
Abstract
To enhance the localization reliability and obstacle avoidance performance of the dosing robot in complex orchards, this study proposed an integrated navigation method using LiDAR, IMU, and GNSS. First, the tightly coupled LIO-SAM algorithm was used to construct an orchard grid map for path planning and obstacle avoidance. Then, a global localization model based on RTK-GNSS was developed to achieve accurate and efficient initial localization of the robot's coordinates and heading, and a Kalman filter was applied to integrate GNSS and IMU to improve robustness. Next, an improved A* algorithm was introduced to ensure the global operational path maintained a safe distance from obstacles, while the DWA algorithm handled dynamic obstacle avoidance. Field tests showed that the global localization model achieved an accuracy of 2.215 cm, with a standard deviation of 1 cm, demonstrating stable positioning performance. The global path maintained an average safe distance of 50.75 cm from obstacles in the map, and the robot exhibited a maximum absolute lateral deviation of 9.82 cm, with an average of 4.16 cm, while maintaining a safe distance of 1 m from dynamic obstacles. Overall, the robot demonstrated smooth and reliable autonomous navigation, successfully completing its tasks. Full article
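A common way to make a grid-based A* planner keep a safe distance from obstacles is to inflate them before planning; this sketch uses a Chebyshev-radius inflation and is an illustration of the general technique, not the paper's improved A*:

```python
import numpy as np

def inflate_obstacles(grid, radius):
    """Mark every cell within `radius` cells (Chebyshev distance) of an
    obstacle as occupied, so any planned path keeps a safety margin."""
    inflated = grid.copy()
    h, w = grid.shape
    for r, c in np.argwhere(grid == 1):
        inflated[max(0, r - radius):min(h, r + radius + 1),
                 max(0, c - radius):min(w, c + radius + 1)] = 1
    return inflated

grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1                             # single obstacle in the middle
safe = inflate_obstacles(grid, radius=1)   # becomes a 3x3 occupied block
```

Running A* on the inflated grid guarantees the path center line stays at least `radius` cells from any true obstacle.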
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)

9 pages, 1409 KiB  
Communication
Physical Frailty Prediction Using Cane Usage Characteristics during Walking
by Haruki Toda and Takaaki Chin
Sensors 2024, 24(21), 6910; https://doi.org/10.3390/s24216910 - 28 Oct 2024
Viewed by 411
Abstract
This study aimed to compare the characteristics of the accelerations and angular velocities, obtained by an inertial measurement unit (IMU) attached to a cane, between older people with and without physical frailty. Community-dwelling older people walked at a comfortable speed using a cane with a built-in IMU. Physical frailty was assessed using exercise-related items extracted from the Kihon Check List. The efficacy of five machine learning models in distinguishing older people with physical frailty was investigated. This study included 48 older people, of whom 24 were frail and 24 were not. Compared with the non-frail participants, the older people with physical frailty had smaller root mean square values of acceleration in the vertical and anteroposterior directions and of angular velocity in the anteroposterior direction (p < 0.001, r = 0.36; p < 0.001, r = 0.29; p < 0.001, r = 0.30, respectively) and a larger mean power frequency value in the vertical direction (p = 0.042, r = 0.18). The decision tree model classified physical frailty most effectively, with an accuracy, F1 score, and area under the curve of 78.6%, 91.8%, and 0.81, respectively. The characteristics of cane usage by older adults with physical frailty, captured by an IMU-instrumented cane, can be utilized to evaluate and determine physical frailty in their usual environments. Full article
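The two discriminative features reported above — the root mean square value and the mean power frequency of an IMU axis — are straightforward to compute; a sketch on a pure 2 Hz tone with an assumed 100 Hz sampling rate:

```python
import numpy as np

def rms(signal):
    """Root mean square value of a 1-D signal."""
    return float(np.sqrt(np.mean(np.square(signal))))

def mean_power_frequency(signal, fs):
    """Power-spectrum-weighted mean frequency (Hz) of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float((freqs * spectrum).sum() / spectrum.sum())

fs = 100.0
t = np.arange(0.0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 2.0 * t)   # pure 2 Hz oscillation, RMS = 1/sqrt(2)
```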

20 pages, 11540 KiB  
Article
Autonomous Landing Strategy for Micro-UAV with Mirrored Field-of-View Expansion
by Xiaoqi Cheng, Xinfeng Liang, Xiaosong Li, Zhimin Liu and Haishu Tan
Sensors 2024, 24(21), 6889; https://doi.org/10.3390/s24216889 - 27 Oct 2024
Viewed by 442
Abstract
Positioning and autonomous landing are key technologies for implementing autonomous flight missions across various fields in unmanned aerial vehicle (UAV) systems. This research proposes a visual positioning method based on mirrored field-of-view expansion, providing a visual-based autonomous landing strategy for quadrotor micro-UAVs (MAVs). The forward-facing camera of the MAV obtains a top view through a view transformation lens while retaining the original forward view. Subsequently, the MAV camera captures the ground landing markers in real-time, and the pose of the MAV camera relative to the landing marker is obtained through a virtual-real image conversion technique and the R-PnP pose estimation algorithm. Then, using a camera-IMU external parameter calibration method, the pose transformation relationship between the UAV camera and the MAV body IMU is determined, thereby obtaining the position of the landing marker’s center point relative to the MAV’s body coordinate system. Finally, the ground station sends guidance commands to the UAV based on the position information to execute the autonomous landing task. The indoor and outdoor landing experiments with the DJI Tello MAV demonstrate that the proposed forward-facing camera mirrored field-of-view expansion method and landing marker detection and guidance algorithm successfully enable autonomous landing with an average accuracy of 0.06 m. The results show that this strategy meets the high-precision landing requirements of MAVs. Full article
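The frame chaining described above — camera pose relative to the marker from PnP, composed with the camera-IMU extrinsics to express the marker in the body frame — is a product of homogeneous transforms. A sketch with made-up poses (the identity rotations and the listed translations are illustrative assumptions):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed example: body->camera is a pure 5 cm forward offset (extrinsics),
# camera->marker is a pure translation as recovered by a PnP solver.
T_body_cam = make_T(np.eye(3), [0.05, 0.0, 0.0])
T_cam_marker = make_T(np.eye(3), [1.20, 0.0, -0.80])

T_body_marker = T_body_cam @ T_cam_marker   # marker pose in the body frame
marker_in_body = T_body_marker[:3, 3]       # position sent to the ground station
```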
(This article belongs to the Section Navigation and Positioning)

22 pages, 16144 KiB  
Article
Study of Five-Hundred-Meter Aperture Spherical Telescope Feed Cabin Time-Series Prediction Studies Based on Long Short-Term Memory–Self-Attention
by Shuai Peng, Minghui Li, Benning Song, Dongjun Yu, Yabo Luo, Qingliang Yang, Yu Feng, Kaibin Yu and Jiaxue Li
Sensors 2024, 24(21), 6857; https://doi.org/10.3390/s24216857 - 25 Oct 2024
Viewed by 398
Abstract
The Five-hundred-meter Aperture Spherical Telescope (FAST), as the world's most sensitive single-dish radio telescope, necessitates highly accurate positioning of its feed cabin to realize its full observational potential. Traditional positioning methods rely on GNSS and IMU integrated with total station (TS) devices, but GNSS and TS devices are vulnerable to signal and environmental disruptions, which can significantly diminish positioning accuracy and even force observations to stop. To address these challenges, this study introduces a novel time-series prediction model that integrates Long Short-Term Memory (LSTM) networks with a Self-Attention mechanism. This model can maintain the precision of feed cabin positioning when the measurement devices fail. Experimental results show that our LSTM-Self-Attention model achieves a Mean Absolute Error (MAE) of less than 10 mm and a Root Mean Square Error (RMSE) of approximately 12 mm, with the errors across different axes following a near-normal distribution. This performance meets the FAST measurement precision requirement of 15 mm, a standard derived from engineering practice where measurement accuracy is set at one-third of the control accuracy, which is around 48 mm (according to the official threshold analysis of the FAST focus cabin). This result not only compensates for the shortcomings of traditional methods in consistently solving feed cabin positioning, but also demonstrates the model's ability to handle complex time-series data under specific conditions, such as sensor failures, thus providing a reliable tool for the stable operation of highly sensitive astronomical observations. Full article
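The Self-Attention half of the model weighs every time step of the input window against every other. A minimal scaled dot-product sketch (identity query/key/value projections are assumed for brevity, where a trained layer would learn W_q, W_k, and W_v):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a (T, d) sequence."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X, weights

X = np.array([[1.0, 0.0],   # three time steps, two features each
              [0.0, 1.0],
              [1.0, 0.0]])
out, attn = self_attention(X)
```

Identical time steps (rows 0 and 2 here) attend identically, so their outputs match; in the full model this layer sits on top of the LSTM's hidden states.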
(This article belongs to the Section Sensor Networks)
