Article

Underwater Gyros Denoising Net (UGDN): A Learning-Based Gyros Denoising Method for Underwater Navigation

1 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
2 College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030000, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(10), 1874; https://doi.org/10.3390/jmse12101874
Submission received: 13 August 2024 / Revised: 13 October 2024 / Accepted: 16 October 2024 / Published: 18 October 2024
(This article belongs to the Special Issue Autonomous Marine Vehicle Operations—2nd Edition)

Abstract

Autonomous Underwater Vehicles (AUVs) are widely used for hydrological monitoring, underwater exploration, and geological surveys. However, AUVs face limitations in underwater navigation due to the high cost of the Strapdown Inertial Navigation System (SINS) and Doppler Velocity Log (DVL), which hinders the development of low-cost vehicles. Micro Electro Mechanical System Inertial Measurement Units (MEMS IMUs) are widely used in industry due to their low cost; they output acceleration and angular velocity, making them suitable as an Attitude Heading Reference System (AHRS) for low-cost vehicles. However, poorly calibrated MEMS IMUs provide inaccurate angular velocity measurements, leading to rapid drift in orientation. In underwater environments, where AUVs cannot use GPS for position correction, this drift can have severe consequences. To address this issue, this paper proposes the Underwater Gyros Denoising Net (UGDN), a method based on dilated convolutions and LSTM that learns and extracts the spatiotemporal features of IMU sequences to dynamically compensate for the gyroscope’s angular velocity measurements, reducing attitude and heading errors. In the experimental section, we deployed this method on a dataset collected in field trials and achieved significant results: the accuracy of MEMS IMU data denoised by UGDN approaches that of a fiber-optic SINS, and when integrated with a DVL, it can serve as a low-cost underwater navigation solution.

1. Introduction

The advancement of science and technology has led to increased resource consumption and a growing scarcity of land resources, making the resource-rich oceans the target of competition among nations [1]. AUVs, with their outstanding underwater operational capabilities, have become one of the most widely used devices in ocean exploration [2].
Since the beginning of the new century, with the acceleration of ocean development, underwater robots, particularly AUVs, have entered the public eye. The excellent underwater mobility and high level of autonomous control of AUVs have significantly enhanced human capabilities in exploring and developing the deep sea, showing great potential in marine exploration, surveying, and mapping [3]. Precise navigation is a prerequisite for AUVs performing various underwater missions, and its accuracy directly affects the effectiveness of underwater detection, especially in hydrographic data collection and seabed mapping. Because GNSS signals cannot be received underwater, AUVs cannot correct their position in real time [4]; thus, inertial navigation systems have been widely applied to underwater navigation. Currently, the integration of SINS and DVL remains the mainstream solution for underwater navigation. Although the cooperative navigation of inertial and acoustic velocity-measuring devices generally meets requirements and keeps navigation errors within a controllable range, its limitation lies in the dependency of navigation accuracy on the precision of the equipment. The high cost of SINS and DVL limits the application and data collection of small AUVs [5]. Specifically, the cost of a fiber-optic inertial navigation system can reach tens of thousands of dollars, making it unaffordable for small teams. For AUVs that cannot frequently surface to obtain GNSS signals, acoustic navigation is a suitable alternative. Indeed, numerous studies have used USBL/LBL positioning to correct the cumulative errors of dead reckoning, but this approach inherently relies on external sensors, which increases costs. Additionally, the USBL/LBL base stations must be pre-deployed, restricting the activity range of AUVs and making them unsuitable for long-distance missions [6].
In recent years, Visual-Inertial Navigation Systems (VINS) have demonstrated significant potential in applications such as autonomous driving and drone navigation, providing accurate and robust navigation solutions [7]. However, despite the many advantages of VINS, its application to AUV navigation faces substantial limitations in underwater environments characterized by weak textures and low lighting [8]. Underwater SLAM (Simultaneous Localization and Mapping) based on imaging sonar has also made significant progress, but so far, most SLAM algorithms designed for AUVs remain computationally expensive, making it challenging to meet the real-time requirements of practical AUV implementations [9,10].
MEMS inertial sensors have dominated the market due to their low cost and relatively reliable performance. Thanks to advanced MEMS manufacturing processes, these sensors can achieve precise measurements and have significant advantages in terms of size, weight, and power consumption. However, the accuracy of IMUs is sensitive to calibration errors, including scale-factor and axis-misalignment errors [11,12]. Inaccurate calibration leads to imprecise measurements of angular velocity and linear acceleration, causing errors to accumulate rapidly during integration, which can have severe consequences [13].
To limit IMU error drift and address the issue of cumulative error fundamentally, scholars have conducted extensive research. Improving on traditional calibration methods, such as gravity-based accelerometer correction and continuous rotation-based gyroscope correction, Rohac proposed a calibration method based on a sensor error model for accurately correcting MEMS sensor errors [12]. Additionally, by extending the open-source Kalibr toolkit [14], Rehder and colleagues achieved simultaneous calibration of multiple IMU parameters [15]. Over the past decade, with the tremendous success of neural networks in various fields, many researchers have started applying them to dead reckoning [16], indoor pedestrian navigation [17], gyroscope noise reduction [18], and attitude estimation [19]. Although the networks designed in these methods are relatively shallow, the results have been remarkably effective [20]. However, most of these studies focus on datasets collected by UAVs or ground rovers; surface and underwater environments remain largely unexplored.
In this paper, we propose a lightweight CNN–LSTM network for the calibration of low-cost IMU gyroscopes. The network is built on the mathematical model of gyroscope calibration and extracts spatiotemporal features from historical IMU sequences to dynamically calibrate IMU measurements. Using the calibrated IMU measurements, we achieve orientation estimation accuracy approaching that of a fiber-optic SINS. Integrated with a DVL (an instrument that uses the Doppler effect to measure velocity underwater), this method offers a new approach to low-cost underwater navigation. The methodology framework is shown in Figure 1.
Our contributions can be summarized as follows:
  • We propose a lightweight CNN–LSTM model based on dilated convolutions to dynamically compensate for IMU measurement errors through learning.
  • We introduce a gyroscope calibration matrix (including additive and multiplicative noise) to construct a training and calibration framework, optimizing the calibration matrix and hyperparameters within the network through learning.
  • We conduct qualitative and quantitative evaluations of the proposed method on a self-made dataset. Current research mostly focuses on ground vehicles and drones; to fill the gap in waterborne and underwater navigation, we created a waterway navigation dataset using a motorboat. The experimental results show that the denoised MEMS IMU data achieve accuracy close to that of a fiber-optic SINS. When integrated with a DVL, this method provides a reliable reference for low-cost underwater navigation solutions.
This paper is organized as follows: Section 2 introduces related work, Section 3 presents the mathematical theory and our method, Section 4 qualitatively and quantitatively tests the proposed method through experiments, and Section 5 concludes and outlines future work.

2. Related Works

To improve the accuracy of IMUs, their parameters need to be calibrated. Tedaldi et al. [21] proposed an IMU calibration method that does not require external equipment. This method involves placing the IMU in different static orientations at multiple positions. To avoid unobservability in the estimation of the calibration parameters, data from at least nine different orientations of the IMU must be collected. The more orientations used, the more accurate the calibration results. Cheuk et al. [22] proposed an automatic IMU calibration method that obtains the IMU’s scale factors, biases, and angle alignment errors by rotating all axes of the IMU and holding it stationary in 12 positions. Zhang et al. [23] proposed an IMU calibration method based on a three-dimensional turntable, which not only removes biases but also focuses on correcting the angular correlations between different sensors. To address on-site IMU calibration issues, Qureshi [24] proposed a method using the Earth’s gravitational field as a reference. This method does not require external equipment and only involves a few simple rotations of the IMU within 20 min. These methods often rely on mathematical models and cannot achieve online calibration, making it difficult to meet the needs for dynamic calibration.
Aimed at VINS, Furgale et al. proposed a method that calculates the calibration parameters of the IMU offline by pre-calibrating the external parameters between the camera and the IMU, known as the Kalibr library [14]. This method has been widely used in visual–inertial odometry and was extended in 2016 to achieve axis calibration for multiple IMUs [15]. For visual–inertial odometry, Qin et al. proposed an online calibration method for dynamically optimizing model parameters [25], and also introduced VINS-Mono, which is one of the most advanced visual–inertial odometry systems [26]. However, such methods rely on additional equipment, and due to the difficulty of obtaining clear optical images under low-light conditions underwater, these methods are challenging to apply in underwater navigation.
Additionally, given the tremendous success of deep learning in various fields, researchers have also started to focus on using deep learning techniques to calibrate IMU errors. Herath et al. proposed RoNIN, which uses three network architectures based on ResNet, LSTM, and TCN to regress true velocities [27]. Chen et al. introduced IONet, a network structure based on LSTM that learns positional transformations in polar coordinates from raw IMU data and constructs an inertial odometry system, reducing errors by segmenting inertial data into independent windows [28]. Esfahani et al. proposed OriNet, a deep learning framework that achieves accurate orientation estimation for drones by learning IMU sequences [29]. Liu et al. proposed TLIO, which combines ResNet with EKF to predict displacement and uncertainty in error propagation by learning from data collected by head-mounted IMUs [30]. Nobre et al. introduced a reinforcement learning-based framework that models the IMU calibration process as a Markov decision process and uses reinforcement learning to regress optimal calibration parameters [31]. Brossard et al. proposed a method based on dilated convolutions [18], using only five layers of dilated convolutions to extract spatiotemporal features from past IMU sequences to regress the true values of orientation. This method has been validated on datasets such as EuRoC [32] and performs comparably to top visual–inertial odometry systems like OpenVINS [33].
Russo et al. developed an intelligent deep denoising autoencoder to enhance Kalman filter outputs, with its key advantage being a comprehensive noise compensation model that eliminates the need to handle each influencing factor separately [34]. Di Ciaccio et al. introduced DOES, a deep-learning-based method tailored for maritime navigation, designed to improve roll and pitch estimations from conventional AHRS [35]. Zhang et al. proposed a multi-learning fusion model for denoising low-cost IMU data, integrating convolutional autoencoders, LSTM, and Transformer multi-head attention mechanisms, which proved more effective than traditional signal processing techniques for the complex motion characteristics of ships [36]. Wang et al. presented a denoising approach combining the nonlinear generalization of SVM with the multi-resolution capabilities of the wavelet transform, merging uncertain data from INS and GPS [37].
In summary, although research on deep-learning-based IMU calibration is still in its infancy, existing studies indicate that deep-learning-based MEMS IMU calibration can be performed online without relying on external equipment. Its application in underwater navigation is feasible, providing a new approach to reducing the cost of underwater navigation.

3. Mathematical Preliminaries and Methods

3.1. Low-Cost IMU and Modeling

IMUs typically include three gyroscopes and three accelerometers, and sometimes magnetometers that detect the Earth’s magnetic field to determine the carrier’s heading. These sensors measure angular velocity and linear acceleration, from which the carrier’s attitude, velocity, and position can be calculated.
$$\mathbf{u}_n^{imu} = \begin{bmatrix} \boldsymbol{\omega}_n^{imu} \\ \mathbf{a}_n^{imu} \end{bmatrix} = \mathbf{C} \begin{bmatrix} \boldsymbol{\omega}_n \\ \mathbf{a}_n \end{bmatrix} + \mathbf{b}_n + \boldsymbol{\eta}_n \tag{1}$$
Because it suffers from axis misalignment, scale-factor errors, bias, temperature drift, and random walk, there is a discrepancy between IMU measurements and the ground truth. Equation (1) describes the sensor measurement model of the IMU, where $\mathbf{u}_n^{imu}$ represents the actual measurements from the IMU with high-frequency noise, $\boldsymbol{\omega}_n^{imu}$ denotes the actual angular velocity measurements, $\mathbf{a}_n^{imu}$ denotes the actual linear acceleration measurements, $\mathbf{b}_n$ represents the IMU measurement bias, $\boldsymbol{\eta}_n$ denotes the measurement noise, and $\mathbf{C}$ is the intrinsic calibration matrix of the IMU, which includes correction parameters for the effects of axis misalignment and scale factors on the IMU measurements. $\boldsymbol{\omega}_n$ and $\mathbf{a}_n$ represent the true angular velocity and linear acceleration, respectively.
According to [12,15,18], the matrix C in Equation (1) can be modeled as follows:
$$\mathbf{C} = \begin{bmatrix} \mathbf{S}_\omega \mathbf{M}_\omega & \mathbf{A} \\ \mathbf{0}_{3\times3} & \mathbf{S}_a \mathbf{M}_a \end{bmatrix} \approx \mathbf{I}_6 \tag{2}$$
Here, $\mathbf{M}$ corrects axis misalignment, $\mathbf{S}$ corrects scale-factor errors, $\mathbf{A}$ represents the sensitivity to gravity, and $\mathbf{I}_6$ denotes the $6 \times 6$ identity matrix.
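For concreteness, the following NumPy sketch assembles the calibration matrix of Equation (2); all numeric values are hypothetical illustrations, not calibration results from this work.

```python
import numpy as np

# Hypothetical calibration parameters for illustration only.
S_w = np.diag([1.01, 0.99, 1.02])           # gyroscope scale factors
M_w = np.array([[1.0, 0.002, -0.001],
                [0.0, 1.0,    0.003],
                [0.0, 0.0,    1.0]])         # gyroscope axis misalignment
A   = 1e-4 * np.ones((3, 3))                 # g-sensitivity of the gyroscope
S_a = np.diag([1.00, 1.01, 0.98])            # accelerometer scale factors
M_a = np.eye(3)                              # accelerometer misalignment

# Equation (2): 6x6 block matrix; for a perfect IMU it reduces to I_6.
C = np.block([[S_w @ M_w,        A],
              [np.zeros((3, 3)), S_a @ M_a]])
```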
$$\mathbf{R}_n = \mathbf{R}_{n-1} \exp(\boldsymbol{\theta}_n) \tag{3}$$
$$\boldsymbol{\theta}_n = \boldsymbol{\omega}_n^{imu}\, dt \tag{4}$$
$$\exp(\boldsymbol{\theta}_n) = \mathbf{I} + \frac{\sin\|\boldsymbol{\theta}_n\|}{\|\boldsymbol{\theta}_n\|}\, [\boldsymbol{\theta}_n]_\times + \frac{1 - \cos\|\boldsymbol{\theta}_n\|}{\|\boldsymbol{\theta}_n\|^2}\, [\boldsymbol{\theta}_n]_\times^2 \tag{5}$$
Equations (3)–(5) describe the attitude updates of the rigid body at each time step, where $[\cdot]_\times$ represents the skew-symmetric operator. They calculate the angular increments from the IMU gyroscope data and then obtain the attitude estimate through integration. Here, $\mathbf{R} \in SO(3)$ describes the rotation of the rigid body in 3D space, and $\boldsymbol{\theta}_n$ represents the angular increment at each time step. Equation (5) provides a numerical solution for the matrix exponential mapping, i.e., Rodrigues’ formula.
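A minimal sketch of this attitude update is given below, assuming a 100 Hz IMU ($dt$ = 10 ms, as in Section 3.5); the gyroscope sample is a hypothetical value.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def so3_exp(theta):
    """Rodrigues' formula, Equation (5): exponential of an angular increment (rad)."""
    angle = np.linalg.norm(theta)
    if angle < 1e-10:                       # small-angle fallback
        return np.eye(3) + skew(theta)
    K = skew(theta)
    return (np.eye(3)
            + np.sin(angle) / angle * K
            + (1 - np.cos(angle)) / angle**2 * K @ K)

# Equations (3)-(4): one attitude update step at dt = 10 ms (100 Hz IMU).
dt = 0.01
R = np.eye(3)                               # R_{n-1}
omega_imu = np.array([0.01, -0.02, 0.5])    # rad/s, hypothetical gyro sample
R = R @ so3_exp(omega_imu * dt)             # R_n = R_{n-1} exp(theta_n)
```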
$$\mathbf{v}_n = \mathbf{v}_{n-1} + \left( \mathbf{R}_{n-1}\, \mathbf{a}_n^{imu} - \mathbf{g} \right) dt \tag{6}$$
$$\mathbf{p}_n = \mathbf{p}_{n-1} + \mathbf{v}_{n-1}\, dt \tag{7}$$
The velocity and position are updated as in Equations (6) and (7). Next, we analyze the error propagation of the gyroscope. As IMUs suffer from axis misalignment, scale-factor errors, random walk, etc., the IMU output can be modeled as follows:
$$\boldsymbol{\omega}_n^{imu} = (\mathbf{I} + \mathbf{S}_\omega + \Delta\mathbf{C}_\omega)\, \boldsymbol{\omega}_n + \mathbf{b}_\omega + \boldsymbol{\eta}_\omega \tag{8}$$
$$\mathbf{a}_n^{imu} = (\mathbf{I} + \mathbf{S}_a + \Delta\mathbf{C}_a)\, \mathbf{a}_n + \mathbf{b}_a + \boldsymbol{\eta}_a \tag{9}$$
where $\boldsymbol{\omega}_n$ and $\mathbf{a}_n$ are the true values of the angular velocity and linear acceleration; $\mathbf{S}_\omega$ and $\mathbf{S}_a$ are the scale-factor error matrices of the gyroscope and accelerometer, respectively; $\Delta\mathbf{C}_\omega$ and $\Delta\mathbf{C}_a$ are the mounting (misalignment) error matrices; $\mathbf{b}$ is the bias; and $\boldsymbol{\eta}$ represents the random walk. Thus, we can derive:
$$\hat{\boldsymbol{\omega}}_n = (\mathbf{I} + \mathbf{S}_\omega + \Delta\mathbf{C}_\omega)^{-1} \left( \boldsymbol{\omega}_n^{imu} - \mathbf{b}_\omega - \boldsymbol{\eta}_\omega \right) = \hat{\mathbf{C}}_\omega\, \boldsymbol{\omega}_n^{imu} + \tilde{\boldsymbol{\omega}}_n \tag{10}$$
where $\hat{\boldsymbol{\omega}}_n$ is the corrected angular velocity, the matrix $\hat{\mathbf{C}}_\omega$ represents the correction for misalignment errors, scale-factor errors, and other multiplicative noise, and $\tilde{\boldsymbol{\omega}}_n$ denotes the correction for bias and random walk. Specifically, as misalignment errors and scale factors are constant, $\hat{\mathbf{C}}_\omega$ is considered constant. In contrast, bias and random walk are time-varying, so $\tilde{\boldsymbol{\omega}}_n$ is treated as time-varying noise. Additionally, there is high-order coupling between the gyroscope data and the body’s acceleration (acceleration measurements can also influence angular velocity [16,38]), which has a significant impact on $\tilde{\boldsymbol{\omega}}_n$.
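The sketch below illustrates how Equation (10) could be applied at inference time under our reading of the method; `net` is a placeholder for the dilated CNN–LSTM of Section 3.3, and the function name is illustrative, not the released implementation.

```python
import torch

def denoise_gyro(C_hat, net, imu_window):
    """Apply Equation (10) to the latest gyro sample.

    C_hat:      (3, 3) learned constant correction matrix (multiplicative errors)
    net:        placeholder network predicting the time-varying term omega_tilde
    imu_window: (N, 6) tensor of the past N IMU samples (gyro + accelerometer)
    returns:    (3,) corrected angular velocity
    """
    omega_imu = imu_window[-1, :3]                   # latest raw gyro sample
    omega_tilde = net(imu_window.unsqueeze(0))[0]    # time-varying correction
    return C_hat @ omega_imu + omega_tilde           # omega_hat = C_hat w + w~
```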

3.2. Simple DVL Modeling

A DVL is an instrument that measures velocity using the Doppler effect of sound waves and is widely used in underwater navigation. The Doppler effect refers to the change in frequency of a wave when the source is in relative motion with respect to the receiver. The DVL emits sound beams at various angles into the water and receives the signals reflected from underwater objects (e.g., the seabed). If relative motion exists, the frequency of the returning sound waves changes. By analyzing the frequency shift between the transmitted and received signals, the DVL calculates its velocity relative to the water layer or the seabed. Its sensor measurement model can be defined as in Equation (11):
$$f_r = f_t \left( \frac{1 - v_{beam}/c}{1 \pm v_{beam}/c} \right) \tag{11}$$
Let $f_r$ be the received frequency, $f_t$ the transmitted frequency, $v_{beam}$ the beam velocity, and $c$ the speed of sound. As the velocity of the DVL is much smaller than the speed of sound, the frequency shift $\Delta f$ can be simplified to:
$$\Delta f \approx \frac{2 f_t\, v_{beam}}{c} \tag{12}$$
The beam velocity can then be recovered as follows:
$$v_{beam} = \frac{c}{2 f_t}\, \Delta f \tag{13}$$
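As a worked example of Equations (12) and (13), consider a 614.4 kHz DVL (Table 3) observing a hypothetical 1.2 kHz Doppler shift, with a nominal sound speed of 1500 m/s:

```python
# Worked example of Equation (13); the Doppler shift is a hypothetical value.
c = 1500.0          # speed of sound in water, m/s (typical nominal value)
f_t = 614.4e3       # transmitted frequency, Hz (Table 3)
delta_f = 1.2e3     # measured frequency shift, Hz

v_beam = c * delta_f / (2 * f_t)             # Equation (13)
print(f"beam velocity = {v_beam:.3f} m/s")   # ~1.465 m/s along the beam
```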

3.3. Network Architecture

In our method, the network model is divided into two parts. The first part consists of a single dilated convolution layer that takes the gyroscope data as input and outputs $\hat{\mathbf{C}}_\omega \boldsymbol{\omega}_n^{imu}$. The second part is composed of a CNN and an LSTM and estimates $\tilde{\boldsymbol{\omega}}_n$. The network architecture is illustrated in Figure 2.
As described by Equation (14), the network uses IMU data from the past $N$ moments to predict the time-varying error $\tilde{\boldsymbol{\omega}}_n$, where $f(\cdot)$ is a nonlinear function defined by the neural network, $\mathrm{IMU}_t$ is the IMU output at time $t$, and $N$ is the size of the sliding window.
$$\tilde{\boldsymbol{\omega}}_n = f\left( \mathrm{IMU}_{t-N}, \ldots, \mathrm{IMU}_t \right) \tag{14}$$
Our network consists of four layers of dilated convolution and two layers of LSTM. This structure combines the advantages of CNNs and LSTMs, enabling it to capture both local spatial features and temporal relationships simultaneously, which makes it well suited to time series data and yields good modeling results; it has demonstrated excellent performance in tasks involving temporal information, such as natural language processing [39]. In our network, data are first fed into the dilated convolution layers for feature extraction, with each layer sequentially containing a Conv1d layer, a BatchNorm layer, a GELU activation function, and Dropout to accelerate convergence. The sequence is then passed to the LSTM layers to fully extract its temporal features, and finally the prediction results are produced. The network structure is shown in Figure 3.
Table 1 provides the detailed configuration of the network. Specifically, in our method, the size of the sliding window is defined as follows:
$$W_{size} = \max\left( \mathrm{kernel}_{dim} \times \mathrm{dilation}_{gap} \right) \tag{15}$$
which yields a window size of 112 ($7 \times 16$); thus, the network uses up to 112 past samples (i.e., data from the past 1.12 s) for learning, rather than future data.
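A minimal PyTorch sketch of the dilated CNN–LSTM branch, following the layer configuration of Table 1, is shown below; the dropout rate and the padding scheme are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class UGDNSketch(nn.Module):
    """Sketch of the dilated CNN-LSTM branch of Figure 3, following Table 1."""

    def __init__(self, dropout: float = 0.1):
        super().__init__()
        # (in_channels, out_channels, dilation) per CNN layer; kernel size 7 (Table 1).
        cfg = [(6, 16, 1), (16, 64, 4), (64, 128, 16), (128, 3, 1)]
        layers = []
        for c_in, c_out, d in cfg:
            layers += [
                nn.Conv1d(c_in, c_out, kernel_size=7, dilation=d, padding=3 * d),
                nn.BatchNorm1d(c_out),
                nn.GELU(),
                nn.Dropout(dropout),
            ]
        self.cnn = nn.Sequential(*layers)
        self.lstm = nn.LSTM(input_size=3, hidden_size=3, num_layers=2,
                            batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, 6) raw IMU samples (gyro + accelerometer)
        feats = self.cnn(x.transpose(1, 2))         # (batch, 3, window)
        out, _ = self.lstm(feats.transpose(1, 2))   # (batch, window, 3)
        return out[:, -1]                           # omega_tilde at the latest step

# Usage on a window of 112 past samples:
# model = UGDNSketch(); omega_tilde = model(torch.randn(1, 112, 6))
```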

3.4. Loss Function

In our method, the relative rotations at each moment are used as the optimization variables for the loss function. The training loss for each epoch is calculated with the Huber function [40,41]:
$$L_{Huber}(y, f(x)) = \begin{cases} \dfrac{1}{2} \left( y - f(x) \right)^2, & |y - f(x)| \le \delta \\[4pt] \delta\, |y - f(x)| - \dfrac{1}{2} \delta^2, & |y - f(x)| > \delta \end{cases} \tag{16}$$
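The following sketch applies the Huber function of Equation (16) to the log-map residual between ground-truth and estimated rotation increments; the `delta` value is an assumed hyperparameter, and `so3_log` is a helper written for this illustration.

```python
import torch
import torch.nn.functional as F

def so3_log(R):
    """Log map of a batch of rotation matrices (B, 3, 3) -> rotation vectors (B, 3)."""
    cos = ((R.diagonal(dim1=-2, dim2=-1).sum(-1) - 1.0) / 2.0).clamp(-1 + 1e-7, 1 - 1e-7)
    angle = torch.acos(cos)
    axis = torch.stack([R[..., 2, 1] - R[..., 1, 2],
                        R[..., 0, 2] - R[..., 2, 0],
                        R[..., 1, 0] - R[..., 0, 1]], dim=-1)
    return angle.unsqueeze(-1) * axis / (2 * torch.sin(angle).unsqueeze(-1) + 1e-12)

def rotation_huber_loss(R_gt, R_est, delta=0.005):
    """Huber loss (Equation (16)) on the relative-rotation residual; delta is assumed."""
    residual = so3_log(R_gt.transpose(-1, -2) @ R_est)
    return F.huber_loss(residual, torch.zeros_like(residual), delta=delta)
```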

3.5. Development Environment

Our method is implemented in PyTorch, using the ADAM optimizer [42] for network training. The learning rate is set to 0.01, with a weight decay of 0.1. Training employs a cosine annealing warm restarts scheduler. The weight parameter for the logarithmic cosine loss is set to $1 \times 10^{-6}$. The IMU operates at 100 Hz, so $\Delta t$ is set to 10 ms. We validated the algorithm on our own collected dataset, which consists of 33 sequences. Nine sequences with diverse motion patterns were selected as the test set; the remaining sequences (the first 0–160 s of each) were used for training and validation. We trained for 1800 epochs, which took approximately 270 s on our workstation (detailed parameters are shown in Table 2).
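A minimal sketch of this training setup is given below; the stand-in module and the scheduler parameters `T_0` and `T_mult` are assumptions, since the text specifies only the optimizer, learning rate, weight decay, scheduler type, and epoch count.

```python
import torch
import torch.nn as nn

# Stand-in module so the snippet runs on its own; in practice this would be
# the dilated CNN-LSTM sketched in Section 3.3.
model = nn.LSTM(input_size=3, hidden_size=3, num_layers=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.1)
# T_0 and T_mult are assumed values for the cosine warm restarts scheduler.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=600, T_mult=2)

for epoch in range(1800):
    # one pass over the training set would go here: evaluate the Huber loss of
    # Equation (16) on each batch, then backward() and optimizer.step()
    scheduler.step()
```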

4. Experimental Evaluation

This section describes the collection and preprocessing of the dataset and uses it for a qualitative and quantitative analysis of our method. In addition, we compare the performance of our method with other learning-based algorithms for attitude estimation.

4.1. Data Collection and Preprocessing

We used a motorboat (shown in Figure 4) as the data collection platform. The platform is equipped with a high-precision SINS, RTK GNSS, and a DVL. To ensure a diverse range of data types and a representative sample set, we collected trajectories at different times and under different weather conditions through various navigation tasks, including circular trajectories, irregular maneuvers, and sharp turns.
Our data collection system includes the following sensors:
  • A fiber optic strapdown inertial navigation system, including high-precision optical fiber gyroscopes and quartz flexible accelerometers, GNSS, and RTK positioning systems, providing high-frequency ground truth data (100 Hz) according to the NMEA-0183 protocol.
  • A low-precision MEMS inertial navigation system, including MEMS gyroscopes and accelerometers, and RTK GNSS, providing training data (100 Hz).
  • A Pathfinder RDI 600k DVL, providing low-frequency bottom speed information (5 Hz).
Detailed parameters can be found in Table 3.
Two INS units were installed in the cabin of the boat and synchronized using the precise time provided by GPS. The DVL was mounted on the left side of the boat at a depth of 1 m and synchronized with the INS via 1PPS. During the experiments, the boat traveled at a constant speed of 6 knots. Experiments were conducted in a natural canyon in northwest China (shown in Figure 5 and Figure 6) in April 2024 and June 2024 (both on sunny days, with calm lake surfaces or occasional light breezes), lasting a total of 10 h and collecting approximately 30 km of data.
Finally, the raw data were processed into KITTI format [43] and augmented with bottom speed and bottom height information provided by the DVL to facilitate the subsequent processing and enhance interpretability. Additionally, we visualized the trajectory by integrating the collected angular velocity, linear acceleration, and DVL bottom speed and compared it with the precise positioning provided by RTK GNSS to verify the accuracy of the data samples.

4.2. Evaluation Metrics

To quantitatively validate the effectiveness of the method, we use the following metrics to evaluate the proposed approach, which are based on the methods proposed in the EVO toolbox [44,45].

4.2.1. Absolute Orientation Error

The absolute orientation error (AOE) is defined as follows:
$$\mathrm{AOE} = \frac{1}{M} \sum_{n=1}^{M} \left\| \log\left( \mathbf{R}_n^\top \hat{\mathbf{R}}_n \right) \right\|_2^2 \tag{17}$$
AOE represents the relative rotational change between the estimated pose and the ground truth within each time interval. In our experiments, we calculate the AOE for each time interval and then average these values to obtain the AOE for the entire trajectory, where $M$ is the length of the trajectory.
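A possible implementation of Equation (17) using SciPy’s rotation utilities is sketched below (an assumption for illustration; the paper evaluates with the EVO toolbox [44,45]).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def aoe(R_gt, R_est):
    """Equation (17): mean squared norm of the SO(3) log of the relative
    rotation between ground truth and estimate (sequences of 3x3 matrices)."""
    errs = [np.linalg.norm(Rotation.from_matrix(Rg.T @ Re).as_rotvec()) ** 2
            for Rg, Re in zip(R_gt, R_est)]
    return np.mean(errs)
```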

4.2.2. Absolute Positioning Error

The absolute positioning error (APE) is defined as follows:
$$\mathrm{APE} = \sqrt{ \frac{1}{M} \sum_{i=1}^{M} \left\| \mathbf{p}_i - \hat{\mathbf{p}}_i \right\|_2^2 } \tag{18}$$
APE represents the difference between the estimated position and the ground truth, calculated as the RMSE (Root Mean Square Error).

4.2.3. Relative Positioning Error

The relative positioning error (RPE) is defined as follows:
$$\mathrm{RPE} = \sqrt{ \frac{1}{M} \sum_{i=1}^{M} \left\| \left( \mathbf{p}_{i+\Delta t} - \mathbf{p}_i \right) - \left( \hat{\mathbf{p}}_{i+\Delta t} - \hat{\mathbf{p}}_i \right) \right\|_2^2 } \tag{19}$$
RPE represents the difference between the incremental changes in position estimates and the ground truth increments within each time interval Δt, calculated as RMSE. In our experiments, Δt is set to 20 s.
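Sketches of Equations (18) and (19) are given below, again as plain NumPy stand-ins for the EVO toolbox computations; the lag parameter `k` is the number of samples spanning the chosen Δt.

```python
import numpy as np

def ape(p_gt, p_est):
    """Equation (18): RMSE of absolute position differences. p_*: (M, 3) arrays."""
    return np.sqrt(np.mean(np.sum((p_gt - p_est) ** 2, axis=1)))

def rpe(p_gt, p_est, k):
    """Equation (19): RMSE of position-increment differences over a lag of k
    samples (k corresponds to the 20 s interval used in our experiments)."""
    d_gt = p_gt[k:] - p_gt[:-k]
    d_est = p_est[k:] - p_est[:-k]
    return np.sqrt(np.mean(np.sum((d_gt - d_est) ** 2, axis=1)))
```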

4.3. Performance

We compare the following methods:
  • Raw MEMS, i.e., the orientation integrated from the uncalibrated IMU data;
  • Gyros-Net [18], based solely on dilated convolutions, which learns $\hat{\mathbf{C}}_\omega$ and $\tilde{\boldsymbol{\omega}}_n$, with $\tilde{\boldsymbol{\omega}}_n$ provided by the neural network; its training and test sets correspond to ours;
  • Mahony [46], a classical nonlinear observer used for attitude estimation, which is quite effective in estimating gyroscope bias;
  • Proposed method, the method demonstrated in Section 3.
Table 4 shows the AOE results. Our method achieved the best performance in predicting RPY (roll, pitch, and yaw).
To illustrate the differences more clearly, we selected several trajectories and plotted the orientation estimates and estimation error curves in Figure 7. The black line represents the ground truth (RPY), the red line the raw IMU output, the gold line our method, the turquoise line Gyros-Net [18], and the purple line the Mahony filter. For all methods, we used integration to compute the orientation. As shown in the figure, because it is uncalibrated and severely noisy, the error of the raw IMU output accumulates rapidly, causing significant deviations from the ground truth. Our method deviates least from the ground truth, achieving the best results.

4.4. Navigation Experiment

To demonstrate the potential of our method in low-cost underwater navigation, we conducted a series of cooperative navigation experiments, selecting four trajectories with diverse motion patterns. These trajectories cover different navigation conditions to comprehensively test the adaptability and accuracy of our method.
In the experiments, we used a DVL to collect velocity data along these trajectories. We integrated the DVL velocity data with the IMU data denoised by our method to evaluate its performance in providing accurate navigation data and compared it with the traditional SINS–DVL navigation system.
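A simplified sketch of this IMU–DVL dead reckoning is shown below, assuming DVL velocities resampled to the IMU rate; it illustrates the fusion principle rather than the exact pipeline used in the experiments.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def dead_reckon(omega_hat, v_dvl, dt=0.01):
    """omega_hat: (M, 3) denoised angular rates (rad/s); v_dvl: (M, 3) DVL
    body-frame velocities resampled to the 100 Hz IMU rate; returns the
    (M, 3) estimated trajectory."""
    R = np.eye(3)
    p = np.zeros(3)
    traj = []
    for w, v in zip(omega_hat, v_dvl):
        R = R @ Rotation.from_rotvec(w * dt).as_matrix()  # attitude, Eqs. (3)-(5)
        p = p + R @ v * dt                                # position from DVL velocity
        traj.append(p.copy())
    return np.array(traj)
```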
Preliminary results are shown in Figure 8. We find that our method performs comparably to the SINS–DVL integrated navigation system, suggesting that it has significant potential for low-cost underwater navigation while substantially reducing equipment costs.
Table 5 and Table 6 show the performance of several methods in terms of APE and RPE. We observe that our method performs comparably to the SINS–DVL integrated navigation system. The comparison indicates that our approach is effective in practical scenarios and provides new insights for reducing the cost of underwater navigation.

5. Conclusions and Future Works

In this paper, we propose UGDN, a CNN–LSTM-based gyroscope denoising method for underwater navigation. We design a training framework based on the gyroscope’s measurement and error equations, using dilated convolutions and LSTM to extract spatiotemporal features from IMU sequences and achieve dynamic compensation of the angular velocity output. We validated our method on a custom navigation dataset and compared it with other methods. The experimental results demonstrate a commendable calibration performance, with an AOE of 0.66° in yaw, an APE of 6.20 m, and an RPE of 2.75 m, approaching the performance of the SINS–DVL navigation system in the DVL integrated navigation experiments. This provides an effective reference for low-cost underwater navigation. In the future, we plan to deploy the algorithm on AUVs to achieve online dynamic error compensation.

Author Contributions

Conceptualization, C.C., C.W. and F.Z.; methodology, C.C. and C.W.; validation, S.Z. and C.W.; data curation, C.C., C.W. and S.Z.; writing—original draft preparation, C.C.; writing—review and editing, C.W. and F.Z.; visualization, C.C., T.T. and L.Z.; supervision, F.Z.; project administration, F.Z.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities (G2024KY0602).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors gratefully acknowledge the support provided by the Key Laboratory of Unmanned Underwater Transport Technology during the data collection process and the assistance of the research team members.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUV: Autonomous Underwater Vehicle
SINS: Strapdown Inertial Navigation System
DVL: Doppler Velocity Log
MEMS IMU: Micro Electro Mechanical System Inertial Measurement Unit
RTK: Real-Time Kinematic
GNSS: Global Navigation Satellite System
GPS: Global Positioning System
INS: Inertial Navigation System
USBL: Ultra Short Baseline
LBL: Long Baseline
CNN: Convolutional Neural Network
LSTM: Long Short-Term Memory
AHRS: Attitude Heading Reference System
VINS: Visual-Inertial Navigation Systems

References

  1. Paull, L.; Saeedi, S.; Seto, M.; Li, H. AUV Navigation and Localization: A Review. IEEE J. Ocean. Eng. 2014, 39, 131–149. [Google Scholar] [CrossRef]
  2. Yoerger, R.; Jakuba, M.; Bradley, M.; Bingham, B. Techniques for deep sea near bottom survey using an autonomous underwater vehicle. In Robotics Research: Results of the 12th International Symposium ISRR; Springer: Berlin/Heidelberg, Germany, 2007; pp. 416–429. [Google Scholar]
  3. Bellingham, J.G.; Rajan, K. Robotics in remote and hostile environments. Science 2007, 318, 1098–1102. [Google Scholar] [CrossRef]
  4. Yan, J.; Ban, H.; Luo, X.; Zhao, H.; Guan, X. Joint Localization and Tracking Design for AUV With Asynchronous Clocks and State Disturbances. IEEE Trans. Veh. Technol. 2019, 68, 4707–4720. [Google Scholar] [CrossRef]
  5. Su, R.; Zhang, D.; Li, C.; Gong, Z.; Venkatesan, R.; Jiang, F. Localization and Data Collection in AUV-Aided Underwater Sensor Networks: Challenges and Opportunities. IEEE Netw. 2019, 33, 86–93. [Google Scholar] [CrossRef]
  6. Zhang, B.; Ji, D.; Liu, S.; Zhu, X.; Xu, W. Autonomous underwater vehicle navigation: A review. Ocean Eng. 2023, 273, 113861. [Google Scholar] [CrossRef]
  7. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220. [Google Scholar] [CrossRef]
  8. Manzanilla, A.; Reyes, S.; Garcia, M.; Mercado, D.; Lozano, R. Autonomous navigation for unmanned underwater vehicles: Real-time experiments using computer vision. IEEE Robot. Autom. Lett. 2019, 4, 1351–1356. [Google Scholar] [CrossRef]
  9. Mallios, A.; Ridao, P.; Ribas, D.; Hernández, E. Scan matching SLAM in underwater environments. Auton. Robot. 2014, 36, 181–198. [Google Scholar] [CrossRef]
  10. Fallon, M.F.; Kaess, M.; Johannsson, H.; Leonard, J.J. Efficient AUV navigation fusing acoustic ranging and side-scan sonar. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
  11. Kozlov, A.; Kapralov, F. Millimeter-level calibration of IMU size effect and its compensation in navigation grade systems. In Proceedings of the DGON Inertial Sensors and Systems, Braunschweig, Germany, 30 December 2019. [Google Scholar]
  12. Rohac, J.; Sipos, M.; Simanek, J. Calibration of low-cost triaxial inertial sensors. IEEE Instrum. Meas. Mag. 2015, 18, 32–38. [Google Scholar]
  13. Chen, H.; Taha, T.M.; Chodavarapu, V.P. Towards improved inertial navigation by reducing errors using deep learning methodology. Appl. Sci. 2022, 12, 3645. [Google Scholar] [CrossRef]
  14. Furgale, P.; Rehder, J.; Siegwart, R. Unified temporal and spatial calibration for multi-sensor systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013. [Google Scholar]
  15. Rehder, J.; Nikolic, J.; Schneider, T.; Hinzmann, T.; Siegwart, R. Extending kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016. [Google Scholar]
  16. Brossard, M.; Barrau, A.; Bonnabel, S. AI-IMU Dead-Reckoning. IEEE Trans. Intell. Veh. 2020, 5, 585–595. [Google Scholar] [CrossRef]
  17. Hou, X.; Bergmann, J. HINNet + HeadSLAM: Robust Inertial Navigation With Machine Learning for Long-Term Stable Tracking. IEEE Sens. Lett. 2023, 7, 1–4. [Google Scholar] [CrossRef]
  18. Brossard, M.; Bonnabel, S.; Barrau, A. Denoising IMU Gyroscopes with Deep Learning for Open-Loop Attitude Estimation. IEEE Robot. Autom. Lett. 2020, 5, 4796–4803. [Google Scholar] [CrossRef]
  19. Li, H.; Chang, S.; Yao, Q.; Wan, C.; Zou, G.; Zhang, D. Robust Heading and Attitude Estimation of MEMS IMU in Magnetic Anomaly Field Using a Partially Adaptive Decoupled Extended Kalman Filter and LSTM Algorithm. IEEE Trans. Instrum. Meas. 2024, 73, 9507813. [Google Scholar] [CrossRef]
  20. Chen, C.; Pan, X. Deep Learning for Inertial Positioning: A Survey. IEEE Trans. Intell. Transp. Syst. 2024, 25, 10506–10523. [Google Scholar] [CrossRef]
  21. Tedaldi, D.; Pretto, A.; Menegatti, E. A robust and easy to implement method for IMU calibration without external equipments. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014. [Google Scholar]
  22. Cheuk, C.M.; Lau, T.K.; Lin, K.W.; Liu, Y. Automatic calibration for inertial measurement unit. In Proceedings of the International Conference on Control Automation Robotics and Vision, Guangzhou, China, 5–7 December 2012. [Google Scholar]
  23. Zhang, R.; Hoflinger, F.; Reind, L.M. Calibration of an IMU Using 3-D Rotation Platform. IEEE Sens. J. 2014, 14, 1778–1787. [Google Scholar] [CrossRef]
  24. Qureshi, U.; Golnaraghi, F. An Algorithm for the In-Field Calibration of a MEMS IMU. IEEE Sens. J. 2017, 22, 7479–7486. [Google Scholar] [CrossRef]
  25. Qin, T.; Shen, S. Online Temporal Calibration for Monocular Visual-Inertial Systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018. [Google Scholar]
  26. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  27. Herath, S.; Yan, H.; Furukawa, Y. RoNIN: Robust Neural Inertial Navigation in the Wild: Benchmark, Evaluations, & New Methods. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
  28. Chen, C.; Lu, X.; Markham, A.; Trigoni, N. Ionet: Learning to cure the curse of drift in inertial odometry. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  29. Esfahani, M.A.; Wang, H.; Wu, K.; Yuan, S. OriNet: Robust 3-D orientation estimation with a single particular IMU. IEEE Robot. Autom. Lett. 2019, 5, 399–406. [Google Scholar] [CrossRef]
  30. Liu, W.; Caruso, D.; Ilg, E.; Dong, J.; Mourikis, A.I.; Daniilidis, K.; Kumar, V.; Engel, J. TLIO: Tight Learned Inertial Odometry. IEEE Robot. Autom. Lett. 2020, 5, 5653–5660. [Google Scholar] [CrossRef]
  31. Nobre, F.; Heckman, C. Learning to calibrate: Reinforcement learning for guided calibration of visual–inertial rigs. Int. J. Robot. Res. 2019, 38, 1388–1402. [Google Scholar] [CrossRef]
  32. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
  33. Geneva, P.; Eckenhoff, K.; Lee, W.; Yang, Y.; Huang, G. OpenVINS: A Research Platform for Visual-Inertial Estimation. In Proceedings of the IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020. [Google Scholar]
  34. Russo, P.; Di Ciaccio, F.; Troisi, S. DANAE++: A smart approach for denoising underwater attitude estimation. Sensors 2021, 21, 1526. [Google Scholar] [CrossRef] [PubMed]
  35. Di Ciaccio, F.; Russo, P.; Troisi, S. DOES: A Deep Learning-Based Approach to Estimate Roll and Pitch at Sea. IEEE Access 2022, 10, 29307–29321. [Google Scholar] [CrossRef]
  36. Zhang, Z.; Li, Y.; Wang, J.; Liu, Z.; Jiang, G.; Guo, H.; Zhu, W. A hybrid data-driven and learning-based method for denoising low-cost IMU to enhance ship navigation reliability. Ocean Eng. 2024, 299, 117280. [Google Scholar] [CrossRef]
  37. Wang, Q.; Liu, S.; Zhang, B.; Zhang, C. FBLS-Based Fusion Method for Unmanned Surface Vessel Positioning Considering Denoising Algorithm. J. Mar. Sci. Eng. 2022, 10, 905. [Google Scholar] [CrossRef]
  38. Huang, F.; Wang, Z.; Xing, L.; Gao, C. A MEMS IMU Gyroscope Calibration Method Based on Deep Learning. IEEE Trans. Instrum. Meas. 2022, 71, 1–9. [Google Scholar] [CrossRef]
  39. Giezendanner, J.; Mukherjee, R.; Purri, M.; Thomas, M.; Mauerman, M.; Islam, A.K.M.l.; Tellman, B. Inferring the past: A combined CNN-LSTM deep learning framework to fuse satellites for historical inundation mapping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  40. Huber, P.J. Robust Estimation of a Location Parameter. In Breakthroughs in Statistics: Methodology and Distribution; Kotz, S., Johnson, N.L., Eds.; Springer New York: New York, NY, USA, 1992; pp. 492–518. [Google Scholar]
  41. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  42. Kingma, D.P. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  43. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The kitti dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  44. evo: Python Package for the Evaluation of Odometry and SLAM. Available online: https://github.com/MichaelGrupp/evo (accessed on 12 October 2024).
  45. Zhang, Z.; Scaramuzza, D. A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018. [Google Scholar]
  46. Mahony, R.; Hamel, T.; Pflimlin, J.M. Nonlinear Complementary Filters on the Special Orthogonal Group. IEEE Trans. Autom. Control 2008, 53, 1203–1218. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed method. The network denoises the raw IMU data to provide real-time compensation for the gyroscope output. During training, the network output is used to obtain the rotation increments for the loss function. During testing, the denoised data are used as the input for the integration to estimate the orientation, which is then combined with DVL measurements to estimate the position of the navigation system.
Figure 2. Network architecture.
Figure 3. Proposed dilated CNN–LSTM model structure for $\tilde{\boldsymbol{\omega}}_n$.
Figure 4. Data collection platform.
Figure 5. The experimental environment. (a) Satellite photo; (b) natural scene.
Figure 6. One sequence of our dataset: we set off from the dock, heading along the embankment deeper into the canyon.
Figure 7. Overview of orientation estimation and $SO(3)$ orientation error. In each subplot, the left part shows a comparison between the orientation estimated by each method and the ground truth, while the right part displays the error between the estimated values and the ground truth: (a) seq01; (b) seq03; (c) seq08.
Figure 8. Comparisons of relative error between the proposed method and others: (a) seq06; (b) seq05; (c) seq02; (d) seq09.
Table 1. The network configurations.

| Block | In/Out Channels | Kernel Size | Dilation Factor | Num Layers |
|---|---|---|---|---|
| CNN layer 1 | (6, 16) | 7 | 1 | \ |
| CNN layer 2 | (16, 64) | 7 | 4 | \ |
| CNN layer 3 | (64, 128) | 7 | 16 | \ |
| CNN layer 4 | (128, 3) | 7 | 1 | \ |
| LSTM layers | (3, 3) | \ | \ | 2 |
Table 2. Development environment.

| Block | Parameters |
|---|---|
| CPU | [email protected] GHz |
| RAM | 32 GB |
| GPU | NVIDIA GeForce RTX 4080 Laptop (12 GB) |
| Deep learning framework | PyTorch 1.13.1 |
| Operating system | Windows 11 |
Table 3. Overview of the sensor specifications and performance.

| Device | Parameter | Performance | Coordinate System |
|---|---|---|---|
| FOG SINS | Heading accuracy | 0.05° | Forward, left, up |
| | Gyro zero deviation stability | ≤0.1°/h | |
| | Accelerometer zero offset stability | ≤20 μg | |
| MEMS IMU | Heading accuracy | 0.1° | Forward, left, up |
| | Gyro zero deviation stability | ≤6°/h | |
| | Accelerometer zero offset stability | ≤20 μg | |
| DVL | Frequency | 614.4 kHz | Forward, left, up |
| | Velocity accuracy | ±0.2% ± 0.2 cm/s | |
| | Altitude | 0.2–89 m | |
Table 4. Absolute orientation error (AOE) in terms of 3D orientation (roll/pitch/yaw), in degrees, on the test sequences.

| | Proposed (R/P/Y) | Gyros-Net (R/P/Y) | Mahony (R/P/Y) | Raw (R/P/Y) |
|---|---|---|---|---|
| seq01 | 0.16/0.48/0.43 | 0.23/0.66/1.30 | 2.49/1.53/29.67 | 12.04/20.33/31.67 |
| seq02 | 0.21/0.17/0.77 | 0.35/0.13/1.09 | 2.29/1.23/24.97 | 29.55/19.82/17.37 |
| seq03 | 0.15/0.35/0.34 | 0.33/0.76/1.62 | 3.53/1.88/40.63 | 85.80/40.08/76.85 |
| seq04 | 0.25/0.32/0.74 | 0.22/0.17/0.51 | 1.11/0.54/12.93 | 12.61/17.42/11.88 |
| seq05 | 0.21/0.36/0.34 | 0.44/0.16/0.70 | 0.65/1.69/19.00 | 16.57/12.57/18.69 |
| seq06 | 0.50/0.57/1.86 | 0.57/0.59/0.72 | 1.53/1.21/23.94 | 14.88/11.38/32.93 |
| seq07 | 0.71/0.72/0.15 | 0.93/0.84/0.15 | 1.53/1.16/24.25 | 26.93/30.24/20.60 |
| seq08 | 1.02/0.32/0.50 | 1.81/0.46/1.03 | 0.97/0.28/31.42 | 67.74/26.01/67.70 |
| seq09 | 0.39/0.36/0.81 | 0.41/0.60/0.60 | 3.22/1.80/21.06 | 30.84/23.74/28.87 |
| mean | 0.40/0.41/0.66 | 0.59/0.49/0.86 | 1.92/1.26/25.32 | 32.99/22.40/34.06 |
The length of each test sequence varies.
Table 5. APE in terms of trajectory, in meters, on the test sequences.

| | SINS | Proposed | Raw |
|---|---|---|---|
| seq06 | 2.31 | 6.91 | 104.18 |
| seq05 | 4.18 | 4.95 | 28.80 |
| seq02 | 5.50 | 5.82 | 62.42 |
| seq09 | 4.61 | 7.11 | 183.64 |
| mean | 4.15 | 6.20 | 94.76 |
Table 6. RPE in terms of trajectory, in meters, on the test sequences.

| | SINS | Proposed | Raw |
|---|---|---|---|
| seq06 | 1.40 | 2.58 | 56.91 |
| seq05 | 3.27 | 3.40 | 29.96 |
| seq02 | 1.84 | 1.88 | 33.25 |
| seq09 | 2.82 | 3.14 | 63.74 |
| mean | 2.33 | 2.75 | 45.97 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
