I. INTRODUCTION

High accuracy is fundamental in some LEO applications like rendez-vous and docking, orbit determination and formation flying. Rendez-vous and docking is the scenario in which a chaser spacecraft must approach and dock with a target spacecraft, in a smooth and safe manner, for example a space vehicle attempting to join the International Space Station. High accuracy is also needed for formation flying, where a set of satellites has to be maintained in a fixed geometrical constellation. Besides these, highly accurate measurement of position and velocity is also needed for orbit determination.

Conventional GPS receivers suffer from inaccuracy. Lack of accuracy can be a result of receiving signals from many surrounding transmitters at a very low signal-to-noise ratio, in the order of -150 dB. Due to the satellite structure, an incident signal is reflected and several replicas of the conveyed signal are present at the receiver input. Depending on the size and the structure of the LEO or space vehicle (from a few meters to tens of meters), the multipath signal gives considerable positioning errors that have to be mitigated.

Fig. 1. Multipath scenario composed of a direct ray (Line Of Sight) and two reflected rays from the satellite structure.

Fig. 2. Multipath scenario when several GPS satellites exist (labels: GPS Sat, LEO Sat, Multipath 1-4): direct and reflected rays related to different satellites add up at the receiver input.

Figure 1 shows a possible scenario in which multipath rays are reflected from the solar panels and the metallic structure. In order to fight against the multipath influence, we propose to mitigate it in the range domain using a Neural Network (NN) technique. It will be shown below that this technique outperforms traditional mitigation methods like Narrow Correlators and double delta [3-4], [8].

This paper is organized as follows. Section II presents the architecture of the navigation system on-board a LEO satellite and describes the position of the NN inside the system and related topics. Section III explains the NN architecture and the pre-processing procedure applied to the training data; the learning rules are also given in this part. Section IV presents simulation results and discusses the different influencing factors, such as the number of correlators and the IF filter bandwidth. Section V concludes the paper.

This work was supported by the European Space Agency (ESTEC contract number 18824/05/NL/AG). The authors thank the project team, M3Systems, GMV, TeSA, ENAC and UPC, for their support.
H. Abdulkader is a Professor at the University of Aleppo (Syria) and a project researcher at the TeSA laboratory (email: hasan.abdulkader@tesa.prd.fr).
D. Roviras is a Professor at IRIT/INP-ENSEEIHT/TeSA, 2 Rue C. Camichel, 31071 Toulouse, France (email: [email protected]).
R. Chaggara is a research engineer at the TeSA laboratory.
W. Vigneau is head of the radio navigation unit at M3S (email: [email protected]).
Professor F. Castanié is the Director of the TeSA laboratory (Telecommunications for Space and Aeronautics), 14-16 Port de Sainte Etienne, 31000 Toulouse, France (email: francis.castanie@tesa.prd.fr).
Fig. 4. Detailed schema illustrating the NN position in the GPS signal processing part (I and Q integrate-and-dump filter banks, carrier and code NCOs, C/A code generator and code loop filter; the rectified multicorrelator output feeds the Neural Network, which delivers the pseudo-range and tracking error estimates).

In [1] and [6], NN have been successfully used to identify and predistort high power amplifiers. In [9], a NN is implemented on an ASIC for use in on-board regenerative satellites.

The performance of a neural network depends on several factors, and a rigorous definition of these elements is thus crucial to the design of an efficient network. Data pre-processing, synaptic weight initialization, the learning rate and the learning algorithm are major points that play an important role in the network performance. In this paper we use the Back Propagation (BP) algorithm [5].

It is worth mentioning here that the NN is trained in a supervised manner. The NN input is the vector of correlator outputs as given in equation (3). The NN is trained to estimate the tracking errors of the code delay and carrier phase. The desired output signals are calculated offline.

Due to the high computational complexity, the training data set is calculated offline. Learning of the NN for the on-board GNSS receiver is also done prior to launch, i.e. the synaptic weights are calculated on Earth using a computationally powerful computer. After the convergence of the NN, i.e. when the synaptic weights converge asymptotically to a stationary solution, the resulting synaptic weights are stored in order to be used on the satellite. On-board, given the output of the correlators as an input vector, the NN executes a forward computation to estimate the code and phase tracking errors.

A. Pre-processing

In the preceding section, we explained the envisaged NN and its input and output vectors. They are respectively the vector of correlator outputs and the vector constituted of the estimated delay tracking error and phase tracking error.

The correlator outputs are accumulated over the code sequence, and a high correlation occurs when the conveyed code sequence coincides with the locally generated code. The theoretical autocorrelation of a code sequence has a triangular shape for code delays |τ| ≤ Tc and a zero value for |τ| ≥ Tc, where Tc is the chip period.

Fig. 5. Correlation function shape in relation with the IF filter bandwidth. A larger bandwidth gives a better (sharper) correlation function.

In order to overcome large correlation values, a normalization of the input vector is crucial before applying it to the NN input. Figure 5 depicts simulated correlations after this normalization; more discussion about this figure is given below.

Beside normalization, a closer look at the correlation function shows that the most important data allowing the distinction between different multipath rays is concentrated around τ = 0. Simulation results allow determining the optimum choice of the correlation interval to be considered and the number of samples inside this interval.
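To make the pre-processing concrete, the following minimal sketch builds a bank of correlator samples from an ideal triangular autocorrelation, adds two illustrative multipath replicas, and normalizes the resulting vector before it would be fed to the NN. The names (ideal_autocorrelation, correlator_vector, normalize) and the single-chip triangle model are illustrative assumptions, not identifiers or models taken from the paper.

import numpy as np

Tc = 1.0  # chip period (normalized units)

def ideal_autocorrelation(tau):
    """Triangular code autocorrelation: 1 - |tau|/Tc for |tau| <= Tc, 0 elsewhere."""
    return np.maximum(0.0, 1.0 - np.abs(tau) / Tc)

def correlator_vector(offsets, code_error=0.0, multipath=()):
    """Correlator outputs at the given chip offsets.

    `code_error` is the residual code tracking error; `multipath` is a list of
    (extra delay, relative amplitude) pairs for reflected rays (illustrative model).
    """
    corr = ideal_autocorrelation(offsets - code_error)
    for delay, amp in multipath:
        corr = corr + amp * ideal_autocorrelation(offsets - code_error - delay)
    return corr

def normalize(x):
    """Scale the vector so that large correlation values do not saturate the NN input."""
    return x / np.max(np.abs(x))

# 48 samples concentrated around the correlation peak ("peak intensive" sampling)
offsets = np.linspace(-0.15 * Tc, 0.15 * Tc, 48)
raw = correlator_vector(offsets, code_error=0.02, multipath=[(0.1 * Tc, 0.4), (0.3 * Tc, 0.2)])
nn_input = normalize(raw)  # vector applied to the NN input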
B. NN description

Though NN are powerful tools, finding the optimal NN structure is still an empirical and hard task. The number of hidden layers, the number of neurons in each layer and the connections between neurons are key parameters usually determined experimentally. We adopted a neural network with three layers: an input layer, a hidden layer and an output layer. The sizes of the input and output layers are predefined by the sizes of the input and desired output vectors respectively, while the size of the hidden layer is considered as a degree of freedom which allows adjusting the residual mean square error. Theoretically, the bigger the hidden layer, the smaller the residual error. Figure 6 illustrates the architecture of the neural network.

The activation function of the hidden and output layers is a modified tanh function. This allows us to express the output of a neuron in the hidden layer by the rule:

u_j = a · tanh( b · Σ_{i=1}^{Nin} W1_{ij} · x_i )        (4)

where Nin is the size of the input layer, W1_{ij} is the synaptic weight connecting input x_i to neuron j in the hidden layer, and u_j is the output of neuron j. W1 includes the bias: we consider x_1 = 1. In the same manner we can develop the output of a neuron in the output layer:
Fig. 6. Architecture of the neural network: input layer (x_1 ... x_Nin), hidden layer (u_1 ... u_Nh) and output layer (y_1 ... y_Nout), connected by the weight matrices W1 and W2.

y_j = tanh( Σ_{i=1}^{Nh} W2_{ij} · u_i )        (5)

The table below gives the learning rules of the synaptic weights under the BP algorithm.

TABLE I
LEARNING RULES OF SYNAPTIC WEIGHTS

Output layer:  new W2_{ij} = old W2_{ij} − µ · δ2_j · u_i ,  with  δ2_j = e_j · f'_j
Hidden layer:  new W1_{ij} = old W1_{ij} − µ · δ1_j · x_i ,  with  δ1_j = f'_j · Σ_{i=1}^{Nout} δ2_i · W2_{ji}

where e_j is the error signal at output j, i.e. the difference between the j-th desired output and the output of the corresponding j-th neuron of the output layer; f'_j is the derivative of the output of neuron j with respect to its own argument; and µ, a scalar smaller than 1, is the learning rate which governs the convergence speed.
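As an illustration, here is a minimal sketch of the forward computation of equations (4) and (5) together with the Table I weight updates. The class and parameter names (MultipathNN, a, b, lr) and the values chosen for the modified-tanh constants a and b are illustrative assumptions, not taken from the paper; the error is taken as output minus desired output so that the minus-sign updates of Table I descend the squared error.

import numpy as np

class MultipathNN:
    """Three-layer NN (input, hidden, output) trained with back-propagation."""

    def __init__(self, n_in, n_hidden, n_out, a=1.7, b=2.0 / 3.0, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.a, self.b, self.lr = a, b, lr
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))  # bias carried by x[0] = 1
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))

    def forward(self, x):
        # Equation (4): hidden layer with modified tanh activation
        self.x = x
        self.s1 = self.W1 @ x
        self.u = self.a * np.tanh(self.b * self.s1)
        # Equation (5): output layer
        self.y = np.tanh(self.W2 @ self.u)
        return self.y

    def backprop(self, desired):
        # Table I, output layer: delta2_j = e_j * f'_j
        e = self.y - desired
        d2 = e * (1.0 - self.y ** 2)
        # Table I, hidden layer: delta1_j = f'_j * sum_i delta2_i * W2_ji
        f1_prime = self.a * self.b * (1.0 - np.tanh(self.b * self.s1) ** 2)
        d1 = f1_prime * (self.W2.T @ d2)
        # Weight updates: new W = old W - mu * delta * (layer input)
        self.W2 -= self.lr * np.outer(d2, self.u)
        self.W1 -= self.lr * np.outer(d1, self.x)
        return float(np.mean(np.abs(e)))

For each training pattern, one call to forward(x) followed by backprop(desired) performs a single BP iteration.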
IV. SIMULATION RESULTS

In this section we cover the different topics visited in the preceding sections.

A. Effect of the number of correlators

The size of the correlator output is the most important element affecting the complexity of the proposed solution in the range domain. First, it affects the complexity of the receiver, since the delivery of a large number of outputs requires additional processing. Secondly, the size of the neural network is linked to the correlator output size, because this size is equal to the neural network input size. In this section, we therefore assess the impact of the input size on the neural network performance. The considered input sizes are 8, 16, 32, 48 and 64 for each In-phase and Quadrature component; the number of neurons in the hidden layer is set equal to the input layer size.

Fig. 7. Estimation error versus iterations (×100) for 2×8, 2×16, 2×32, 2×48 and 2×64 correlators.

Fig. 8. Effect of the number of correlators on NN convergence: the absolute error of the code tracking error decreases with the number of iterations.

The learning set contains 20000 input samples and constant learning rates are used. The estimation errors (absolute values) of the code and phase tracking errors are respectively illustrated in figures 7 and 8, which show the evolution of the absolute errors at the NN output versus the iteration number. It is clear that the greater the number of correlators, the faster the BP algorithm converges.

B. Effect of correlator spacing

The search for an optimal number of correlators is associated with the chip location; that is why we give the optimal combination (number of correlators / chip location). We considered several configurations for the simulations; the adopted correlator sizes are 8, 16, 32, 48 and 64. For each size, the correlator samples are located in a symmetric interval; the adopted intervals are [-Tc, +Tc], [-0.5Tc, +0.5Tc] and [-0.15Tc, +0.15Tc]. The same learning rate is used for all configurations. Figures 9 and 10 show the impact of the number of correlators on the training phase performance for two sampling intervals, respectively [-Tc, +Tc] and [-0.15Tc, +0.15Tc]. The impact of the chip location is straightforward: sampling closer to the correlation peak leads to better estimation performance. For example, in the case of 48 correlators, concentrating the samples around the correlation peak outperforms the default sampling interval [-Tc, +Tc] by a factor of 2. We note also that the impact of the correlator output size is greater when the considered samples are closer to the correlation peak. Adopting a "peak intensive" sampling technique also offers better resistance to channel noise, because samples close to the correlation peak have a better signal-to-noise ratio.
Fig. 9. Influence of the number of correlators on MSE evolution.

(Figures 9 and 10 plot the code tracking error convergence, on a ×10^-3 scale, versus iterations (×2500), with curves for 8, 16, 32 and 48 correlators, for the chip locations [-0.15Tc, +0.15Tc] and [-Tc, +Tc].)

[...] of the correlators output. We remark also that the "peak intensive" sampling outperforms both other sampling techniques. As a conclusion, we propose a "peak intensive" sampling with 48 samples for each signal component (In-phase and Quadrature).
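The sketch below enumerates the correlator placements compared in this subsection: for each bank size, taps are spread uniformly over a symmetric interval around the prompt position, and the retained "peak intensive" configuration simply narrows that interval to [-0.15Tc, +0.15Tc]. The helper name correlator_offsets is an illustrative choice, not an identifier from the paper.

import numpy as np

Tc = 1.0  # chip period (normalized units)

def correlator_offsets(n_taps, half_width):
    """Chip offsets of a symmetric bank of n_taps correlators over [-half_width, +half_width]."""
    return np.linspace(-half_width, half_width, n_taps)

# Configurations considered in the paper: 8 to 64 taps per I/Q component,
# over the intervals [-Tc, +Tc], [-0.5Tc, +0.5Tc] and [-0.15Tc, +0.15Tc].
sizes = (8, 16, 32, 48, 64)
half_widths = (1.0 * Tc, 0.5 * Tc, 0.15 * Tc)
banks = {(n, w): correlator_offsets(n, w) for n in sizes for w in half_widths}

# Retained configuration: "peak intensive" sampling, 48 taps per component in [-0.15Tc, +0.15Tc]
peak_intensive = banks[(48, 0.15 * Tc)]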
C. Effect of filter bandwidth

The high frequencies in the code are responsible for the sharp peak of the ideal correlation function; since the front-end filtering attenuates these higher frequencies, the region most affected by the filtering is the region near the peak, which becomes rounded. To assess the impact of a limited filter bandwidth, simulations with different bandwidths were performed. The results displayed in figure 11 show that the filtering process has an impact on the neural network performance. As a consequence, we should avoid extreme low-pass filtering, since increasing the filter bandwidth decreases the final error committed by the neural network. Concerning the phase tracking error estimation, the same impact is observed, but the gap between the different bandwidth values is less significant. Concerning this issue, we state that a bandwidth of 8 MHz seems to be a good trade-off and could be proposed in our context of LEO applications.

Fig. 11. Role of the filter bandwidth on tracking error convergence: code tracking error convergence versus iterations (×2500) for filter bandwidths of 2, 4, 8 and 20 MHz (final standard deviations ranging from about 0.006 Tc down to 0.0008 Tc).
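As a rough illustration of why the peak region suffers most, the following sketch low-pass filters the ideal triangular correlation with a simple moving-average front-end model and reports how much the peak is flattened for the bandwidths compared in figure 11. The moving-average filter and the bandwidth-to-window mapping are crude illustrative assumptions, not the receiver model used in the paper.

import numpy as np

Tc = 1.0
fs = 40.0  # samples per chip (illustrative)
tau = np.arange(-2.0 * Tc, 2.0 * Tc, 1.0 / fs)
ideal = np.maximum(0.0, 1.0 - np.abs(tau) / Tc)  # ideal triangular correlation

def front_end_filter(corr, bandwidth_mhz, chip_rate_mhz=1.023):
    """Crude front-end model: moving average whose length shrinks as the bandwidth grows."""
    window = max(1, int(round(fs * chip_rate_mhz / bandwidth_mhz)))
    kernel = np.ones(window) / window
    return np.convolve(corr, kernel, mode="same")

for bw in (2.0, 4.0, 8.0, 20.0):  # bandwidths compared in Fig. 11
    filtered = front_end_filter(ideal, bw)
    print(f"{bw:>4.0f} MHz: peak value {filtered.max():.3f} (ideal 1.000)")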
D. Learning and generalization

In a previous section we mentioned that learning of the NN will be performed on Earth, where powerful computers can be used to accelerate the learning phase. Only the generalization phase of the NN solution will be performed on-board the LEO satellite; we propose to implement the NN in an FPGA circuit. This implementation is possible since the NN executes the forward computation only: the NN computes the tracking errors of the code delay and carrier phase given an input vector.

In order to validate our NN mitigation technique, we have run several simulations; a few instances of the obtained results are given below. The figures are relative to the International Space Station scenario which, due to its large size and many solar panels, is considered to represent an extreme case in terms of multipath. The NN is composed of 96 inputs, 96 hidden neurons and two output neurons. The training data contains 300 patterns of input and desired output vectors; the number of correlators equals 48 and they are grouped near the peak. The NN is trained for 1e5 iterations with a learning rate of 1e-3.

Fig. 12. Comparison between the NN output and the actual data set (actual phase tracking error σ = 0.20789, estimated phase tracking error σ = 0.19697, estimation error σ = 0.075722). The training data represents multipath related to the ISS structure.

Fig. 13. Comparison between the NN output and the actual data (generalization).
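For concreteness, the configuration just described (96 inputs, one hidden layer of 96 neurons, two outputs, 300 training patterns, 1e5 iterations, learning rate 1e-3) could be exercised as follows with the MultipathNN sketch given earlier. The randomly generated patterns are placeholders for the actual multipath data set, which is not reproduced here.

import numpy as np

# Illustrative stand-in for the 300 training patterns: 48 I + 48 Q correlator
# samples per pattern, with the code and phase tracking errors as desired outputs.
rng = np.random.default_rng(1)
inputs = rng.uniform(-1.0, 1.0, size=(300, 96))
inputs[:, 0] = 1.0  # bias input x_1 = 1, as in equation (4)
targets = rng.uniform(-0.1, 0.1, size=(300, 2))

nn = MultipathNN(n_in=96, n_hidden=96, n_out=2, lr=1e-3)

for it in range(100_000):          # 1e5 training iterations
    k = it % len(inputs)           # cycle through the training set
    nn.forward(inputs[k])
    err = nn.backprop(targets[k])
    if it % 20_000 == 0:
        print(f"iteration {it}: mean absolute output error {err:.4f}")

# On-board use: a single forward pass per correlator snapshot
code_error_estimate, phase_error_estimate = nn.forward(inputs[0])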
In Figure 12 we show the behavior of the NN during the learning phase: the phase tracking error fed to the NN, the evolution of the phase tracking error estimated by the NN, and the estimation error are shown.

Once the NN is trained, generalization is done using another set of input multipath patterns. Figure 13 shows the actual code tracking error and the code tracking error as estimated by the NN during the generalization phase.

Generally, we conclude that the generalization capacity of the neural network is acceptable for both code and phase tracking error estimation. However, to guarantee the best possible performance, the training of the neural network must take into account all multipath configurations.

It is worthwhile to mention that the data used is provided by GMV SA (Madrid, Spain) by means of an in-house software tool called Multipath Virtual Laboratory (MVL). MVL allows generating a set of multipath rays and calculating the corresponding phase and code tracking errors for a given space vehicle structure.

In the forthcoming step, the NN will be implemented in a specific tool developed by M3 Systems named ORUS (Open Receiver for Upgraded Services). ORUS will model the GNSS receiver to be implemented on-board a LEO satellite; it will receive the signal from several GPS satellites, at least four. The signal of each GPS satellite will be processed in a dedicated channel. The role of the NN in each channel is to estimate the tracking errors in order to correct the code and phase estimations.

E. Comparison with other techniques

In this subsection we emphasize the merit of the neural network as a solution for multipath mitigation in the range domain. The NN technique is compared with two well-known techniques, namely double delta and the narrow correlator, via long simulations.

The proposed architecture offers high quality estimation when tested with a single reflected ray, with a strong capability of estimating short pseudorange errors (especially when compared to classical solutions such as narrow spacing correlators or the Double Delta technique).

Tests with real data involving a large number of reflected rays show that the neural network still offers good results, reducing the code tracking error by more than 35% (in the case of the ISS mission, which is the most constraining one) and the phase tracking error by more than 50% (for the same mission).

It has also been shown that a careful selection of the training phase (in particular the number of reflected rays) is necessary and leads systematically to better performance.

V. CONCLUSION

In this paper we presented a neural-network-based technique to mitigate the effect of the multipath phenomenon in Global Navigation Satellite System receivers on-board LEO satellites. We studied the effect of different factors on the learning procedure. In particular, we studied the effect of the bandwidth of the receiver filter, the number of correlators and the position of the correlators within the correlation function peak. Simulation results allowed us to optimize the NN performance in terms of the number of iterations and the residual mean square error. Finally, we compared the results obtained by the neural network technique with classical techniques, namely double delta and the narrow correlator. The results show a good improvement obtained by the application of a NN: the NN outperformed the classical techniques in correcting the carrier phase and code delay tracking errors due to the multipath effect.

ACKNOWLEDGMENT

The authors thank the project team which contributed to the success of this project. Although the work was conducted in parallel within different establishments, M3 Systems, ENAC and TeSA in France, and UPC and GMV in Spain, the coordination and availability of the partners led to success. Special thanks to ESA, who financed this project.

REFERENCES

[1] H. Abdulkader, F. Langlet, D. Roviras and F. Castanié, "Natural gradient algorithm for neural networks applied to non-linear high power amplifiers," International Journal on ACSP, special issue on Advances in Signal Processing for Mobile Communication Systems, Wiley, 2002, invited paper.
[2] S. Bouchired, D. Roviras and F. Castanié, "Equalisation of satellite mobile channels with neural network techniques," Space Communications, IOS Press, Vol. 15, No. 4, 1999.
[3] A. J. Van Dierendonck, P. Fenton and T. Ford, "Theory and performance of narrow correlator spacing in a GPS receiver," Navigation, Journal of the Institute of Navigation, Vol. 39, No. 3, 1992.
[4] L. Garin, F. van Diggelen and J. Rousseau, "Strobe and Edge correlator multipath mitigation for code," Proceedings of ION GPS-96, Kansas City, MO, September 1996.
[5] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1998.
[6] F. Langlet, H. Abdulkader, D. Roviras, A. Mallet and F. Castanié, "Comparison of neural network adaptive predistortion techniques for satellite down links," IJCNN 2001, Washington, DC (USA).
[7] E. J. Kaminsky and N. Deshpande, "TCM decoding using neural networks," Engineering Applications of Artificial Intelligence, Vol. 16, Issues 5-6, August-September 2003.
[8] G. A. McGraw and M. S. Braasch, "GNSS multipath mitigation using gated and high resolution correlator concepts," Proceedings of the ION 1999 National Technical Meeting, San Diego, CA, January 1999.
[9] D. Roviras, H. Abdulkader, A. Mallet, H. Tap-Béteille, M. Lescure and F. Castanié, "MLP neural network implementation and integration in CMOS technology," ICTTA'04, Damascus, Syria, April 2004.