Neural Network Based DPD

1. The document proposes using an attention mechanism with DNNs to select the most important input signals and reduce the complexity of modeling power amplifiers (PAs).
2. It also explores using bidirectional LSTMs (BiLSTMs) to model the long memory effects in PAs and to build a behavioral model and digital predistortion (DPD); the BiLSTMs bridge the memory in PAs and in neural networks.
3. Issues such as phase ambiguity are addressed by introducing time delays and quantizing predictions to handle ambiguous phase regions when training the PA behavioral model.

Neural network based DPD

Gebre Fisehatsion Mesfin


Attention based DNN behavioral model for
wideband wireless power amplifier
• Goal: reduce the number of input signals, which strongly affects the complexity of the ANNs.
• The input terms with a large contribution are selected by offline modeling; they are then injected into the DNN to build the PA model online.
• As a result, the design complexity is reduced.
Continued..
• ANNs have been widely used to model PAs; they are typically studied by injecting the in-phase and quadrature-phase (I/Q) components and the envelope-dependent terms of the current and past signals.
• DNNs, with their strong fitting ability, are also used to improve the modeling performance.
• The complexity of the ANN is governed by the number of I/Q and envelope-dependent terms of the input signal.
• An attention mechanism is used to alleviate this problem: attached to the DNN, it evaluates the contribution of each input signal.
• Inputs with large contributions are selected offline, so the complexity is reduced.
• The retained input items are then injected into the DNN.
Input module

• The input module contains the input items and the output state.
• The correlation between the input items and the output state has to be calculated; this is done with a “tanh” scoring function whose parameters are learned during training.
• SoftMax is used to normalize the correlation values.
• α(i) is used as the weight of the i-th input, to highlight the important features.
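The tanh scoring plus SoftMax normalization described above can be sketched as follows. This is a minimal scalar illustration, not the paper's actual model: the weights `w_x`, `w_h`, the bias `b`, and the toy input values are all assumptions.

```python
import math

def score(x_i, h, w_x, w_h, b):
    # Hypothetical scalar scoring: e(i) = tanh(w_x * x_i + w_h * h + b),
    # measuring the correlation between input item x_i and output state h.
    return math.tanh(w_x * x_i + w_h * h + b)

def attention_weights(scores):
    # SoftMax-normalize the tanh correlation scores into weights alpha(i),
    # which sum to one and highlight the important input items.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: three input items scored against an output state h
h = 0.5
inputs = [1.0, 0.2, -0.4]
scores = [score(x, h, 0.8, 0.3, 0.0) for x in inputs]
alphas = attention_weights(scores)
```

Input items with larger weights α(i) would then be retained for the DNN, as described on the following slide.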
DNN module
• Used to obtain the modeling output.
• Three fully connected hidden layers and one output layer.
• The hidden-layer neurons are distributed as [10, 10, 5].
• Tanh is used as the activation function.
• The output layer has two neurons.
• Overfitting is prevented by dividing the data into a training set, a verification set and a test set in a ratio of 3:1:1.
• The weight threshold α is set according to the required modeling performance or the number of input items; input items below the weight threshold are removed, and the retained input items are injected into the DNN.
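A minimal sketch of the 3:1:1 data split and the threshold-based pruning of input items described above; the item names, weights and threshold value are hypothetical, chosen only for illustration.

```python
def split_3_1_1(data):
    # Divide the data into training / verification / test sets in a 3:1:1 ratio
    n = len(data)
    a = 3 * n // 5
    b = 4 * n // 5
    return data[:a], data[a:b], data[b:]

def prune_inputs(items, weights, threshold):
    # Remove input items whose attention weight alpha(i) falls below the threshold
    return [item for item, w in zip(items, weights) if w >= threshold]

train, verify, test = split_3_1_1(list(range(10)))
kept = prune_inputs(["I(n)", "Q(n)", "|x(n)|"], [0.5, 0.4, 0.1], threshold=0.2)
```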
BiLSTM based behavioral modeling and
linearization of wideband RF-PA
• A bridge is made between the memory effect of nonlinear PAs and the memory of the BiLSTM neural network.
• A BiLSTM-based PA behavioral model and its DPD are built by reconciling the non-causality.
• The phase ambiguity of the tested PA is mitigated by using an additional model.
Bridge between PAs and BiLSTM networks
• The long-term memory effect is a major factor, highly present in PAs such as GaN HEMT-based designs.
• The Volterra series can represent this strong memory effect.
• The expression is non-causal; causality can be attained by introducing a memory depth Tm.

• A non-causal observation of y(t) is introduced.
• This non-causal approximation facilitates modeling y(t) as a function of future stimuli.

• By using this transformation, the Volterra series can be represented in a static and a dynamic form, de-embedding the short-term and the long-term dynamics.
• Digital baseband stimuli and responses are utilized to undertake a regression task; the trained model is M1(·).
• M1(·) can be realized as a nonlinear transformation between the stimulus and the response by using a forward LSTM network.

• From the non-causal form, it can also be expressed by a backward LSTM.
• By taking the middle of the predicted sequences, the forward and backward LSTM networks jointly predict the middle response; thus the memory of the PA, modeled by the Volterra formalism, and the memory of the BiLSTM network are brought into agreement.
• To mitigate the vanishing-gradient problem of RNNs, LSTMs are designed with memory units.
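The middle-sample idea above can be illustrated by windowing. Assuming a window of 2·Tm+1 samples centered on the target instant (an interpretation of the slide, not a formula given in it), the forward branch sees the past half and the backward branch the future half:

```python
def centered_window(x, t, Tm):
    # Window x[t-Tm .. t+Tm] around the target instant t: the forward LSTM
    # branch consumes the past half, the backward branch the future half,
    # and the middle sample is the one being predicted.
    assert Tm <= t < len(x) - Tm, "window must fit inside the sequence"
    return x[t - Tm : t + Tm + 1]

w = centered_window(list(range(10)), t=5, Tm=2)
```

Predicting the middle sample is what reconciles the non-causal Volterra observation with the bidirectional memory of the BiLSTM.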
Behavioral modeling based on BiLSTM
• The behavioral model has five layers:
• an input layer
• a BiLSTM layer
• three fully connected layers
• With a memory depth of Tm, the inputs of the BiLSTM are sequences of digital baseband samples of memory length Tm; the data fed to the BiLSTM therefore have dimension 2 × Tm (the I and Q components of each sample).
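A sketch of how the 2 × Tm BiLSTM input could be assembled from baseband I/Q samples; the function name and toy data are assumptions made for illustration.

```python
def bilstm_inputs(i_samples, q_samples, Tm):
    # Slide a memory window of length Tm over the baseband I/Q stream;
    # each training sample is Tm (I, Q) pairs, i.e. 2 x Tm values,
    # matching the input dimension stated on the slide.
    sequences = []
    for t in range(Tm, len(i_samples) + 1):
        window = list(zip(i_samples[t - Tm:t], q_samples[t - Tm:t]))
        sequences.append(window)
    return sequences

seqs = bilstm_inputs([0.1, 0.2, 0.3, 0.4], [0.0, -0.1, -0.2, -0.3], Tm=2)
```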
AI-DPD processing
• The DPD linearization method is put forward by using a proper inverse technique: the best architecture of the PA model is selected, and the corresponding DPD model is then trained.
• The DPD is not an analytical inverse; however, it shares the model structure, with input Yt and output Xt.
• Because of the non-causality introduced by the BiLSTM DPD structure, a time delay is introduced to reconcile the future inputs.
Phase ambiguity
• The PA’s phase ambiguity will leads to phase ambiguity of the
behavioral model and DPD model which can affect the linearization
performance.
• When the training relationship between the input phases and
output phase changes. The predicted output phases are quantized for
the most ambiguous regions by using
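A minimal sketch of quantizing predicted phases onto a fixed grid; the number of quantization levels is an assumption, as the source does not specify the quantization rule.

```python
import math

def quantize_phase(phi, levels=4):
    # Snap a predicted phase to the nearest of `levels` points on [0, 2*pi),
    # so an ambiguous prediction collapses onto a fixed phase grid.
    step = 2 * math.pi / levels
    return (round(phi / step) % levels) * step

q = quantize_phase(math.pi / 2 + 0.1, levels=4)
```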
