Deep Learning Based Optimization in Massive MIMO Systems
LITERATURE SURVEY
4. "Performance Analysis of Massive MIMO under Pilot Contamination" (Journal of Physics)
   Objective: To summarise the past research work and analyse the various pilot-contamination mitigation schemes.
   Techniques Used: Protocol-based technique, precoding technique, blind technique.
   Limitations: Lacks the ability to incorporate constraints that could be beneficial for pilot assignment optimization.
5. "Studying and Investigation of Energy Efficiency in Massive MIMO Using MMSE, ZF Encoder" (IEEE 21st International Conference on Sciences and Techniques of Automatic Control and Computer Engineering, STA)
   Objective: Calculating energy efficiency in massive MIMO using different algorithms.
   Techniques Used: MMSE and ZF encoders for channel estimation.
   Limitations: Less robust to dynamic environments, as it is sensitive to changes in channel conditions.
6. "Data-Driven Deep Learning to Design Pilot and Channel Estimator for Massive MIMO" (Conference Paper, IEEE)
   Objective: To propose a channel estimation scheme that jointly designs the pilot signals and the channel estimator.
   Techniques Used: End-to-end DNN.
   Limitations: Can be complex and difficult to interpret, making decision making and troubleshooting hard.
7. "Scalable Pilot Assignment Scheme for Cell-Free Large-Scale Distributed MIMO With Massive Access" (Journal Article, IEEE)
   Objective: Minimizing-maximum-interference algorithm for better pilot allocation.
   Techniques Used: Deep neural network for effective pilot allocation.
   Limitations: Requires a significant amount of data for training, and performance can be limited when trained insufficiently.
8. "Channel Estimation Based on Pilot Signal and Iterative Method for TDD Based Massive MIMO Systems" (IEEE Xplore)
   Objective: To search for the optimized coefficients in the LS algorithm and modulate a pilot signal to restore the orthogonality property.
   Techniques Used: Iterative technique to search for the optimized coefficients in the LS algorithm; pilot signal modulation with Hadamard code.
   Limitations: Poor performance for systems with a large number of parameters, as it can become computationally intensive.
BLOCK DIAGRAM
Transmitter (transmitting signals) → Self-Organizing Map (as a precoder) → Channel P(X) → Received signals
NEURAL TOOLS
● Neural Net Fitting
● Neural Net Clustering
DEEP NEURAL NETWORK
• Deep Neural Networks (DNNs) consist of multiple layers of
interconnected nodes. These layers are hierarchical, with each layer
building upon the representations learned by the previous layers.
• The "deep" in DNNs refers to the number of layers they contain.
Multiple layers enable DNNs to learn complex features and
relationships in data.
• The first layer is the input layer, which receives raw data.
• Hidden layers process and transform the data.
• The output layer produces predictions or classifications based on
the processed data.
• Neural networks' performance can be assessed by analyzing their
predictions and verifying the correlation with input data.
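The layered structure described above can be sketched in a few lines. This is a hedged illustration in Python/NumPy (not the document's MATLAB code); the layer sizes and the ReLU nonlinearity are assumptions for the example:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:   # hidden layers apply a nonlinearity
            x = relu(x)
    return x                      # output layer: raw predictions

rng = np.random.default_rng(0)
# input layer: 4 features -> hidden layer: 8 units -> output layer: 3 values
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
y = forward(rng.standard_normal(4), layers)
print(y.shape)   # (3,)
```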
NEURAL NET FITTING
• The neural network being trained has one input layer, a hidden layer with ten neurons, and an output layer with four neurons.
• Gradient: The gradient is a measure of how much the error changes with
respect to the weights of the connections between the neurons. It is used by
the training algorithm to adjust the weights in order to reduce the error.
• Validation Checks: This section shows the number of times that the
validation set has been checked during training. The validation set is used to
monitor the network's performance during training and to help prevent
overfitting. Overfitting is a condition that occurs when the network
learns the training data too well and does not perform well on new data.
Levenberg-Marquardt (LM) algorithm
• The Levenberg-Marquardt (LM) algorithm is a widely used optimization technique for training neural networks,
particularly in cases where the network's performance is sensitive to initial parameter values or when training
data is noisy.
• One of the key features of the LM algorithm is that it adaptively adjusts a damping parameter during training, blending gradient-descent and Gauss-Newton updates.
• The LM algorithm is known for its efficiency and effectiveness in optimizing neural networks, particularly in
situations where other optimization algorithms may struggle.
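The LM update solves (JᵀJ + μI)Δw = −Jᵀr, where J is the Jacobian of the residuals r and μ is the damping parameter. A minimal Python sketch on a toy line-fitting problem (the problem, step count, and μ-update rule are assumptions, not the document's MATLAB trainlm internals):

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, mu):
    """One Levenberg-Marquardt update: solve (J^T J + mu I) dw = -J^T r."""
    r = residual_fn(w)
    J = jacobian_fn(w)
    A = J.T @ J + mu * np.eye(len(w))
    dw = np.linalg.solve(A, -J.T @ r)
    return w + dw

# toy problem: fit y = a*x + b to noiseless data generated with a=2, b=1
x = np.linspace(0, 1, 10)
y = 2.0 * x + 1.0
residual = lambda w: (w[0] * x + w[1]) - y
jacobian = lambda w: np.stack([x, np.ones_like(x)], axis=1)

w, mu = np.zeros(2), 1e-3
for _ in range(20):
    w_new = lm_step(w, residual, jacobian, mu)
    # adapt damping: accept the step and shrink mu if the error decreased
    if np.sum(residual(w_new) ** 2) < np.sum(residual(w) ** 2):
        w, mu = w_new, mu * 0.5
    else:
        mu *= 2.0
print(w)   # close to [2, 1]
```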
PRECODING
• Precoding is a signal processing technique in wireless communication to
enhance data transmission quality and reliability. It manipulates
transmitted signals before transmission to optimize reception at the
receiver.
• A precoding matrix is calculated based on channel information to guide
signal modification for optimal transmission.
• Original data signals are multiplied by the precoding matrix to optimize
them considering factors like interference and channel characteristics.
• The modified signals, optimized by the precoding matrix, are
transmitted to the receiver to improve overall communication quality
and reliability.
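The multiply-by-a-precoding-matrix step can be illustrated with a classic zero-forcing precoder, where the matrix is the channel's pseudo-inverse. This is a hedged Python sketch, not the SOM precoder used later in this work; the antenna/user counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 4, 2

# channel matrix (assumed known at the transmitter from channel estimation)
H = rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))

# zero-forcing precoding matrix: the pseudo-inverse of the channel
W = np.linalg.pinv(H)

s = np.array([1 + 1j, -1 - 1j])   # data symbols, one per user
x = W @ s                          # precoded transmit signal
y = H @ x                          # what the users receive (noise-free)
print(np.allclose(y, s))           # True: inter-user interference removed
```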
SELF-ORGANIZING MAPS (SOM)
• Self-Organizing Maps (SOMs), also known as Kohonen maps, are neural networks designed for unsupervised learning. They can be used as precoders in
certain applications, particularly in communication systems.
• By training a SOM with information about the communication channel, the SOM can learn to adaptively adjust the transmitted
signals to exploit the spatial characteristics of the channel, optimizing the performance of the communication system.
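The core SOM training loop (find the best-matching unit, then pull nearby nodes toward the sample) can be sketched as follows. This is a generic 1-D SOM in Python, not the nctool implementation; the grid size, learning rate, and neighborhood width are assumptions:

```python
import numpy as np

def train_som(data, n_nodes=10, epochs=50, lr=0.5, sigma=2.0, seed=0):
    """Train a 1-D self-organizing map on data of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((n_nodes, data.shape[1]))
    idx = np.arange(n_nodes)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)   # shrink learning rate and neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            # neighborhood function: nodes near the BMU move more
            h = np.exp(-((idx - bmu) ** 2) / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * h[:, None] * (x - weights)
    return weights

data = np.random.default_rng(1).standard_normal((100, 2))
w = train_som(data)
print(w.shape)   # (10, 2)
```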
NEURAL NET CLUSTERING
Batch Weight/Bias Rules (trainbu) - This algorithm is used to train
the network by updating the weights and biases of the neurons based
on the entire training set at each iteration
Epoch – The number of times the full dataset is presented to the neural network. Enough epochs should be used that the network's output closely matches the original curve; running multiple epochs refines the weights further.
Iterations – The number of batches needed to complete one epoch.
Batch – One epoch is usually too large to feed to the computer at once, so the data is divided into several smaller batches.
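The relationship between these three terms is simple arithmetic, shown here as a sketch with assumed example numbers:

```python
import math

n_samples = 1000   # size of the training set (assumed)
batch_size = 64    # samples fed to the network at once (assumed)
epochs = 5         # full passes over the data (assumed)

# iterations per epoch = number of batches covering the dataset
iterations_per_epoch = math.ceil(n_samples / batch_size)
total_iterations = iterations_per_epoch * epochs
print(iterations_per_epoch, total_iterations)   # 16 80
```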
WORK DONE
Study of Levenberg-Marquardt algorithm and Self-organizing Map (SOM).
Generated input signals for ‘n’ users in MIMO systems using mimoInputGenerator() function.
Implementation of Self-Organizing Maps as a precoder at the transmitter using nctool.
Generating Gaussian noise and mixing it with the transmitted signals for realistic observations.
Implementation of Deep Neural Network at the receiver to analyze the precoded transmitted signals
by using Deep_neural_computing() as a function.
Calculation of Throughput and Packet Delivery Ratio (PDR) and plotting those values in a graph
under the function pdr_tp().
Calculation of Bit Error Rate (BER) and Signal to Noise Ratio (SNR) and plotting those values in a
graph under the function of snr_ber().
CODE SNIPPET
This mimoInputGenerator() function creates a random data stream (signals) for the multiple antennas (channels) in the MIMO
system. It takes the number of signals, the number of samples per signal, and the sampling rate as inputs; its outputs are a matrix of random
signal values for each signal and a time vector corresponding to the samples. The signal generation can be customized within the
code to create different types of signals for the simulation.
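The project's code is in MATLAB; as a hedged illustration of the same interface, here is a minimal Python/NumPy sketch (the name mimo_input_generator and the use of white Gaussian samples are assumptions, not the original implementation):

```python
import numpy as np

def mimo_input_generator(n_signals, n_samples, fs, seed=0):
    """Python analogue of mimoInputGenerator(): one random signal per antenna.

    Returns an (n_signals x n_samples) matrix of signal values and the
    time vector of the samples.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs            # sample times at rate fs
    signals = rng.standard_normal((n_signals, n_samples))
    return signals, t

signals, t = mimo_input_generator(n_signals=4, n_samples=256, fs=1e3)
print(signals.shape, t[1] - t[0])   # (4, 256) 0.001
```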
In the tx_precoder() function, the generated input signals are fed to the precoder. Precoding is done with a Self-Organizing Map (SOM) neural network that manipulates the data stream, so this function transforms the raw input signals into precoded signals that are transmitted over the channel.
In the rx_precoded_data_analyser() function, the sample rate is fed as input to ensure the same sampling rate at both the
transmitter and the receiver side. The received signal is then observed, and its performance metrics are calculated by
comparing it with the transmitted signal using neural networks.
• This function defines and trains a deep neural network for classification tasks. It performs network training and evaluation, and provides functionality to analyze the classification performance.
• Sets up the neural network architecture and chooses the Levenberg-Marquardt algorithm (trainlm) for network training.
• Splits the data into training, validation, and testing sets using a random split ('dividerand') based on samples ('sample').
• Computes metrics such as True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN).
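The TP/FP/TN/FN counting step can be sketched as follows. This is a generic Python illustration (the function name and example labels are assumptions, not the document's MATLAB code):

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN for binary labels (1 = positive class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # predicted 1, actually 1
    fp = np.sum((y_pred == 1) & (y_true == 0))   # predicted 1, actually 0
    tn = np.sum((y_pred == 0) & (y_true == 0))   # predicted 0, actually 0
    fn = np.sum((y_pred == 0) & (y_true == 1))   # predicted 0, actually 1
    return int(tp), int(fp), int(tn), int(fn)

print(confusion_counts([1, 0, 1, 1, 0], [1, 1, 0, 1, 0]))   # (2, 1, 1, 1)
```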
• This defines a function named pdr_tp() that takes one input
argument num_of_bits representing the total number of bits
to be transmitted.
• The function returns four outputs:
• pd: Array containing the number of packets delivered in
each iteration.
• dl: Array containing the number of packets lost in each
iteration.
• tp: Array containing the throughput (data rate) in each
iteration.
• pdr_out: Array containing the Packet Delivery Ratio
(PDR) in each iteration.
• The average packet size is taken as 80 bytes (8 bits per byte). This
value is used to convert the number of bits delivered into the
number of packets.
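A hedged Python sketch of such a function is shown below. The packet-loss model, iteration count, and bit timing are assumptions invented for the example; only the 80-byte packet size and the four outputs come from the description above:

```python
import numpy as np

PACKET_BITS = 80 * 8   # average packet size: 80 bytes, 8 bits per byte

def pdr_tp(num_of_bits, n_iter=10, loss_prob=0.05, bit_time=1e-6, seed=0):
    """Sketch of pdr_tp(): per-iteration packets delivered/lost, throughput, PDR."""
    rng = np.random.default_rng(seed)
    total_packets = num_of_bits // PACKET_BITS
    # assumed model: each packet is delivered independently with prob 1 - loss_prob
    pd = np.array([rng.binomial(total_packets, 1 - loss_prob) for _ in range(n_iter)])
    dl = total_packets - pd                            # packets lost per iteration
    tp = pd * PACKET_BITS / (num_of_bits * bit_time)   # delivered bits / transmit time
    pdr_out = pd / total_packets                       # packet delivery ratio
    return pd, dl, tp, pdr_out

pd, dl, tp, pdr_out = pdr_tp(64000)
print(len(pd), (pd + dl)[0])   # 10 100
```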
This function plots the bit error rate (BER) performance of a communication system:
• Generates random data and simulates bit errors based on a chosen probability.
• Calculates theoretical and estimated BER for different signal-to-noise ratios (SNR).
• Plots the BER curves for both theoretical and estimated values.
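The theoretical-vs-estimated comparison can be sketched as below (plotting omitted so the numbers are checkable). The BPSK-over-AWGN formula BER = ½ erfc(√(Eb/N0)) and the bit count are assumptions; the document does not state its modulation scheme:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
snr_db = np.arange(1, 11)

theoretical, estimated = [], []
for snr in snr_db:
    ebn0 = 10 ** (snr / 10)
    p = 0.5 * math.erfc(math.sqrt(ebn0))   # theoretical BPSK BER over AWGN
    errors = rng.random(n_bits) < p        # flip each bit with probability p
    theoretical.append(p)
    estimated.append(errors.mean())        # Monte Carlo BER estimate

for s, t, e in zip(snr_db, theoretical, estimated):
    print(f"SNR {s:2d} dB: theory {t:.2e}, estimate {e:.2e}")
```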
This function calculates how well a classification model performs. It takes counts of true/false positives/negatives (TPV, TNV, FPV, FNV) as input.
Accuracy (ACC): Measures the overall correctness of the model,
calculated as the rate of correctly predicted samples to the total number of
samples.
Precision (PREC): Ratio of true positives among predicted positives. It
indicates how often the samples the model labels positive really are positive.
Recall (REC): Ratio of true positives identified out of all actual positives.
Measures the ability of the model to correctly identify positive samples out
of all actual positive samples.
F1 Score (F1SCO): It provides a balance between precision and recall.
Specificity (SPEC): Ratio of true negatives among all actual negatives.
Measures the ability of the model to correctly identify negative samples.
MCC (MCC): It represents the correlation coefficient between the
observed and predicted classifications.
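These definitions map directly to the confusion-matrix counts. A minimal Python sketch (the example counts are invented for illustration; division-by-zero guards are omitted for brevity):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Compute the metrics above from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + tn + fp + fn)       # overall correctness
    prec = tp / (tp + fp)                        # true positives among predicted positives
    rec  = tp / (tp + fn)                        # true positives among actual positives
    f1   = 2 * prec * rec / (prec + rec)         # harmonic mean of precision and recall
    spec = tn / (tn + fp)                        # true negatives among actual negatives
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, prec, rec, f1, spec, mcc

acc, prec, rec, f1, spec, mcc = classification_metrics(tp=40, tn=45, fp=5, fn=10)
print(round(acc, 2), round(prec, 2), round(rec, 2))   # 0.85 0.89 0.8
```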
EXPLANATION OF OUTPUTS
MIMO-PRECODED PERFORMANCE
• The x-axis is labeled "SNR" (Signal-to-Noise Ratio) in dB, ranging from 1 dB to 10 dB, representing the ratio between the desired signal power and the background noise power.
• The y-axis is labeled "BER" (Bit Error Rate), ranging from 10^-5
to 1, indicating the frequency of errors in received bits.
• The theoretical BER curve represents the BER performance of the
communication system. It starts at around 1 (a very high error
rate) at 1 dB SNR and improves as the SNR increases; however,
even at 10 dB SNR, the BER remains moderately high, at around
10^-4.
• The estimated BER curve shows a significant improvement in
BER compared to the existing model. The BER with coding is
much lower across all SNR values; at 10 dB SNR, the coded
BER is around 10^-2. Thus the estimated BER is closer to
the theoretical BER.
Figure: BER curves for the existing model and the proposed model.
PDR-MIMO-PRECODED PERFORMANCE
• The x-axis of the graph represents the PDR score (Packet Delivery
Ratio) obtained from the simulation.
• The y-axis indicates the probability of achieving a specific PDR score,
presenting the distribution of PDR values.
• Two curves are visible on the plot, likely representing the theoretical
PDR (ideal scenario) and the PDR achieved in the simulation (proposed
PDR).
• The close proximity of the two curves suggests that the proposed PDR
performs well, aligning closely with the theoretical expectations.
CORRELATION OF Tx MIMO/Rx MIMO
• The x-axis represents the number of MIMO channels, with MIMO
referring to a technology utilizing multiple antennas for transmission and
reception.
• The graph likely illustrates how the correlation between transmitted (Tx
MIMO) and received (Rx MIMO) signals varies with the number of
channels in a MIMO system.
• A difference of 0 between the Tx and Rx peak RSSI would signify perfect
positive correlation, indicating minimal signal degradation or distortion
across all channels; a small negative difference (e.g. −1) indicates only a
small residual correlation error.
• Classification model evaluation metrics such as accuracy, precision,
recall, F1 score, specificity, and Matthews correlation coefficient are vital
for assessing machine learning model effectiveness.
• These metrics are essential for tasks like model selection, parameter
tuning, and overall performance optimization.
• The metrics suggest that the model excels in delivering data packets
with minimal errors, indicating high throughput and Packet Delivery
Ratio (PDR).