
Puducherry Technological University

Department of Electronics and Communication Engineering

EC232 – PROJECT WORK


DEEP LEARNING BASED OPTIMIZATION FOR
MASSIVE MIMO SYSTEMS
Under the Guidance of:
Dr. S. TAMILSELVAN, Professor, Department of ECE, PTU

TEAM MEMBERS
SHRIKETH – 20EC1099
SIDDHARTH R – 20EC1101
SUDHANSHU UPADHYAY – 20EC1111
MOHAMMED RAFIQ N – 20CE1055
MOTIVATION
● 5G is rapidly becoming a reality, and Massive MIMO is one of its key enabling techniques. Massive MIMO focuses on improving spectral efficiency, which in turn enables high data rates.
● There are several bottlenecks in the implementation of Massive MIMO; interference is one of the most prominent.
● A major focus of the network industry is therefore assessing the impact of pilot contamination in Massive MIMO. Prior works on energy efficiency and pilot contamination mitigation have been thoroughly studied and analyzed.
OBJECTIVE
● Implement advanced precoding techniques in a Massive MIMO system to minimize interference between data streams transmitted simultaneously.

● This targeted approach directly improves the system's performance at the receiver by delivering a clear, strong signal.
LITERATURE SURVEY
1. “Supervised Deep Learning for MIMO Precoding”, IEEE 3rd 5G World Forum (5GWF).
   Objective: Utilize deep learning techniques to design efficient precoding schemes for MIMO.
   Techniques: Autoencoders for supervised feature learning or dimensionality reduction in MIMO precoding.
   Limitations: Requires a large amount of labeled data; generating labeled data is expensive and time-consuming.

2. “Secure Precoding in MIMO-NOMA: A Deep Learning Approach”, IEEE Wireless Communications Letters.
   Objective: A novel signaling design for secure transmission over a two-user MIMO non-orthogonal multiple access channel using deep neural networks.
   Techniques: The proposed DNN linearly precodes each user’s signal before superimposing them, achieving near-optimal performance with significantly lower run time.
   Limitations: The system is designed only for the two-user MIMO case.

3. “Deep Learning based Multi-User Power Allocation and Hybrid Precoding in Massive MIMO Systems”, IEEE International Conference on Communications.
   Objective: Efficient use of pilot power allocation among the users, which helps mitigate pilot contamination.
   Techniques: A deep learning-based power allocation (DL-PA) and hybrid precoding technique for multi-user massive multiple-input multiple-output (MU-mMIMO) systems.
   Limitations: Dynamic movement of the user is not considered in this system.
4. “Performance Analysis of Massive MIMO under Pilot Contamination”, Journal of Physics.
   Objective: Summarise past research and analyse the various mitigation schemes.
   Techniques: Protocol-based, precoding, and blind techniques.
   Limitations: Lacks the ability to incorporate constraints that could benefit pilot assignment optimization.

5. “Studying and Investigation of Energy Efficiency in Massive MIMO Using MMSE, ZF Encoder”, IEEE 21st International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA).
   Objective: Calculate energy efficiency in Massive MIMO using different algorithms.
   Techniques: MMSE and ZF encoders for channel estimation.
   Limitations: Less robust in dynamic environments, as it is sensitive to changes in channel conditions.
6. “Data-Driven Deep Learning to Design Pilot and Channel Estimator for Massive MIMO”, Conference Paper, IEEE.
   Objective: Propose a channel estimation scheme that jointly designs the pilot signals and the channel estimator.
   Techniques: End-to-end DNN.
   Limitations: Can be complex and difficult to interpret, which hampers decision making and troubleshooting.

7. “Scalable Pilot Assignment Scheme for Cell-Free Large-Scale Distributed MIMO With Massive Access”, Journal Article, IEEE.
   Objective: A minimize-maximum-interference algorithm for better pilot allocation.
   Techniques: Deep neural network for effective pilot allocation.
   Limitations: Requires a significant amount of training data; performance can be limited when trained insufficiently.

8. “Channel Estimation Based on Pilot Signal and Iterative Method for TDD Based Massive MIMO Systems”, IEEE Xplore.
   Objective: Search for the optimized coefficients in the LS algorithm and modulate a pilot signal to restore the orthogonality property.
   Techniques: Iterative search for the optimized LS coefficients; pilot modulation with a Hadamard code.
   Limitations: Poor performance for systems with a large number of parameters, as it becomes computationally intensive.
BLOCK DIAGRAM
Transmitting signals → Transmitter → Self-Organizing Map (as a precoder) → P(X) → Channel → Received signals → Deep neural network / Detector → Receiver

P(X) – Precoded transmit matrix


PROPOSED SYSTEM
 In this proposed system, a precoder is used at the transmitter and a deep neural network analyzes the received signals.
 With this system, better interference management is achieved through accurate channel estimation, and overall performance increases.
REQUIREMENTS
SOFTWARE
● MATLAB R2021a

NEURAL TOOLS
● Neural Net Fitting
● Neural Net Clustering
DEEP NEURAL NETWORK
• Deep Neural Networks (DNNs) consist of multiple layers of
interconnected nodes. These layers are hierarchical, with each layer
building upon the representations learned by the previous layers.
• The "deep" in DNNs refers to the number of layers they contain.
Multiple layers enable DNNs to learn complex features and
relationships in data.
• The first layer is the input layer, which receives raw data.
• Hidden layers process and transform the data.
• The output layer produces predictions or classifications based on
the processed data.
• Neural networks' performance can be assessed by analyzing their
predictions and verifying the correlation with input data.
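The layered structure described above can be sketched in a few lines. This is a minimal Python analogue (the project itself uses MATLAB); the layer sizes and ReLU activation are illustrative assumptions, not the project's actual network:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Pass input x through each layer: hidden layers apply ReLU,
    the final (output) layer is left linear to produce predictions."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)                # hidden layers transform the data
    W, b = weights[-1], biases[-1]
    return W @ a + b                       # output layer produces predictions

rng = np.random.default_rng(0)
sizes = [4, 10, 10, 2]                     # input, two hidden layers, output
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
y = forward(rng.standard_normal(4), weights, biases)
```

Each entry in `sizes` corresponds to one layer; adding entries deepens the network, which is exactly what "deep" refers to.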
NEURAL NET FITTING
• The neural network being trained has one input layer, a hidden layer with ten neurons, and an output layer with four neurons.

• dividerand – This algorithm randomly divides the data into training, validation, and test sets. The training set is used to train the network, the validation set monitors the network's performance during training, and the test set evaluates the network's performance after training is complete.

• Mean Squared Error (mse) – A common performance measure for neural networks: the average of the squared differences between the network's outputs and the desired outputs. The lower the mean squared error, the better the network is performing.
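The two ideas above are easy to reproduce. The sketch below is a Python approximation of MATLAB's 'dividerand' behaviour and the mse measure; the 70/15/15 split ratios are the common defaults, assumed here:

```python
import numpy as np

def divide_rand(n, ratios=(0.70, 0.15, 0.15), seed=0):
    """Randomly split n sample indices into training, validation and
    test sets, approximating MATLAB's 'dividerand'."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def mse(outputs, targets):
    """Mean squared error: average of squared differences between the
    network's outputs and the desired outputs."""
    return float(np.mean((np.asarray(outputs) - np.asarray(targets)) ** 2))

train_idx, val_idx, test_idx = divide_rand(100)
```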
NEURAL NET FITTING

• Gradient: The gradient is a measure of how much the error changes with
respect to the weights of the connections between the neurons. It is used by
the training algorithm to adjust the weights in order to reduce the error.

• Validation Checks: This section shows the number of times that the
validation set has been checked during training. The validation set is used to
monitor the network's performance during training and to help prevent
overfitting. Overfitting is a condition that occurs when the network
learns the training data too well and does not perform well on new data.
Levenberg-Marquardt (LM) algorithm
• The Levenberg-Marquardt (LM) algorithm is a widely used optimization technique for training neural networks,
particularly in cases where the network's performance is sensitive to initial parameter values or when training
data is noisy.

• One of the key features of the LM algorithm is its ability to adaptively adjust the learning rate during training.

• The LM algorithm is known for its efficiency and effectiveness in optimizing neural networks, particularly in
situations where other optimization algorithms may struggle.
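The adaptive damping mentioned above is the heart of LM. The following is a simplified Python sketch on a toy curve-fitting problem, not the trainlm implementation; the toy model and the halving/doubling schedule for the damping factor are assumptions for illustration:

```python
import numpy as np

def lm_fit(f, jac, p0, x, y, mu=1e-2, iters=100):
    """Levenberg-Marquardt: blends Gauss-Newton with gradient descent.
    The damping factor mu is raised when a step fails and lowered when
    it succeeds, adaptively adjusting the effective step size."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = f(x, p) - y                              # residual vector
        J = jac(x, p)                                # Jacobian of residuals
        H = J.T @ J                                  # Gauss-Newton Hessian
        step = np.linalg.solve(H + mu * np.eye(len(p)), -J.T @ r)
        if np.sum((f(x, p + step) - y) ** 2) < np.sum(r ** 2):
            p, mu = p + step, mu * 0.5               # accept: trust GN more
        else:
            mu *= 2.0                                # reject: damp harder
    return p

# Toy problem: recover (a, b) in y = a * exp(b * x) from noiseless data.
model = lambda x, p: p[0] * np.exp(p[1] * x)
jacobian = lambda x, p: np.column_stack(
    [np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0.0, 1.0, 20)
y = model(x, [2.0, -1.5])
p_hat = lm_fit(model, jacobian, [1.0, 0.0], x, y)
```

When mu is large the update behaves like small gradient-descent steps (robust to bad initial values); when mu is small it behaves like Gauss-Newton (fast near the optimum), which is why LM handles sensitive initializations well.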
PRECODING
• Precoding is a signal processing technique in wireless communication to
enhance data transmission quality and reliability. It manipulates
transmitted signals before transmission to optimize reception at the
receiver.
• A precoding matrix is calculated based on channel information to guide
signal modification for optimal transmission.
• Original data signals are multiplied by the precoding matrix to optimize
them considering factors like interference and channel characteristics.
• The modified signals, optimized by the precoding matrix, are
transmitted to the receiver to improve overall communication quality
and reliability.
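The "multiply data by a precoding matrix" step can be made concrete. The Python sketch below uses a classical zero-forcing precoder as the example (our choice for illustration; the proposed system uses an SOM-based precoder instead, and perfect channel knowledge is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 8, 4                       # transmit antennas, user streams

# Channel matrix H (users x antennas), assumed perfectly known here.
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoding matrix W = H^H (H H^H)^{-1}.  Multiplying the
# data by W pre-inverts the channel so streams arrive interference-free.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

s = rng.standard_normal(n_users) + 1j * rng.standard_normal(n_users)
x = W @ s            # precoded transmit signal, the P(X) of the block diagram
y = H @ x            # what the receivers observe (noise omitted)
```

With noise omitted, each user recovers exactly its own stream, which is the "minimizing interference between simultaneously transmitted data streams" goal stated in the objective.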
SELF-ORGANIZING MAPS (SOM)
• Self-Organizing Maps (SOMs), also known as Kohonen maps, are neural networks designed for unsupervised learning. They can be used as precoders in certain applications, particularly in communication systems.

• By training a SOM with information about the communication channel, it can learn to adaptively adjust the transmitted signals to exploit the spatial characteristics of the channel, optimizing the performance of the communication system.
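To make the SOM idea tangible, here is a minimal 1-D SOM training loop in Python (the project uses MATLAB's nctool; the node count, neighbourhood decay, and learning-rate schedule below are illustrative assumptions):

```python
import numpy as np

def train_som(data, n_nodes=4, epochs=20, lr=0.5, seed=0):
    """Minimal 1-D self-organizing map: each sample pulls its best-
    matching node (and, with decaying strength, that node's
    neighbours) toward itself."""
    rng = np.random.default_rng(seed)
    nodes = rng.standard_normal((n_nodes, data.shape[1]))
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)           # decaying learning rate
        for x in data:
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
            for j in range(n_nodes):               # neighbourhood update
                influence = np.exp(-abs(j - bmu))
                nodes[j] += rate * influence * (x - nodes[j])
    return nodes

# Two well-separated clusters of 2-D samples as stand-in channel data.
data = np.vstack([np.random.default_rng(1).normal(m, 0.1, (25, 2))
                  for m in (-2.0, 2.0)])
nodes = train_som(data)
```

After training, the node weight vectors settle near the structure of the input data; used as a precoder, that learned structure is what lets the SOM adapt the transmitted signals to the channel.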
NEURAL NET CLUSTERING
Batch Weight/Bias Rules (trainbu) – This algorithm trains the network by updating the weights and biases of the neurons based on the entire training set at each iteration.
Epoch – The number of complete passes of the data through the neural network. Enough epochs should be used for the fit to closely match the original curve; multiple epochs allow the weights to improve further.
Iterations – The number of batches needed to complete one epoch.
Batch – One epoch is too big to feed to the computer at once, so the data is divided into several smaller batches.
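The relation between the three terms above is simple arithmetic; the sample counts in the example are arbitrary:

```python
import math

def iterations_per_epoch(n_samples, batch_size):
    """One epoch = one full pass over the data; each iteration
    processes one batch, so an epoch takes ceil(n/batch) iterations."""
    return math.ceil(n_samples / batch_size)

# e.g. 1000 training samples fed in batches of 128
iters = iterations_per_epoch(1000, 128)
```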
WORK DONE
 Study of Levenberg-Marquardt algorithm and Self-organizing Map (SOM).
 Generated input signals for ‘n’ users in MIMO systems using mimoInputGenerator() function.
 Implementation of Self-Organizing Maps as a precoder at the transmitter using nctool.
 Generating Gaussian noise and mixing it with the transmitting signals for real-time observations.
 Implementation of Deep Neural Network at the receiver to analyze the precoded transmitted signals
by using Deep_neural_computing() as a function.
 Calculation of Throughput and Packet Delivery Ratio (PDR) and plotting those values in a graph
under the function pdr_tp().
 Calculation of Bit Error Rate (BER) and Signal to Noise Ratio (SNR) and plotting those values in a
graph under the function of snr_ber().
CODE SNIPPET

The mimoInputGenerator() function creates a random data stream (signals) for the multiple antennas (channels) in the MIMO system. It takes the number of signals, the number of samples per signal, and the sampling rate as inputs; its output is a matrix of random signal values for each signal and a time vector corresponding to the samples. The signal generation can be customized within the code to create different types of signals for the simulation.
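The MATLAB source is not reproduced on the slide, so the following is a hypothetical Python analogue of what mimoInputGenerator() is described to do; the name, the BPSK-style ±1 values, and the interface are assumptions:

```python
import numpy as np

def mimo_input_generator(n_signals, n_samples, fs, seed=0):
    """Generate a random data stream for each MIMO antenna.
    Returns an (n_signals x n_samples) matrix of signal values and
    the time vector corresponding to sampling rate fs."""
    rng = np.random.default_rng(seed)
    signals = rng.choice([-1.0, 1.0], size=(n_signals, n_samples))  # BPSK-like
    t = np.arange(n_samples) / fs
    return signals, t

signals, t = mimo_input_generator(4, 1000, 1e6)   # 4 antennas, 1 MHz sampling
```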
In the tx_precoder() function, the generated input signals are fed to the precoder. Precoding is done using a Self-Organizing Map (SOM) neural network for data stream manipulation, so the function transforms the raw input signal into a precoded signal that is transmitted over the channel.
In the rx_precoded_data_analyser() function, the sample rate is fed as input to ensure the same sampling rate at both the transmitter and receiver. The received signal is then observed and its performance metrics are calculated by comparing it with the transmitted signal using neural networks.
• This function defines and trains a deep neural network for classification tasks. It performs network training and evaluation, and provides functionality to analyze the classification performance.
• It sets up the neural network architecture and chooses the Levenberg-Marquardt algorithm (trainlm) for network training.
• It splits the data into training, validation, and testing sets using a random split ('dividerand') based on samples ('sample').
• It computes metrics such as True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN).
• This defines a function named pdr_tp() that takes one input
argument num_of_bits representing the total number of bits
to be transmitted.
• The function returns four outputs:
• pd: Array containing the number of packets delivered in
each iteration.
• dl: Array containing the number of packets lost in each
iteration.
• tp: Array containing the throughput (data rate) in each
iteration.
• pdr_out: Array containing the Packet Delivery Ratio
(PDR) in each iteration.
• The average packet size is taken as 80 bytes (8 bits per byte). This value is used to convert the number of bits delivered into the number of packets.
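Since the MATLAB code itself is not shown, here is a simplified Python stand-in for the pdr_tp() logic described above; the per-iteration delivery counts are synthetic random values (an assumption), and the 80-byte packet size is taken from the slide:

```python
import numpy as np

def pdr_tp(num_of_bits, n_iter=10, seed=0):
    """Sketch of pdr_tp(): per iteration, count delivered and lost
    packets, then compute throughput and Packet Delivery Ratio."""
    packet_bits = 80 * 8                     # 80-byte packets, 8 bits/byte
    total_packets = num_of_bits // packet_bits
    rng = np.random.default_rng(seed)
    pd = rng.integers(int(0.9 * total_packets), total_packets + 1, n_iter)
    dl = total_packets - pd                  # packets lost per iteration
    tp = pd * packet_bits                    # throughput: bits delivered
    pdr_out = pd / total_packets             # Packet Delivery Ratio
    return pd, dl, tp, pdr_out

pd, dl, tp, pdr_out = pdr_tp(64000)          # 64 kbit -> 100 packets
```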
This function plots the bit error rate (BER) performance of a communication system:
• Generates random data and simulates bit errors based on a chosen probability.
• Calculates theoretical and estimated BER for different signal-to-noise ratios (SNR).
• Plots the BER curves for both theoretical and estimated values.
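The three steps above (random data, error simulation, theoretical-vs-estimated comparison) can be sketched as follows. This Python version assumes BPSK over AWGN as the illustrative modulation, since the slide does not specify one, and omits the plotting call:

```python
import numpy as np
from math import erfc, sqrt

def ber_curve(snr_db, n_bits=200_000, seed=0):
    """Simulate BPSK over AWGN at each SNR value and compare the
    measured bit error rate against the theoretical value."""
    rng = np.random.default_rng(seed)
    theo, est = [], []
    for s in snr_db:
        ebn0 = 10 ** (s / 10)                        # dB -> linear Eb/N0
        bits = rng.integers(0, 2, n_bits)
        tx = 2 * bits - 1                            # BPSK mapping: 0->-1, 1->+1
        rx = tx + rng.normal(0, sqrt(1 / (2 * ebn0)), n_bits)
        est.append(np.mean((rx > 0).astype(int) != bits))
        theo.append(0.5 * erfc(sqrt(ebn0)))          # theoretical BPSK BER
    return np.array(theo), np.array(est)

theo, est = ber_curve(range(1, 11))
```

Plotting `theo` and `est` against the SNR axis yields the two curves the function is described as producing.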
This function calculates how well a classification model performs. It takes true/false positives/negatives (TPV, TNV, FPV, FNV) as input.
Accuracy (ACC): Measures the overall correctness of the model, calculated as the ratio of correctly predicted samples to the total number of samples.
Precision (PREC): Ratio of true positives among predicted positives. It indicates how reliable the model's positive predictions are.
Recall (REC): Ratio of true positives identified out of all actual positives.
Measures the ability of the model to correctly identify positive samples out
of all actual positive samples.
F1 Score (F1SCO): It provides a balance between precision and recall.
Specificity (SPEC): Ratio of true negatives among all actual negatives.
Measures the ability of the model to correctly identify negative samples.
MCC (MCC): It represents the correlation coefficient between the
observed and predicted classifications.
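The metric definitions above follow directly from the four counts. A Python sketch (the function name and dictionary interface are our own; the formulas are the standard ones):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Compute the evaluation metrics listed above from raw
    true/false positive/negative counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)            # overall correctness
    prec = tp / (tp + fp)                            # predicted-positive quality
    rec = tp / (tp + fn)                             # actual-positive coverage
    f1 = 2 * prec * rec / (prec + rec)               # precision/recall balance
    spec = tn / (tn + fp)                            # actual-negative coverage
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"ACC": acc, "PREC": prec, "REC": rec,
            "F1SCO": f1, "SPEC": spec, "MCC": mcc}

m = classification_metrics(tp=40, tn=45, fp=5, fn=10)
```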
EXPLANATION OF OUTPUTS
MIMO-PRECODED PERFORMANCE
• The x-axis is labeled "SNR" (Signal-to-Noise Ratio) in dB, ranging from 1 dB to 10 dB, representing the ratio between the desired signal power and the background noise power.
• The y-axis is labeled "BER" (Bit Error Rate), ranging from 10^-5 to 1, indicating the frequency of errors in received bits.
• The theoretical BER curve represents the BER performance of the communication system. It starts near 1 (a very high error rate) at 1 dB SNR and improves as the SNR increases; however, even at 10 dB SNR the BER remains moderately high, around 10^-4.
• The estimated BER curve shows a significant improvement over the existing model: the BER with coding is much lower across all SNR values. At 10 dB SNR, the coded BER is around 10^-2, bringing the estimated BER closer to the theoretical BER.
(Figure panels: existing model and proposed model.)
PDR-MIMO-PRECODED PERFORMANCE

• The x-axis of the graph represents the PDR score (Packet Delivery
Ratio) obtained from the simulation.
• The y-axis indicates the probability of achieving a specific PDR score,
presenting the distribution of PDR values.
• Two curves are visible on the plot, likely representing the theoretical
PDR (ideal scenario) and the PDR achieved in the simulation (proposed
PDR).
• The close proximity of both curves suggests that the proposed PDR performs well, aligning closely with theoretical expectations.
CORRELATION OF Tx MIMO/Rx MIMO
• The x-axis represents the number of MIMO channels, with MIMO referring to a technology utilizing multiple antennas for transmission and reception.
• The y-axis indicates the difference in Peak RSSI (Received Signal Strength Indicator) between expected and measured values.
• The graph illustrates how the correlation between transmitted (Tx MIMO) and received (Rx MIMO) signals varies with the number of channels in a MIMO system.
• Correlation is determined by analyzing the difference in Peak RSSI between the Tx and Rx sides for each channel.
• A difference of 0 between Peak RSSI (Tx and Rx) would signify perfect positive correlation, indicating minimal signal degradation or distortion across all channels. Conversely, a difference of -1 indicates less error in correlation.
• Classification model evaluation metrics such as accuracy, precision, recall, F1 score, specificity, and Matthews correlation coefficient are vital for assessing machine learning model effectiveness.
• These metrics are essential for tasks like model selection, parameter tuning, and overall performance optimization.
• The metrics suggest that the model excels in delivering data packets with minimal errors, indicating high throughput and Packet Delivery Ratio (PDR).
• The model demonstrates remarkably high precision, indicating its proficiency in correctly identifying positive instances.
• The combination of low overall accuracy with significantly high precision suggests the model's capability to minimize false positives, even in scenarios with imbalanced datasets.
ADVANTAGES
• The channel is used efficiently.
• System performance is increased.
• Packet Delivery Ratio and throughput increase.
• Lower Bit Error Rate (BER).
• Higher Signal-to-Noise Ratio (SNR).
CONCLUSION
• This project proposes an augmentation of Massive MIMO system performance through mitigating interference among data
streams, facilitated by precoding at the transmitter.
• Leveraging precoding techniques alongside neural network analysis, the performance of the system was comprehensively
evaluated.
• Simulation outcomes reveal a significant increase in Signal-to-Noise Ratio (SNR), along with lower Bit Error Rates (BER) and a
high Packet Delivery Rate (PDR), indicating promising advancements in wireless communication within future network
deployments.
• These findings underscore the potential of the proposed approach to address the burgeoning demand for high-capacity and
reliable wireless communication services in next-generation networks.
REFERENCES
1. Aravind Ganesh Pathapati, Nakka Chakradhar, PNVSSK Havish, Sai Ashish Somayajula, Saidhiraj Amuru, “Supervised Deep Learning for MIMO Precoding”, IEEE 3rd 5G World Forum (5GWF), pp. 418-423, 2020. DOI: 10.1109/5GWF49715.2020.9221261
2. Jordan Pauls and Mojtaba Vaezi, “Secure Precoding in MIMO-NOMA: A Deep Learning Approach”, IEEE Wireless Communications Letters, vol. 11, pp. 77-80, January 2022. DOI: 10.1109/LWC.2021.3120594
3. Chayaphol Karnna, Sakol Udomsiri, Samphan Phrompichai, “Channel Estimation Based On Pilot Signal and Iterative Method for TDD Based Massive MIMO Systems”, IEEE Xplore, pp. 1-4, March 2022. DOI: 10.1109/iEECON53204.2022.9741650
4. Asil Koc, Mike Wang, Tho Le-Ngoc, “Deep Learning based Multi-User Power Allocation and Hybrid Precoding in Massive MIMO Systems”, IEEE International Conference on Communications (ICC), pp. 16-20, May 2022. DOI: 10.1109/ICC45855.2022.9839162
5. Abdussalam Masaud Ammar, Amira Youssef Ellafi, Amer R. Zerek, “Studying and Investigation of Energy Efficiency in massive MIMO Using MMSE, ZF and MRT Algorithms”, IEEE 21st International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), pp. 705-709, December 2022. DOI: 10.1109/STA56120.2022.10019103
REFERENCES
6. Ragit Dutta, “Performance Analysis of Massive MIMO under pilot contamination”, Journal of Physics: Conference Series, IOP 4th International Conference on Intelligent Circuits and Systems, pp. 1-9, December 2022. DOI: 10.1088/1742-6596/2327/1/012051
7. Xisuo Ma and Zhen Gao, “Data-Driven Deep Learning to Design Pilot and Channel Estimator for Massive MIMO”, IEEE, vol. 69, pp. 5677-5682, March 2021. DOI: 10.1109/TVT.2020.2980905
8. Jiamin Li, Zhenggang Wu, Pengcheng Zhu, Dongming Wang, and Xiaohu You, “Scalable Pilot Assignment Scheme for Cell-Free Large-Scale Distributed MIMO With Massive Access”, IEEE, vol. 9, pp. 122107–122112, September 2021. DOI: 10.1109/ACCESS.2021.31102
THANK YOU
