A New Efficient Decoder of Linear Block Codes Based On Ensemble Learning Methods
Mohammed El Assad1, Said Nouh1, Imrane Chemseddine Idrissi1, Seddiq El Kasmi Alaoui2,
Bouchaib Aylaj3, Mohamed Azzouazi1
1 Laboratory of Information Technologies and Modeling (LTIM), Faculty of Sciences Ben M’sick, Hassan II University, Casablanca, Morocco
2 Laboratorium Information System (LIS), Faculty of Sciences Ain Chock, Hassan II University, Casablanca, Morocco
3 Department of Computer Sciences, CRMEF, Rabat, Morocco
Corresponding Author:
Mohammed El Assad
LTIM Lab, Faculty of Sciences Ben M’sick, Hassan II University
Casablanca, Morocco
Email: [email protected]
1. INTRODUCTION
The growing exchange and transmission of data in our society necessitates the implementation of
specialized processes to detect and correct errors that may arise during communication via various channels.
Whether it's through wired connections or wireless networks, ensuring the reliable transmission of digital
information is crucial. Regardless of the specific type of transmission medium being used, the primary focus
lies in establishing robust mechanisms that can handle error detection and correction, thereby maintaining the
integrity of the transmitted data.
The information can be of any type provided that it can be given a digital representation: texts, images,
sounds, and videos. The transmission of these types of data is ubiquitous in all systems related to data
processing and especially in the world of telecommunications. Transmission channels are often affected by noise; however, it is essential that the information collected or transmitted is received correctly. There is therefore a need to make the transmission more reliable: this is the role of error-correcting codes. During its transmission from the sender to the receiver, a message undergoes several operations, including coding and decoding.
In this article we focus our study on linear block codes. Each message to be processed (transmitted and stored) is segmented into blocks of k elements, and each block is encoded by the channel coder into a block of n elements (n > k). In the rest of this paper we represent a code C by C(n, k, d), where n, k, and d are respectively the length, the dimension, and the minimum distance of the code C. G is a systematic generator matrix of C and H a parity check matrix of C.
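To make this notation concrete, the following minimal sketch (not taken from the paper) encodes a block with a systematic generator matrix G = [I_k | P] and verifies that every codeword has a null syndrome with respect to H = [P^T | I_{n-k}]; the (7, 4, 3) Hamming-type code used here is purely illustrative.

import numpy as np

# Illustrative example: a systematic (7, 4, 3) Hamming-type code over GF(2),
# with G = [I_k | P] and H = [P^T | I_{n-k}].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
k, r = P.shape                               # k = 4 information bits, r = n - k = 3 parity bits
G = np.hstack([np.eye(k, dtype=int), P])     # systematic generator matrix
H = np.hstack([P.T, np.eye(r, dtype=int)])   # parity check matrix

m = np.array([1, 0, 1, 1])                   # message block of k bits
c = m @ G % 2                                # codeword of n = 7 bits
assert not (c @ H.T % 2).any()               # every codeword has a null syndrome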
Within the realm of linear error-correcting codes, there exists a notable subgroup known as cyclic
codes. Unlike other linear codes, which are defined by generator matrices, cyclic codes are characterized by
generator polynomials. This distinction simplifies the encoding process, making it more efficient and suitable
for various applications. Two well-known instances of cyclic codes are BCH codes, named after their creators
Bose, Ray-Chaudhuri, and Hocquenghem, and quadratic residue (QR) codes. Both BCH and QR codes
leverage the cyclic properties to enhance error correction and data storage capabilities in diverse fields, such
as data transmission and storage.
Decoding an error-correcting code is a nondeterministic polynomial time (NP)-hard problem [1], [2].
Hard decision decoders work on the binary form of outputs of the transmission channel. In this article we focus
our work on the application of an artificial intelligence-based model for the decoding problem. Given the
complexity of the issue, numerous linear code decoding techniques have been developed. These include
algorithms developed by solving multivariate nonlinear equations derived from Newton's identities [3]–[5].
Chien [6] offers methods for decoding binary systematic QR codes using lookup tables. The one-to-one correspondence between syndromes and correctable error patterns serves as the foundation for the decoding technique. Without requiring operations such as addition and multiplication over a finite field, the technique uses lookup tables to directly locate errors. The author also discusses ways to use shift-search decoding to lower memory requirements.
Other approaches make use of local search and genetic algorithms. Some articles [7], [8] present some
learning-based algorithms for error correction. A new deep-learning technique for enhancing the belief
propagation (BP) algorithm for decoding linear block codes is presented in [7]. Imrane et al. [8] used a machine
learning approach along with a syndrome calculation to enhance the performance of another BP-based
technique, which is applicable to BCH and QR codes, in terms of bit error rate (BER) and time complexity. In
the same spirit, Nachmani et al. [9] have presented an architecture for recurrent neural networks that can correct
errors efficiently. In spite of the huge example space, it has been discovered that employing a feed-forward
neural network design can outperform classical BP decoding. Alaoui et al. [10] have used hash techniques in
conjunction with syndrome computation to create decoders with shorter run times. Their suggested decoders
work with linear codes. Chu et al. [11] present an efficient algorithm that reduces the number of queries of guessing random additive noise decoding (GRAND) when the codes are systematic and cyclic. The artificial reliabilities based decoding algorithm using genetic algorithms (ARDecGA) is described in [12]. For its decoding procedure, it creates a vector of artificial reliabilities from the received binary word and uses a genetic algorithm to locate the binary word with the highest likelihood with respect to this vector.
Boualame et al. [13] have presented a solution for decoding the QR(17, 9, 5) code. They propose a
methodology that involves identifying the positions of errors within the code. Specifically, they utilize the
inverse free Berlekamp-Massey algorithm [14] to decode the code by determining the error-locators of
algebraic-geometric codes. This approach offers a systematic way to decode the QR(17, 9, 5) code and retrieve
the encoded information accurately. Niharmine et al. [15] present a novel soft decoding method based on the simulated annealing (SA) algorithm. The decoder's key contribution is that it generates neighboring solutions using the most reliable information of the received codeword as the starting solution. By reducing the search space and taking into account the error-correcting capability of the code, the obtained performances are enhanced.
Joundan et al. [16] presented an evolutionary algorithm to design good linear codes with large
minimum weight and low dual minimum distance. Certain codes obtained using their method are the best in terms of the minimum Hamming distance they provide. For instance, the four codes listed in Table 1 reach the best known minimum distance for their lengths and dimensions. SA is utilized to correct many
errors [17]. Many other decoders have been developed to enhance correction quality. Alaoui et al. [18] have studied the efficiency of their decoders over a Rayleigh channel. Ruan [19] presents general insights on applying neural network decoders to satellite communications. Chen and Ye [20] proposed a neural decoder. Khebbou et al. [21] have
adapted a polar code decoding technique in favor of the extended Golay code. Boualame et al. [22] have
proposed a decoder that uses a condensed set of permutations drawn from the huge automorphism group of
QR codes to rectify t or fewer incorrect bits in the received word.
In the realm of machine learning, computers have the ability to learn and evolve through experience,
without the need for explicit programming [23]. These machine learning models employ various approaches
to analyze and learn from data in order to make accurate predictions. To improve the accuracy and reliability
of predictions, ensemble methods are utilized, which combine the predictions of multiple predictors. Ensemble
learning encompasses different families of methods such as boosting, bagging, and stacking, each with its own
unique characteristics and advantages. In general, there are several families of methods (a brief scikit-learn sketch of the three families follows this list):
– Adaptive methods (boosting), where the parameters are iteratively adapted to produce a better mixture. Many weak learners learn sequentially and their decisions are combined following a deterministic strategy. In this paper, we focus our work on this type of ensemble learning.
– Averaging methods (bagging, random forest), where many strong learners learn independently from each other in parallel and their decisions are combined following some kind of deterministic averaging process.
– Stacking, which uses a meta-model to output a prediction.
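As an illustration only, the three families can be instantiated with the standard scikit-learn classes shown below; this sketch is not part of the proposed decoder, and some argument names may differ slightly between scikit-learn versions.

from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Boosting: weak learners trained sequentially, decisions combined by a weighted vote.
boosting = AdaBoostClassifier(n_estimators=50)

# Averaging: strong learners trained independently in parallel, decisions averaged.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)
forest = RandomForestClassifier(n_estimators=50)

# Stacking: a meta-model learns how to combine the base predictions.
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()), ("ada", boosting)],
    final_estimator=LogisticRegression())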
The principle of boosting is to evaluate a sequence of weak learners on several slightly modified versions of the training data. The decisions obtained are then combined by a weighted sum to obtain the final model. Decorrelated weak classifiers can be generated by iteratively learning the classifiers and modifying the training sample at each iteration: the importance of well-classified examples decreases while the importance of misclassified examples increases. The obtained classifiers are then combined by computing the weighted sum of their decisions. There are many boosting algorithms; the best known is adaptive boosting (AdaBoost) [24].
The AdaBoost algorithm is a powerful machine learning algorithm that combines weak classifiers to construct a robust classifier. Algorithm 1 functions by iteratively adjusting the weights of incorrectly classified instances: it assigns higher weights to misclassified examples in each iteration, prompting subsequent weak classifiers to prioritize those instances and thus improving the overall accuracy of the final classifier. The iterative nature of AdaBoost results in a strong classifier that excels at handling complex and challenging classification tasks.
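For clarity, here is a minimal sketch of the discrete AdaBoost loop just described, for binary labels in {-1, +1}; the helper names are illustrative and not part of the paper's implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_estimators=10):
    """Discrete AdaBoost for labels y in {-1, +1} (illustrative sketch)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # example weights, initially uniform
    stumps, alphas = [], []
    for _ in range(n_estimators):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])           # weighted training error
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)       # raise the weight of misclassified examples
        w /= w.sum()                         # renormalize the weights
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)                    # weighted majority vote of the weak classifiers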
Boosting with scikit-learn: the AdaBoostClassifier class implements this algorithm. The most important parameters used in this paper are as follows: i) n_estimators: integer, optional (default=50), the number of weak classifiers; ii) learning_rate: controls the speed of change of the weights per iteration; and iii) base_estimator (default: decision tree classifier), the weak classifier used.
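A hedged configuration example is given below; the parameter values are placeholders rather than the ones tuned in this paper, and in recent scikit-learn releases the base_estimator argument is named estimator (passing it positionally works in both cases).

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder values, not those reported in the paper.
clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=5),   # the weak classifier (base estimator)
    n_estimators=100,                      # number of weak classifiers
    learning_rate=0.5)                     # shrinks the contribution of each weak classifier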
The rest of this paper is organized as follows: section 2 presents the proposed EL-BoostDec decoder (a hard decision decoder based on the ensemble learning boosting technique). Section 3 gives some simulation results of EL-BoostDec, interprets them, compares the proposed decoder with some competitors, and discusses its power. Finally, the last section presents a conclusion and perspectives.
2. METHOD
Our decoder, EL-BoostDec, functions as a robust hard decision decoder, leveraging the power of ensemble learning through the boosting technique. The decoding process involves the computation of syndromes, and the application of ensemble learning methods aids in the identification and correction of errors within the received data. The preparation and operation of EL-BoostDec are executed in a systematic manner, ensuring efficiency and accuracy in error correction. EL-BoostDec works as follows:
– Step 1: the preparation of the dataset containing the attributes that characterize the different syndromes (the n-k columns) and the classes that represent the errors; each error is represented by an integer that is the decimal value of the binary error vector encoded on n bits. In total, our dataset contains n-k+1 columns, the last of which represents the error. When a syndrome does not correspond to any correctable error (of weight lower than or equal to the correction capability of the code), the null error is attributed to it. In the following, X represents the first n-k columns and Y represents the last column, which is the error column in decimal format.
– Step 2: training a powerful EL-BoostDec classifier (an efficient machine learning model) based on
boosting methods to learn to find the error from the syndrome.
– Step 3: using the trained classifier to correct the data transmission errors.
Once the EL-BoostDec classifier has undergone training, its functionality aligns with the algorithm
described in the subsequent section, namely Algorithm 2. In this operational phase, the decoder applies the
acquired knowledge from the training process to effectively identify and correct errors in the received data.
This approach ensures a streamlined and optimized decoding process, showcasing the practical implementation
and effectiveness of EL-BoostDec in error correction tasks.
Algorithm 2 (operational phase of EL-BoostDec), final steps:
5: v ← EL-BoostDec(S): prediction of the error v that corresponds to S; e ← binary version of the number v
6: c ← b ⊕ e
7: End
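The following sketch illustrates how this operational phase could be realized in Python, assuming a trained classifier clf, the parity check matrix H, and the hard-decision received word b are available as numpy arrays; the function name is illustrative and the sketch is not the authors' implementation.

import numpy as np

def el_boostdec_decode(b, H, clf, n):
    """Illustrative sketch of Algorithm 2: correct the hard-decision word b."""
    S = b @ H.T % 2                                    # binary syndrome of the received word
    v = int(clf.predict(S.reshape(1, -1))[0])          # predicted error index (decimal)
    e = np.array(list(np.binary_repr(v, width=n)), dtype=int)  # binary error pattern on n bits
    return b ^ e                                       # corrected word c = b XOR e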
The index of the null error (the all-zero pattern) is zero. The number of correctable errors (NEC) depends on the error-correcting capability t of the code: NEC = 1 + Σ_{i=1}^{t} C(n, i), with t = ⌊(d-1)/2⌋, where C(n, i) is the binomial coefficient and ⌊x⌋ represents the largest integer less than or equal to x. For any error pattern e, of length n and weight w less than or equal to the correction capability t, we compute its binary syndrome S(e) = e·H^T and add it into X; in the corresponding row (of the same index) we store in Y the integer that represents the correctable error e associated with S.
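A possible realization of this dataset construction is sketched below (illustrative only, assuming H, n, and t are available); it enumerates all error patterns of weight at most t, stores their syndromes in X and their decimal indices in Y, and then fits the boosting classifier.

import numpy as np
from itertools import combinations
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def build_dataset(H, n, t):
    """Syndrome/error-index pairs for all error patterns of weight <= t (illustrative sketch)."""
    X, Y = [], []
    patterns = [()] + [c for w in range(1, t + 1) for c in combinations(range(n), w)]
    for positions in patterns:                 # NEC = 1 + sum_{i=1..t} C(n, i) patterns in total
        e = np.zeros(n, dtype=int)
        e[list(positions)] = 1
        X.append(e @ H.T % 2)                  # attributes: the n-k syndrome bits
        Y.append(int("".join(map(str, e)), 2)) # class: decimal value of the error vector
    return np.array(X), np.array(Y)

# Example usage (H, n, t taken from the code under study):
# X, Y = build_dataset(H, n, t)
# clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=100).fit(X, Y)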
From Figure 3, we deduce that the EL-BoostDec decoder used to decode the BCH(31, 16, 7) code reaches a BER equal to 10^-5 at the SNR 7.6 dB. When it is used to decode the BCH(31, 26, 3) code, it achieves the same BER at SNR 8.2 dB, i.e. a coding gain of 0.6 dB. From Figure 4, we deduce that the EL-BoostDec decoder used to decode the BCH(63, 51, 5) code reaches a BER equal to 10^-5 at the SNR 7.4 dB, i.e. a coding gain of about 2.2 dB. When it is used to decode the BCH(63, 57, 3) code, it achieves the same BER at SNR 8 dB, i.e. a coding gain of about 1.6 dB.
Figure 2. Performances of EL-BoostDec for some BCH codes of length 15
Figure 3. Performances of EL-BoostDec for some BCH codes of length 31
3.6.2. Comparison of EL-BoostDec, HSDec, ARDecGA, and BERT decoders [10], [12], [25]
Figure 10 provides a comprehensive performance comparison of decoding algorithms, including EL-BoostDec, hash and syndrome decoding (HSDec) [10], ARDecGA [12], and the bit error rate test (BERT) decoder [25], specifically applied to the BCH(15, 7, 5) code. The analysis of this comparison reveals that EL-BoostDec exhibits comparable performance to HSDec and ARDecGA and notably surpasses the BERT decoder for this particular code. This insight underscores the competitive and effective nature of the EL-BoostDec algorithm in decoding the BCH(15, 7, 5) code, showcasing its potential as a robust error-correction tool.
Figure 8. The impact of the max depth value on EL-BoostDec performances
Figure 9. The impact of the learning rate parameter on EL-BoostDec performances
Figure 10. Performances of EL-BoostDec, HSDec, ARDecGA, and BERT decoder for the BCH(15, 7, 5) code
3.6.7. Comparison of EL-BoostDec and the simulated annealing decoder of Aylaj and Belkasmi [17]
Aylaj and Belkasmi [17] introduce a novel variant of the SA method for error correction. To evaluate
its efficacy, a performance comparison was conducted between EL-BoostDec and this SA decoder, specifically
applied to the BCH(31, 16, 7) and BCH(15, 7, 5) codes. Remarkably, the comparison reveals that both decoders
yield identical results for these codes, suggesting a comparable level of performance in error correction. This
finding underscores the robustness of EL-BoostDec, showcasing its effectiveness alongside a state-of-the-art
SA decoder in the context of BCH codes.
REFERENCES
[1] K. Knight, “Decoding complexity in word-replacement translation models,” Computational Linguistics, vol. 25, no. 4, pp. 607–615, 1999.
[2] E. R. Berlekamp, R. J. McEliece, and H. C. A. van Tilborg, “On the inherent intractability of certain coding problems,” IEEE
Transactions on Information Theory, vol. 24, no. 3, pp. 384–386, 1978, doi: 10.1109/TIT.1978.1055873.
[3] I. S. Reed, X. Yin, T. K. Truong, and J. K. Holmes, “Decoding the (24,12,8) Golay code,” IEE Proceedings E: Computers and
Digital Techniques, vol. 137, no. 3, pp. 202–206, 1990, doi: 10.1049/ip-e.1990.0025.
[4] I. S. Reed and T. K. Truong, “Algebraic decoding of the (32,16,8) quadratic residue code,” IEEE Transactions on Information
Theory, vol. 36, no. 4, pp. 876–880, 1990, doi: 10.1109/18.53750.
[5] T. K. Truong, X. Chen, and X. Yin, “The algebraic decoding of the (41, 21, 9) quadratic residue code,” IEEE Transactions on
Information Theory, vol. 38, no. 3, pp. 974–986, 1992, doi: 10.1109/18.135639.
[6] C.-H. Chien, “Developing efficient algorithms of decoding the systematic quadratic residue code with lookup tables,” International
Journal of Operations Research, vol. 13, no. 4, pp. 165–174, 2016.
[7] E. Nachmani, Y. Be’Ery, and D. Burshtein, “Learning to decode linear codes using deep learning,” in 54th Annual Allerton
Conference on Communication, Control, and Computing, Allerton 2016, IEEE, Sep. 2017, pp. 341–346. doi:
10.1109/ALLERTON.2016.7852251.
[8] C. I. Imrane, N. Said, B. El Mehdi, E. K. A. Seddiq, and M. Abdelaziz, “Machine learning for decoding linear block codes: case of
multi-class logistic regression model,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 24, no. 1, pp. 538–
547, 2021, doi: 10.11591/ijeecs.v24.i1.pp538-547.
[9] E. Nachmani, E. Marciano, D. Burshtein, and Y. Be’ery, “RNN decoding of linear block codes,” arXiv-Computer Science, pp. 1-7,
2017.
[10] M. S. E. K. Alaoui, S. Nouh, and A. Marzak, “Two new fast and efficient hard decision decoders based on hash techniques for real
time communication systems,” in Lecture Notes in Real-Time Intelligent Systems, Cham: Springer, 2019, pp. 448–459. doi:
10.1007/978-3-319-91337-7_40.
[11] S. -I. Chu, S. -A. Ke, S. -J. Liu, and Y. -W. Lin, “An efficient hard-detection GRAND decoder for systematic linear block codes,”
in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 31, no. 11, pp. 1852-1864, Nov. 2023, doi:
10.1109/TVLSI.2023.3300568
[12] S. Nouh, A. El Khatabi, and M. Belkasmi, “Majority voting procedure allowing soft decision decoding of linear block codes on
binary channels,” International Journal of Communications, Network and System Sciences, vol. 05, no. 09, pp. 557–568, 2012, doi:
10.4236/ijcns.2012.59066.
[13] H. Boualame, I. Chana, and M. Belkasmi, “New efficient decoding algorithm of the (17, 9, 5) quadratic residue code,” in
Proceedings - 2018 International Conference on Advanced Communication Technologies and Networking, CommNet 2018, IEEE,
Apr. 2018, pp. 1–6. doi: 10.1109/COMMNET.2018.8360258.
[14] R. Sahu, B. P. Tripathi, and S. K. Bhatt, “New algebraic decoding of (17, 9, 5) quadratic residue code by using inverse free Berlekamp-Massey algorithm (IFBM),” International Journal of Computational Intelligence Research (IJCIR), vol. 13, no. 8, pp. 2015–2027, 2017.
[15] L. Niharmine, H. Bouzkraoui, A. Azouaoui, and Y. Hadi, “Simulated annealing decoder for linear block codes,” Journal of
Computer Science, vol. 14, no. 8, pp. 1174–1189, Aug. 2018, doi: 10.3844/jcssp.2018.1174.1189.
[16] I. A. Joundan, S. Nouh, and A. Namir, “Design of good linear codes for a decoder based on majority voting procedure,” in 2016
International Conference on Advanced Communication Systems and Information Security, ACOSIS 2016 - Proceedings, IEEE,
2017. doi: 10.1109/ACOSIS.2016.7843918.
[17] B. Aylaj and M. Belkasmi, “Simulated annealing decoding of linear block codes,” in Proceedings of the Mediterranean Conference
on Information & Communication Technologies 2015: MedCT 2015 Volume 1, Springer International Publishing, 2016, pp. 175–
183. doi: 10.1007/978-3-319-30301-7_19.
[18] S. El K. Alaoui, Z. Chiba, H. Faham, M. El Assad, and S. Nouh, “Efficiency of two decoders based on hash techniques and syndrome
calculation over a Rayleigh channel,” International Journal of Electrical and Computer Engineering, vol. 13, no. 2, pp. 1880–1890,
Apr. 2023, doi: 10.11591/ijece.v13i2.pp1880-1890.
[19] X. Ruan, “Deep learning algorithms for BCH decoding in satellite communication,” Highlights in Science, Engineering and
Technology, vol. 38, pp. 1104–1115, 2023, doi: 10.54097/hset.v38i.6012.
[20] X. Chen and M. Ye, “Neural decoders with permutation invariant structure,” Journal of the Franklin Institute, vol. 360, no. 8, pp.
5481–5503, 2023, doi: 10.1016/j.jfranklin.2023.03.024.
[21] D. Khebbou, I. Chana, and H. Ben-Azza, “Decoding of the extended Golay code by the simplified successive-cancellation list
decoder adapted to multi-kernel polar codes,” Telkomnika (Telecommunication Computing Electronics and Control), vol. 21, no.
3, pp. 477–485, 2023, doi: 10.12928/TELKOMNIKA.v21i3.23360.
[22] H. Boualame, M. Belkasmi, and I. Chana, “Efficient decoding algorithm for binary quadratic residue codes using reduced
permutation sets,” Journal of Computer Science, vol. 19, no. 4, pp. 526–539, Apr. 2023, doi: 10.3844/jcssp.2023.526.539.
[23] G. Kunapuli, Ensemble methods for machine learning, Shelter Island, New York: Simon and Schuster, 2023.
[24] K. W. Walker, “Exploring adaptive boosting (AdaBoost) as a platform for the predictive modeling of tangible collection usage”,
The Journal of Academic Librarianship, vol. 47, no. 6, 2021, doi: 10.1016/j.acalib.2021.102450.
[25] M. Elghayyaty, A. Hadjoudja, O. Mouhib, A. El Habti, and M. Chakir, “Performance study of BCH error correcting codes using the bit error rate term BER,” International Journal of Engineering Research and Applications, vol. 7, no. 2, pp. 52-54, 2017, doi: 10.9790/9622-0702025254.
BIOGRAPHIES OF AUTHORS
Bouchaib Aylaj received his joint Ph.D. degree in computer science from the Faculty of Sciences, Chouaib Doukkali University, and ENSIAS, Mohammed V University, Morocco, in 2016. He is currently an assistant professor at the CRMEF-Rabat, Morocco. His research interests include artificial intelligence, error control coding for digital communications, digital signal processing, and embedded systems. He can be contacted at email: [email protected].