
2020 International Conference on Computer, Control, Electrical and Electronics Engineering (ICCCEEE)

Blind Identification of Convolutional Codes Based on Viterbi Algorithm

1st Abusabah I. A. Ahmed, Engineering College, Karary University, Khartoum, Sudan ([email protected])
2nd Abdeldafia Mohammed, Engineering College, Karary University, Khartoum, Sudan ([email protected])
3rd Abdelaziz Y. M. Abbas, Engineering College, Al-Mugtarbein University, Khartoum, Sudan (abdelazizyousif@mu.edu.s)

DOI: 10.1109/ICCCEEE49695.2021.9429628 | 978-1-7281-9111-9/20/$31.00 ©2021 IEEE

Abstract- In most digital transmission systems, the data stream is encoded and then sent over a channel. Many applications, such as military or spectrum surveillance, need to extract information in a non-cooperative context, where an unauthorized terminal has to estimate the parameters of the code: the interceptor has access only to the noisy transmission it has intercepted. Blind identification of the parameters of a convolutionally encoded message, based on calculating the rank of matrices formed from the noisy intercepted data, is achieved successfully. The Viterbi algorithm (VA) is then used in MATLAB to decode the intercepted streams with the estimated parameters. The simulation results show that the effectiveness and performance of the parameter estimation for rate 1/2 convolutional codes is better than that of rate 1/3.

Keywords- Convolutional Codes (CC), Rate 1/2 CC, Parameters Estimation, Decoding Algorithm, Viterbi Algorithm.

I. INTRODUCTION

Channel coding is an important technique applied for the safety of information storage and transmission. Error Correcting Codes (ECC) are a useful part of mathematics and informatics in the channel coding area: for highly reliable communication through a noisy medium we need a trustworthy ECC [1]. Adding parity bits to the original message bits makes the coded message longer [2]. A block diagram of a secure information system, considering all levels from source coding, transmission and the receiving end to the decoding stage, is shown in Fig. 1. The net output of the securing process described in this work is the message at the destination end. The channel encoder transforms the information sequence u into a code word [1]. The sequence of demodulator outputs corresponding to the encoded sequence v is called the received sequence, and here we denote it by r (see Fig. 1). The channel decoder performs a transformation of the received sequence r into a binary sequence û, which represents the estimated information sequence decoded at the destination end.

All recent multimedia applications depend on secure data links that employ a sufficient coding technique. Authorized terminals have the authority to edit, control and resend the network information and data. Beside the known principles of channel encoding/decoding procedures, the decoding applied in this work also relies on the defined channel noise characteristics.

Fig. 1. Block diagram of an information transmission or storage system

The general task of channel coding is simply to encode the knowledge or raw data sent over a data link in a secure mode, guaranteeing the absence of hacking and of transmission errors through detection and correction. There are two coding methods: the first is Forward Error Correction (FEC), the second is Backward Error Correction (BEC). The second one requires only error detection: if a slip-up is detected, the sender is requested to retransmit the message. While this method is easy and sets lower requirements on the code's error-correcting properties, it needs duplex communication and causes undesirable delays in transmission.

II. CHANNEL CODING TYPES

In this section the most important kinds of channel coding are covered: Convolutional Coding (CC) and Block Coding (BC). CC deals with real-time applications, processing a single bit or a few bits at a time. CC is a suitable methodology, applied for secure and safe information media and backup systems; the sent message is decoded at the receiving end from the original source, avoiding the re-transmission of information bits while keeping them error free (detected and corrected) during the transmission process [3, 4]. The CC channel coding methodology is applied in this paper for its truthful and reliable results.

A. Block Codes

Block Codes (BC) are a powerful channel code technique. The (n, k) BC codes are widely implemented in the literature, where n is the code word length and k indicates the number of source information bits. The generator matrix G in (1) is of (k × n) order and is applied for the code generation.

G = [I_k  P]_{k×n}    (1)

I_k represents an identity matrix of k × k order, and P is a k × (n − k) matrix that gives the code its preferable properties. Now we can define the parity check matrix to be:

H = [P^T  I_{n−k}]_{(n−k)×n}    (2)

The resulting syndrome is

S = R H^T    (3)

Then, with E the error pattern,

R = C + E    (4)

R H^T = (C + E) H^T = C H^T + E H^T = E H^T    (5)

since C H^T = 0 for any code word C. From the matrix G we can get the collection of code words in Table I.

Table I. CODE WORDS GENERATED BY G

Message (D)   Code word (C)
000           000000
001           001110
010           010101
011           011011
100           100011
101           101101
110           110110
111           111000

In particular, when D = [0 1 1] the code word is C = [0 1 1 0 1 1]. If there is no error, S = 0, R ≡ C, and D is the first k bits of R. For the received code word R = [0 1 1 1 1 0], R H^T = [1 0 1], which is the second row of H^T, so the error lies in the second bit.
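As a quick illustration of (1)-(5), the following MATLAB sketch builds the (6, 3) code of Table I and evaluates the syndrome of the received word used above. It is an illustrative example, not code from the paper; the parity part P is not given explicitly in the text, so the value below is inferred from Table I.

    % Illustrative sketch: the (6,3) block code of Table I.
    k = 3; n = 6;
    P = [0 1 1; 1 0 1; 1 1 0];              % parity part inferred from Table I
    G = [eye(k) P];                          % generator matrix, equation (1)
    H = [P' eye(n-k)];                       % parity check matrix, equation (2)

    msgs  = dec2bin(0:2^k-1, k) - '0';       % all 3-bit messages
    words = mod(msgs * G, 2);                % the eight code words of Table I

    R = [0 1 1 1 1 0];                       % received word from the text
    S = mod(R * H', 2);                      % syndrome, equations (3)-(5) -> [1 0 1]
    [~, errPos] = ismember(S, H', 'rows');   % matches row 2 of H', i.e. an error in bit 2
    C = R;  C(errPos) = ~C(errPos);          % corrected code word -> 0 0 1 1 1 0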
B. Convolution Codes (CC)

CC are applied in many data processing areas for secure and reliable transmission. The difference from BC is that in BC the output depends only on the current input bits, with no memory, while in CC the output at any time depends on both the present and the past input bits. Applying CC, one needs a memory unit beside the normal registers, which helps to store bits for a while for future use. A CC with parameters (n, k, m) indicates that the coding procedure derives n outputs from k inputs using m previous inputs [5].

III. (2, 1, 3) CONVOLUTIONAL ENCODER

A binary CC can be generated by passing the information through a linear finite-state shift register and is denoted by (n, k, m). Fig. 2 shows the encoding of the word 101 with the (2, 1, 3) convolutional code; the resulting code word is 11 10 00 10 11. Commonly used parameter values range from 1 to 8 for n and from 2 to 10 for k. The convolutional encoder (rate 1/2, k = 3) is shown in Fig. 3.

Fig. 2. (2, 1, 3) Convolutional Codes

Fig. 3. (2, 1, 3) Binary Convolutional Encoder

The code designers also denote the code by (n, k, l), where the quantity l is the constraint length of the designed code and is defined from the relation l = k(m − 1). The predesigned l gives the number of bits in the encoder memory that take part in the formation of the n output bits during the coding process [7]. Feeding the information sequence u = (u_0, u_1, u_2, ...) through the encoder one bit per time slot gives two output sequences

v^(1) = (v_0^(1), v_1^(1), v_2^(1), ...)
v^(2) = (v_0^(2), v_1^(2), v_2^(2), ...)

This process results from the convolution of the input sequence u with the two encoder impulse responses. By impulse responses we mean the sequences obtained when the input sequence is u = (1 0 0 ...). Because of the memory, the impulse responses last m + 1 time units and are written as:

g^(1) = (g_0^(1), g_1^(1), g_2^(1), ..., g_m^(1))
g^(2) = (g_0^(2), g_1^(2), g_2^(2), ..., g_m^(2))

The obtained generator sequences of the binary (2, 1, 3) code are:

g^(1) = (1 0 1)
g^(2) = (1 1 1)

The resulting encoder equations can now be written as:

v^(1) = u ⊛ g^(1)
v^(2) = u ⊛ g^(2)

where ⊛ denotes discrete (modulo-2) convolution.
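A brief MATLAB sketch (illustrative, assuming the Communications Toolbox; not code from the paper) reproduces the code word 11 10 00 10 11 quoted in Section III, both with convenc and directly as the two discrete convolutions. Note that the quoted code word corresponds to taking the (1 1 1) branch as the first output bit of each pair.

    % Illustrative sketch: the (2,1,3) encoder with generators 111 and 101 (octal 7 and 5).
    trellis = poly2trellis(3, [7 5]);        % constraint length 3
    u = [1 0 1 0 0];                         % message 101 plus two zeros to flush the memory
    v = convenc(u, trellis);                 % -> 1 1 1 0 0 0 1 0 1 1, i.e. 11 10 00 10 11

    % The same code word as two discrete convolutions modulo 2, as in v = u (*) g:
    ga = [1 1 1];  gb = [1 0 1];  msg = [1 0 1];
    va = mod(conv(msg, ga), 2);              % 1 1 0 1 1
    vb = mod(conv(msg, gb), 2);              % 1 0 0 0 1
    vv = reshape([va; vb], 1, []);           % interleave the two streams -> same 10 bits
    isequal(v, vv)                           % true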
IV. ESTIMATION OF CONVOLUTIONAL CODE PARAMETERS

In this section the principle of the blind identification method is discussed for the case of a noiseless intercepted sequence. The convolutional code parameters estimated in this work are:

1. The code bit length, whose estimation decreases the computation (see the rank-test sketch after this list).
2. The code synchronization (d).
3. The code input bit length (k).
4. The code generator polynomial.
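The abstract attributes the estimation to the rank of matrices formed from the intercepted data, but the construction is not spelled out here, so the MATLAB sketch below is only an illustrative reconstruction under that assumption, using the hypothetical helper gf2rank. The stream is reshaped into rows of candidate length L; sufficiently large multiples of the true code length are rank deficient over GF(2), while other lengths stay full rank.

    % Illustrative rank test (noiseless case); the exact matrix construction used
    % by the authors is not detailed in the paper.
    trellis = poly2trellis(3, [7 5]);                  % (2,1,3) code as a test source
    bits = convenc(randi([0 1], 1, 5000), trellis);    % "intercepted" stream, 10000 bits

    for L = 2:8
        rows = floor(numel(bits) / L);
        M = reshape(bits(1:rows*L), L, rows).';        % each row = L consecutive code bits
        fprintf('L = %d   GF(2) rank = %d of %d\n', L, gf2rank(M), L);
    end
    % Large enough multiples of the true code length n = 2 (here L = 6, 8) are rank
    % deficient because each row obeys the code's parity checks.

    function r = gf2rank(M)
    % Rank of a binary matrix over GF(2) by Gaussian elimination (hypothetical helper).
    M = mod(M, 2);  r = 0;
    for c = 1:size(M, 2)
        p = find(M(r+1:end, c), 1);
        if isempty(p), continue; end
        p = p + r;
        M([r+1 p], :) = M([p r+1], :);                 % bring the pivot row up
        hit = find(M(:, c));  hit(hit == r+1) = [];
        if ~isempty(hit)
            M(hit, :) = mod(M(hit, :) + M(r+1, :), 2); % clear column c elsewhere
        end
        r = r + 1;
        if r == size(M, 1), return; end
    end
    end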
The method for blind estimation based on the Euclidean algorithm is applied because it is suitable for rate 1/2 and can also be generalized to rate 1/3 [6]. If the code polynomials c_1(x) and c_2(x) are known, we seek g_1(x) and g_2(x) that hold

g_2(x) c_1(x) + g_1(x) c_2(x) = a(x)    (6)

where a(x) represents the information polynomial. The code synchronization (d) is estimated as well: d = 0 means the received code runs from beginning to end with nothing missing, while d = 1 means one information bit is missed. Then, by iteration, the code generator matrix can be calculated.
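The iteration itself is not listed in the paper, so the following MATLAB sketch is only an illustrative reconstruction in the spirit of the Euclidean-algorithm approach described above, built on the hypothetical helpers gf2gcd, gf2div, gf2divrem and trimpoly: for a rate 1/2 code, c_1(x) = a(x)g_1(x) and c_2(x) = a(x)g_2(x), so the polynomial GCD of the two code polynomials over GF(2) recovers a(x) when g_1 and g_2 are coprime, and the generators follow by division.

    % Illustrative GF(2) polynomial sketch (not the authors' exact iteration).
    % Polynomials are coefficient vectors in descending powers, e.g. [1 0 1] = x^2 + 1.
    g1 = [1 0 1];  g2 = [1 1 1];           % generators of the (2,1,3) code
    a  = [1 1 0 1];                        % an example information polynomial
    c1 = mod(conv(a, g1), 2);              % code polynomial c1(x) = a(x)g1(x)
    c2 = mod(conv(a, g2), 2);              % code polynomial c2(x) = a(x)g2(x)

    ahat  = gf2gcd(c1, c2);                % recovers a(x), since gcd(g1, g2) = 1
    g1hat = gf2div(c1, ahat);              % -> [1 0 1]
    g2hat = gf2div(c2, ahat);              % -> [1 1 1]

    function g = gf2gcd(a, b)
    % Euclidean algorithm for polynomials over GF(2) (hypothetical helper).
    while any(b)
        [~, r] = gf2divrem(a, b);
        a = b;  b = r;
    end
    g = trimpoly(a);
    end

    function q = gf2div(a, b)
    [q, ~] = gf2divrem(a, b);
    end

    function [q, r] = gf2divrem(a, b)
    % Long division of a(x) by b(x) over GF(2), descending powers.
    a = trimpoly(a);  b = trimpoly(b);
    q = zeros(1, max(length(a) - length(b) + 1, 1));
    while length(a) >= length(b) && any(a)
        s = length(a) - length(b);                     % current degree gap
        q(end - s) = 1;
        a = trimpoly(mod(a + [b, zeros(1, s)], 2));    % xor away the leading term
    end
    r = a;
    end

    function p = trimpoly(p)
    % Strip leading zero coefficients; the zero polynomial becomes 0.
    i = find(p, 1);
    if isempty(i), p = 0; else, p = p(i:end); end
    end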

Figure 4 shows that code bit length estimation for rate 1/3 convolutional codes performs better than for rate 1/2: up to an input BER of 3 × 10^-3, the correct identification rate of the code bit length is over 87% for rate 1/3 and under 83% for rate 1/2. Figure 5 shows that generator polynomial estimation for rate 1/3 convolutional codes likewise performs better than for rate 1/2: up to an input Bit Error Rate (BER) of 3 × 10^-3, the correct identification rate of the generator polynomial is over 76% for rate 1/3 and under 68% for rate 1/2.

Fig. 4. Correct identity rate of code bit length versus input BER (rate 1/2 and rate 1/3)

Fig. 5. Correct identity rate of generator polynomial versus input BER (rate 1/2 and rate 1/3)

Table II shows the elapsed time to estimate the code bit length, the code input bit length, and the generator polynomial for the rate 1/n group. Rate 1/2 requires less estimation time than rate 1/3, as shown in the table.

Table II. Comparison of estimation time of different convolutional code rates for the 1/n code rate group

Number of code bits   Rate 1/2 (s)   Rate 1/3 (s)
10000                 2.8888         3.0205
12000                 3.1107         3.2335
14000                 3.0533         3.6800
16000                 3.9118         4.2007
18000                 3.5775         4.4704

V. VITERBI ALGORITHM

The Viterbi algorithm is the most popular algorithm in which the receiver blindly receives a short stream and needs to search out the source stream. The posterior probability of every bit can be found by the sum-product algorithm; the most probable state sequence is found as the shortest path through a weighted graph. The channel is assumed to be a binary symmetric channel, and the received vector is compared against the candidate code words.

A. Implementation procedures

The implementation of the Viterbi algorithm relies on three steps [8]:
1. Calculate the distance between each possible "ideal" pair of logic bits ("00", "01", "10", "11") and the given input pair of bits.
2. Define the encoder state; this means calculating a metric for the survivor path.
3. Store just one bit of decision when one survivor path is selected from the two.
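Step 1 amounts to a Hamming-distance branch metric for hard decisions; a small illustrative MATLAB fragment (not from the paper):

    % Hamming distance from a received hard-decision pair to the four "ideal" pairs.
    rx    = [1 0];                          % received pair of bits
    ideal = [0 0; 0 1; 1 0; 1 1];
    bm    = sum(ideal ~= rx, 2);            % branch metrics -> [1; 2; 0; 1]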
B. Decoding Using MATLAB

The Viterbi decoder in MATLAB decodes input symbols to produce binary output symbols. Decoding the intercepted stream with the estimated parameters can be accomplished in MATLAB using the vitdec function, and the response of this function gives the decoded message. The decoding results for the 1/n code rate group of convolutional codes are shown, as the percentage of errors in the decoded sequence, in Fig. 6. Within the 1/n code rate group, rate 1/3 convolutional codes perform better than rate 1/2 in decoding and show no errors when the BER is less than 3 × 10^-3.

Fig. 6. Correct Identity Rate of Decoding for Rate 1/n (rate 1/2 and rate 1/3 versus input BER)
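A minimal usage sketch of the MATLAB decoding step (illustrative; it assumes the Communications Toolbox and the (2, 1, 3) code discussed above, not the paper's full simulation):

    % Illustrative decode of an intercepted stream with the estimated parameters.
    trellis = poly2trellis(3, [7 5]);                 % estimated constraint length and generators
    msg     = randi([0 1], 1, 1000);                  % stand-in for the unknown source stream
    rx      = bsc(convenc(msg, trellis), 3e-3);       % binary symmetric channel, BER = 3e-3
    tblen   = 15;                                     % traceback depth (~5x constraint length)
    dec     = vitdec(rx, trellis, tblen, 'cont', 'hard');
    % 'cont' mode delays the output by tblen bits:
    nErr    = sum(dec(tblen+1:end) ~= msg(1:end-tblen));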
VI. CONCLUSIONS AND DISCUSSION

Comparing the estimation results for rate 1/n convolutional codes shows that rate 1/3 parameter estimation takes more time than rate 1/2, because the same algorithm is used and the intercepted stream is divided and treated as two rate 1/2 convolutional code streams. The parameter estimation of convolutional codes for the 1/n code rate group performs well when the input BER is less than 3 × 10^-3. The obtained results show that this algorithm can blindly estimate convolutional code parameters successfully and so solve the problem of information interception; it is therefore recommended for use in the information interception field. Continuing the work done in this research, the following directions can be useful in the future.

REFERENCES

[1] Md. Noor-A-Rahim, M. O. Khyam, Yong Liang Guan, G. G. Md. Nawaz Ali, Khoa D. Nguyen, Gottfried, "Delay-Universal Channel Coding with Feedback," IEEE Access, vol. 6, 2018, pp. 37918-3793.
[2] F. Wang, Z. Huang, and Y. Zhou, "A method for blind recognition of convolution code based on Euclidean algorithm," in Proc. International Conference on Wireless Communications, Networking and Mobile Computing, 2007, pp. 1414-1417.
[3] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, July and October 1948.
[4] T. Venugopal and S. Radhika, "Survey in Channel Coding in Wireless Networks," 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2020, pp. 0784-0789, doi: 10.1109/ICCSP48568.2020.9182213.
[5] Nathaly Orozco Garzon, Henry Carvajal Mora, and Celso de Almeida, "Performance Evaluation of Encoded Opportunistic Transmission Schemes," IEEE Access, vol. 7, 2019, pp. 89316-89329.
[6] Sina Vafi, "Cyclic Low Density Parity Check Codes with the Optimum Burst Error Correction," IEEE Access, vol. 8, 2020, pp. 192065-192072.
[7] Hirata, L. H. C., "New Rate-Compatible Punctured Convolutional Codes for Viterbi Decoding," IEEE Trans. on Communications, vol. 42, no. 12, pp. 3073-3079, 1994.
[8] ZHOU Ya-jian and LIU Jian, "A Blind Recognition of the (n, n-1, m) Convolution Code," Journal of Beijing University of Posts and Telecommunications, vol. 33, Jun. 2010.
