Convolutional Code

Convolutional codes, tree diagram, trellis diagram

Uploaded by

Dhivya Lakshmi
Copyright
© Attribution Non-Commercial (BY-NC)

Convolutional Coding and Decoding

Zhong Gu, Dec. 4, 2000. CprE 537, Fall 2000. Instructor: Dr. Russell Steve

Why Use Channel Coding?


The channel is not ideal: propagation introduces errors into the signal through path loss and path fading. Channel coding is introduced to overcome these problems.

Structure of Convolutional Code


At every instant, k bits are shifted into the register and k bits are shifted out, while n encoded bits are output. K is called the constraint length of the convolutional code, and R = k/n is the coding rate. Each output bit is produced by a modulo-2 adder.

Figure 1. Convolutional Encoder

A simple convolutional code


Here the adder vector of output bit 0 is [1 0 0], the adder vector of output bit 1 is [1 0 1], and the adder vector of output bit 2 is [1 1 1].

Figure 2. K=3, k=1, n=3 convolutional encoder
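The encoder of Figure 2 can be sketched in a few lines of code. This is a minimal illustration, not the original simulation code; it assumes the newest input bit occupies the leftmost register position, and the function name `conv_encode` is hypothetical.

```python
# Sketch of the K=3, k=1, n=3 encoder from Figure 2 (rate 1/3).
# Each adder vector selects which register taps feed a modulo-2 adder.
GENERATORS = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]  # output bits 0, 1, 2

def conv_encode(bits, generators=GENERATORS):
    """Encode a bit list; the register starts at, and is flushed back to, all zeros."""
    K = len(generators[0])             # constraint length
    reg = [0] * K
    out = []
    for b in bits + [0] * (K - 1):     # append K-1 zeros to flush the register
        reg = [b] + reg[:-1]           # shift the new bit in on the left
        for g in generators:
            out.append(sum(x & y for x, y in zip(g, reg)) % 2)
    return out

print(conv_encode([1, 0, 1]))  # -> [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1]
```

Each input bit (plus the two flushing zeros) produces one 3-bit output symbol, so 3 information bits yield 15 coded bits.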

Tree diagram
If the input is a 0, the upper branch is followed; otherwise, the lower one is followed, so it is easy to find the output code for a given input sequence. The output bit sequence is determined by the current input bit and the two previous bits; that is, the current output is determined by the input bit and the four possible states of the register, as shown in Figure 3.

Figure 3. Tree Diagram for K=3, k=1 encoder

Trellis
The trellis is a more compact representation of the convolutional code and can be generated by merging the nodes with the same label. A solid line denotes an output generated by input bit 0, and a dotted line an output generated by input bit 1. Every node has two entering paths and two leaving paths; the input bit determines which leaving path is followed.

Figure 4. Trellis diagram for the rate 1/3, K=3 convolutional code

State Diagram
The three bits on each transition branch denote the output sequence; dotted lines show transitions triggered by a 1, and solid lines show transitions triggered by a 0.

Figure 5: State diagram of a convolutional encoder

Maximum likelihood decoding


When the encoded information is transmitted over the channel, it is distorted. The convolutional decoder regenerates the information by estimating the most likely path of state transitions in the trellis. Maximum likelihood decoding means the decoder searches all possible paths in the trellis and compares the metric between each path and the input sequence; the path with the minimum metric is selected as the output. The maximum likelihood decoder is therefore the optimum decoder.

In general, a convolutional code CC(n, k, K) has 2^((K-1)k) possible states. At each sampling instant there are 2^k merging paths for each node, and the one with the minimum distance is selected and called the surviving path, so 2^((K-1)k) surviving paths are stored in memory at each instant. When the whole input sequence has been processed, the decoder selects the path with the minimum metric and outputs it as the decoding result. In real systems the input sequence is very long; it has been shown that a trellis depth L > 5K is long enough, so the decoder can output only the oldest message bit within the Viterbi trellis window.
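The state and path counts quoted above are easy to sanity-check. The helper below is a hypothetical illustration of those formulas, not part of the original simulation.

```python
# Path counts for a convolutional code CC(n, k, K), assuming binary inputs:
# 2^((K-1)k) trellis states, and 2^k paths merging at every node.
def trellis_counts(n, k, K):
    states = 2 ** ((K - 1) * k)
    merging = 2 ** k
    return states, merging

# The K=3, k=1, n=3 example code: 4 states, 2 merging paths per node.
print(trellis_counts(3, 1, 3))  # -> (4, 2)
```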

Viterbi Algorithm
1. Calculate branch metrics. The branch metric at time instant j for a path is defined as the log of the joint probability of the received n-bit symbol conditioned on the estimated transmitted n-bit symbol:

m_j = ln P(r_j | c_j) = sum_{i=1..n} ln P(r_ji | c_ji)

2.Calculate path metrics


The path metric for a path at time instant J is the sum of the branch metrics along the path:

M = sum_{j=1..J} m_j

3. Information sequence update. At each instant there are 2^k merging paths for every node. The decoder selects the one with the largest metric as the survivor:

max(M_1, M_2, ..., M_{2^k})

4. Output the decoded sequence. At instant J, the (J-L)th information symbol on the path with the largest metric is output from memory.

Use Viterbi algorithm to decode


Let m(s,t) represent the metric of state s at time t.

1. Initially, all state metrics are zero, i.e. m(0,0) = m(1,0) = m(2,0) = m(3,0) = 0.

2. For every state there are two entering branches, called the upper branch and the lower branch. The variables M_upper(s, t) and M_lower(s, t) stand for the Hamming distance between the current input bits and the expected bits that would cause the transition to state s at time t.

3. Compare M_upper(s, t) + m(s*, t-1) and M_lower(s, t) + m(s*, t-1), where s is the state at time t and s* is the previous state at time (t-1) for the given branch. Choose the branch with the smaller value as the surviving branch entering the state at time t, and let m(s, t) equal this value. That is, if M_upper(s, t) + m(s*, t-1) < M_lower(s, t) + m(s*, t-1), the upper branch is the surviving branch and m(s, t) = M_upper(s, t) + m(s*, t-1); otherwise, the lower branch is the surviving branch and m(s, t) = M_lower(s, t) + m(s*, t-1). Repeat the above steps until all the input data have been processed, at time T.

4. Compare m(0, T), m(1, T), m(2, T), m(3, T), choose the minimum, and trace back the path from that state.

5. The output data are generated corresponding to this path.
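The steps above can be sketched as a hard-decision Viterbi decoder for the K=3, k=1, n=3 code of Figure 2 (generators [1 0 0], [1 0 1], [1 1 1]). This is a minimal illustration, assuming the same register convention as the encoder sketch earlier (newest bit leftmost, state = the two previous bits); the function names are hypothetical.

```python
# Hard-decision Viterbi decoding for the rate-1/3, K=3 example code.
GENERATORS = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]

def expected_output(bit, state):
    """Encoder output symbol when `bit` enters a register in state (s1, s2)."""
    reg = [bit, state[0], state[1]]
    return [sum(x & y for x, y in zip(g, reg)) % 2 for g in GENERATORS]

def viterbi_decode(received, n_info):
    """Decode a hard-bit list; returns the first n_info information bits."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    # Step 1: start in the all-zero state; other states are unreachable.
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    n = len(GENERATORS)
    for t in range(len(received) // n):
        r = received[t * n:(t + 1) * n]
        new_metric, new_paths = {}, {}
        for s in states:                  # next state s = (input bit, previous s1)
            bit, s1 = s
            best = None
            for s2 in (0, 1):             # the two branches entering s (step 2)
                prev = (s1, s2)
                d = metric[prev] + sum(a != b for a, b in
                                       zip(r, expected_output(bit, prev)))
                if best is None or d < best[0]:   # keep the smaller value (step 3)
                    best = (d, prev)
            new_metric[s] = best[0]
            new_paths[s] = paths[best[1]] + [bit]
        metric, paths = new_metric, new_paths
    # Steps 4-5: pick the minimum final metric and trace back its path,
    # dropping the K-1 flushing bits.
    final = min(states, key=lambda s: metric[s])
    return paths[final][:n_info]

# The codeword for information bits 1 0 1 (with two flushing zeros).
print(viterbi_decode([1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1], 3))  # -> [1, 0, 1]
```

For clarity this sketch stores full paths per state rather than traceback pointers, and it decodes the whole block at once instead of using the L > 5K sliding window described above.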

A decoding example

Simulation with SNR = -9.54 dB


[Plot panels: the original signal; the transmitted signal after encoding, with noise; the decoded signal; the error]

Simulation with SNR = 0 dB, K = 6


[Plot panels: the original signal; the transmitted signal after encoding with noise; the decoded signal; the error]

Relationship between SNR and Error Rate


[Plot: error rate versus SNR, with SNR from 0 dB down to -50 dB; the error rate rises from near 0 to about 0.5 as the SNR decreases]

Conclusion
Convolutional encoding can be used to improve the performance of wireless systems, and the Viterbi algorithm is an optimum decoder. Using convolutional coding, the information can be extracted without error from a noisy channel if the SNR is high enough. When the SNR drops below some value, errors appear, and the error rate increases as the SNR decreases; below a certain SNR, the decoder can no longer extract the information. Questions?
