Convolutional Codes

This document covers the fundamentals of convolutional codes, including encoding techniques, generator matrices, and graphical representations such as state, tree, and trellis diagrams. It discusses the differences between convolutional codes and other types of coding, as well as maximum likelihood decoding and the Viterbi algorithm. Key concepts such as rate, constraint length, and the importance of memory in convolutional encoders are also explained.

Information Theory & Coding

Unit 5
Convolutional codes: Encoding convolutional codes, Generator matrices for convolutional codes, Generator polynomials for convolutional codes, Graphical representation of convolutional codes, Viterbi decoder.


Syllabus
• Convolutional Codes
• Encoding
• Time and frequency domain approach
• State, tree, and trellis diagrams
• Transfer function and minimum free distance
• Maximum likelihood decoding
• Viterbi Algorithm
• Sequential decoding
Convolutional coding
• Type of Channel Coding
Convolutional coding
• What is the difference between block codes, cyclic codes, and convolutional codes?
• Block codes and cyclic codes: to get high redundancy, more memory locations are needed, so the digital communication system becomes complex.
• Convolutional coding: high redundancy can be achieved using fewer memory locations, so the complexity is lower.
Convolutional coding
• (n,k,m ) Convolutional Encoder
Convolutional codes
[Figure: an (n,k,m) convolutional encoder with k inputs, n outputs, and m bits in memory.]
CONVOLUTIONAL CODES
• An (n,k,m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m.
• Convolutional codes differ from block codes in that:
– The encoder contains memory.
– At any given time, the n outputs depend on the k inputs at that time and on the m previous input blocks.
• n and k are small integers with k < n.
• m must be large for low error probability.
Example 1: (2,1,3) Convolutional Encoder

• Implemented by a linear feedforward shift register.
• The generator sequences g^(1) and g^(2) are given in the encoder figure.


Encoding Equations
• Encoding equations: $v^{(1)} = u \circledast g^{(1)}$ and $v^{(2)} = u \circledast g^{(2)}$, where $\circledast$ denotes discrete convolution.
• Convolution operation: for any $l \ge 0$,

$$v_l^{(j)} = \sum_{i=0}^{m} u_{l-i}\, g_i^{(j)} \pmod{2}$$

• For Example 1, the codeword is obtained by interleaving the two output sequences:

$$v = \left(v_0^{(1)} v_0^{(2)},\ v_1^{(1)} v_1^{(2)},\ v_2^{(1)} v_2^{(2)},\ \ldots\right)$$
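As a short worked instance (the generator sequences of Example 1 appear only in a figure, so the values $g^{(1)} = (1\,0\,1\,1)$ and $g^{(2)} = (1\,1\,1\,1)$ below are an assumed, typical textbook choice for a (2,1,3) encoder): for the message $u = (1\,1)$,

$$v^{(1)} = u \circledast g^{(1)} = (1\,1\,1\,0\,1), \qquad v^{(2)} = u \circledast g^{(2)} = (1\,0\,0\,0\,1),$$

so the interleaved codeword is $v = (11,\ 10,\ 10,\ 00,\ 11)$.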
Example 1
Generator Matrix of (n,k,m) Convolutional Encoder
Example 1
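The generator matrix of Example 1 is shown only as a figure in the original slides. As a general reference for the rate-1/2 case, the semi-infinite generator matrix interleaves the two generator sequences, with each row shifted one block to the right, so that encoding is $v = u\,G$:

$$G = \begin{bmatrix} g_0^{(1)} g_0^{(2)} & g_1^{(1)} g_1^{(2)} & \cdots & g_m^{(1)} g_m^{(2)} & & \\ & g_0^{(1)} g_0^{(2)} & \cdots & g_{m-1}^{(1)} g_{m-1}^{(2)} & g_m^{(1)} g_m^{(2)} & \\ & & \ddots & & & \ddots \end{bmatrix}$$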
Transform Domain Representation
• Convolution operations in the time domain can be replaced by more convenient polynomial multiplications in the transform domain.
• For a (2,1,m) convolutional encoder:
Transform Domain Representation

In general
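With $D$ denoting the unit delay, the message and generator sequences become polynomials, and each output is a polynomial product:

$$U(D) = u_0 + u_1 D + u_2 D^2 + \cdots, \qquad G^{(j)}(D) = g_0^{(j)} + g_1^{(j)} D + \cdots + g_m^{(j)} D^m,$$

$$V^{(j)}(D) = U(D)\, G^{(j)}(D).$$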
Transform Domain Representation
For Example 1
Example 2: (3,2,1) Convolutional Encoder
Question
• Draw the (3,1,2) convolutional encoder. Assume G(D) = (1 + D, 1 + D², 1 + D + D²). (A software sketch of this encoder follows below.)
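A minimal Python sketch of this encoder (the helper name conv_encode, the test message, and the zero-termination are illustrative choices; coefficients are listed lowest power of D first):

```python
# Sketch of a rate-1/n convolutional encoder driven by generator polynomials.
# Each generator is listed low-order coefficient first: g[i] multiplies D^i.

def conv_encode(msg, generators):
    """Encode a bit list with a rate-1/n convolutional code (zero-terminated)."""
    m = max(len(g) for g in generators) - 1          # encoder memory
    padded = msg + [0] * m                           # flush the shift register
    out = []
    for l in range(len(padded)):
        for g in generators:
            # v_l^(j) = sum_i u_{l-i} * g_i  (mod 2)
            v = 0
            for i, gi in enumerate(g):
                if l - i >= 0:
                    v ^= padded[l - i] & gi
            out.append(v)
    return out

G = [[1, 1, 0],   # 1 + D
     [1, 0, 1],   # 1 + D^2
     [1, 1, 1]]   # 1 + D + D^2

print(conv_encode([1, 0, 1], G))
# -> [1,1,1, 1,0,1, 1,0,0, 1,0,1, 0,1,1]
```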
Rate and Constraint Length of Convolutional Code
• Rate of a convolutional encoder = k/n.
• Constraint length (K) is defined as the number of shifts over which a single message bit can influence the encoder output.
• That is, K shifts are required for a message bit to enter the shift register and finally come out.
• K = m + 1. For Example 1, m = 3, so K = 4 and the rate is 1/2.
Question
• Draw the rate-½, constraint-length-4 convolutional encoder (Example 1).
State Diagram
• Preparation of State Transition Table
State Diagram Representation: (2,1,3) Convolutional Encoder
Example 1
Question
Consider a (2,1,2) convolutional code with the given generator sequences:
– Draw the encoder diagram.
– Draw the state diagram.
Code Tree or Tree Diagram
• The tree diagram adds the dimension of time to the state diagram.
• The code tree can be explained with the help of the following example:
• Draw the convolutional encoder with constraint length 3 and rate ½ with g^(1) = (1 1 1) and g^(2) = (1 0 1). Get the state transition table.
Draw Convolutional Encoder with constraint length 3 and rate ½ with g^(1) = (1 1 1) and g^(2) = (1 0 1). Example 2
Convolutional Encoder with constraint length 3 and rate ½. Example 2
Input bit | Present State (m0 m1) | Next State (m0 m1) | Output (V(1) V(2))
    0     |         0 0           |      0 0  (s0)     |        0 0
    1     |         0 0           |      1 0  (s1)     |        1 1
    0     |         0 1           |      0 0  (s0)     |        1 1
    1     |         0 1           |      1 0  (s1)     |        0 0
    0     |         1 0           |      0 1  (s2)     |        1 0
    1     |         1 0           |      1 1  (s3)     |        0 1
    0     |         1 1           |      0 1  (s2)     |        0 1
    1     |         1 1           |      1 1  (s3)     |        1 0
Code Tree or Tree Diagram

Message: 11011
Code Word:
11 01 01 00 01
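As a cross-check, a minimal Python sketch of the Example 2 encoder (the function name and register layout are illustrative choices) reproduces both the table above and this codeword:

```python
# Sketch of the Example 2 encoder: constraint length 3, rate 1/2,
# g(1) = (1 1 1), g(2) = (1 0 1). The shift register holds (m0, m1).

def encode(msg):
    m0 = m1 = 0                    # start in the all-zero state s0
    out = []
    for u in msg:
        v1 = u ^ m0 ^ m1           # g(1) = (1 1 1): v1 = u + m0 + m1 (mod 2)
        v2 = u ^ m1                # g(2) = (1 0 1): v2 = u + m1 (mod 2)
        out.append((v1, v2))
        m0, m1 = u, m0             # shift the register
    return out

print(encode([1, 1, 0, 1, 1]))     # [(1,1),(0,1),(0,1),(0,0),(0,1)] = 11 01 01 00 01
print(encode([1, 0, 0, 1, 1]))     # [(1,1),(1,0),(1,1),(1,1),(0,1)] = 11 10 11 11 01
```

The second call reproduces the output sequence quoted for the trellis example below.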
Code Tree or Tree Diagram
• Input 0 specifies the upper branch and input 1 specifies the lower branch.
• Output binary symbols are indicated on the branches.
• A specific path in the tree is traced from left to right in accordance with the input message sequence.
• After the first m+1 branches, the tree becomes repetitive.
TRELLIS DIAGRAM
• The trellis diagram is the extension of the state diagram in time.
Trellis Diagram
• The trellis contains L+K levels, where L is the length of the incoming message sequence and K is the constraint length of the code.
• The levels of the trellis are labeled j = 0, 1, …, L+K−1.
• Input 0 is drawn as a solid line; input 1 is drawn as a dashed line.
• For the message 10011, the output sequence is 11 10 11 11 01 (matching the encoder sketch above).
Maximum Likelihood Decoding
• Consider a codeword c generated from message u.
• At the receiver side, r is received. From r, both c and u are to be estimated.
• Take p(r|c), the conditional probability of receiving r given that c was sent.
• The log-likelihood function equals log p(r|c).
• The maximum likelihood decoder chooses the estimate of c for which the log-likelihood function log p(r|c) is maximum.
Maximum Likelihood Decoding
For a BSC with crossover probability p, if the received vector r differs from the codeword c in d(r, c) of its N positions, then

$$\log p(r \mid c) = d(r, c)\,\log\frac{p}{1-p} + N \log(1 - p)$$

where d(r, c) is the Hamming distance between r and c.
Maximum Likelihood Decoding
• N log(1−p) is constant for all c, and log[p/(1−p)] < 0 for p < 1/2.
• The maximum likelihood decoding rule for the BSC can therefore be restated as: choose the estimate of c that minimizes the Hamming distance between the received vector r and the transmitted vector c.
Viterbi Algorithm For BSC
 It is a maximum likelihood decoding algorithm.
 For the BSC, the decoding rule is based on minimum Hamming distance.
Step 1: Starting at level j = 0, compute the partial Hamming distance between the received word and the codeword for the single path entering each state. Store the path and its Hamming distance for each state.
Step 2: Increment the level j by 1. Compute the partial Hamming distance for all paths entering a state by adding the branch Hamming distance entering that state to the Hamming distance of the connecting state at the preceding time unit. For each state, store the path with the smallest Hamming distance (the survivor), together with its Hamming distance, and eliminate all other paths.
Step 3: If j < L + m, repeat Step 2. Otherwise, stop.
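A minimal Python sketch of these steps for the (2,1,2) Example 2 code, using hard decisions on a BSC; the function names, the unterminated-trellis assumption (the slides' examples do not flush the encoder), and the test input are illustrative choices:

```python
# Sketch of hard-decision Viterbi decoding for the Example 2 code:
# (2,1,2), g(1) = (1 1 1), g(2) = (1 0 1). State = (m0, m1).

def branch(u, state):
    """Output bits and next state for input bit u leaving a given state."""
    m0, m1 = state
    return (u ^ m0 ^ m1, u ^ m1), (u, m0)

def viterbi(received):
    """received: list of (r1, r2) pairs. Returns the ML message estimate."""
    metrics = {(0, 0): (0, [])}            # Step 1: start at s0, distance 0
    for r in received:                     # Step 2: extend level by level
        new = {}
        for state, (dist, path) in metrics.items():
            for u in (0, 1):
                out, nxt = branch(u, state)
                d = dist + (out[0] ^ r[0]) + (out[1] ^ r[1])  # branch distance
                if nxt not in new or d < new[nxt][0]:
                    new[nxt] = (d, path + [u])                # keep the survivor
        metrics = new
    return min(metrics.values())[1]        # Step 3: best surviving path

# Codeword for message 10011 is 11 10 11 11 01; flip one bit and decode:
rx = [(1, 1), (1, 1), (1, 1), (1, 1), (0, 1)]   # error in the second pair
print(viterbi(rx))                               # -> [1, 0, 0, 1, 1]
```

With a single channel error, the decoder recovers the transmitted message 10011, since every competing path lies at a larger Hamming distance from the received sequence.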
