Ec6501 DC Ece Notes

The document discusses digital modulation schemes and error control coding. It begins by describing geometric representations of signals and defining basis vectors, signal space, and orthonormal basis. It then explains binary phase-shift keying (BPSK) modulation including generation and detection. Frequency-shift keying (FSK) and quadrature phase-shift keying (QPSK) are also introduced. Finally, the document defines block codes, describing how they operate on blocks of bits and are referred to as (n,k) codes, where n is the total number of bits and k is the number of information bits. Key coding concepts like Hamming distance and weight are explained.


UNIT-IV DIGITAL MODULATION SCHEME

Geometric representation of Signals:

Derive the geometrical representation of signals.

Basis Vectors

The set of basis vectors {e1, e2, …, en} of a space is chosen such that:
 The set should be complete, i.e. span the vector space: any vector a can be expressed as a linear combination of these vectors, a = Σ ai ei.
 Each basis vector should be orthogonal to all others: ei · ej = 0 for i ≠ j.
 Each basis vector should be normalized: |ei| = 1.
 A set of basis vectors satisfying these properties is also said to be a complete orthonormal basis.
 In an n-dimensional space, we can have at most n basis vectors.

Signal Space

Basic Idea: If a signal can be represented by an n-tuple, then it can be treated in much the
same way as an n-dimensional vector.
Let φ1(t), φ2(t), …, φn(t) be n signals. Consider a signal x(t) and suppose that it can be written
as x(t) = Σk xk φk(t). If every signal can be written in this form, the φk(t) are basis functions
and we have an n-dimensional signal space.

Orthonormal Basis

The signal set {φk(t)}, k = 1, …, n, is an orthogonal set if ∫ φj(t) φk(t) dt = 0 for j ≠ k and
∫ φj(t) φj(t) dt = cj for each j.

If cj = 1 for all j, then {φk(t)} is an orthonormal set.

Consider a set of M signals (M-ary symbols) {si(t), i = 1, 2, …, M} with finite energy. That is,

Ei = ∫0T si²(t) dt < ∞ for every i.

Then we can express each of these waveforms as a weighted linear combination of
orthonormal signals:

si(t) = Σ(j=1..N) sij φj(t),  0 ≤ t < T,

where N ≤ M is the dimension of the signal space and the φj(t) are called the orthonormal basis
functions.

For a convenient set of {φj(t)}, j = 1, 2, …, N and 0 ≤ t < T, the coefficients are

sij = ∫0T si(t) φj(t) dt.

Now, we can represent a signal si(t) as a column vector whose elements are the scalar
coefficients sij, j = 1, 2, …, N:

si = [si1, si2, …, siN]T.

These M energy signals or vectors can be viewed as a set of M points in an N-dimensional
Euclidean space, known as the 'Signal Space'. The signal constellation is the collection of the M
signal points (or messages) in the signal space.
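As a quick numerical illustration of this projection, the following Python sketch (with two assumed orthonormal basis functions chosen only for demonstration) recovers the signal-space coordinates sij by numerically integrating si(t)·φj(t):

import numpy as np

# Illustrative sketch: project a waveform onto two orthonormal basis functions.
# The basis functions and the test signal below are assumptions for demonstration.
T = 1.0                          # symbol duration
t = np.linspace(0.0, T, 1000, endpoint=False)
dt = t[1] - t[0]

phi1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * 5 * t / T)   # orthonormal basis 1
phi2 = np.sqrt(2.0 / T) * np.sin(2 * np.pi * 5 * t / T)   # orthonormal basis 2

s = 3.0 * phi1 - 2.0 * phi2      # example signal s_i(t)

# Coefficients s_ij = integral of s_i(t) * phi_j(t) dt (numerical integration)
s1 = np.sum(s * phi1) * dt
s2 = np.sum(s * phi2) * dt
print("signal-space vector:", (round(s1, 3), round(s2, 3)))   # approximately (3.0, -2.0)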
GENERATION AND COHERENT DETECTION OF BPSK SIGNALS

(i) Generation
To generate the BPSK signal, we build on the fact that the BPSK signal is a special case of
DSB-SC modulation. Specifically, we use a transmitter consisting of the following two
components.

(i) Non-return-to-zero level encoder, whereby the input binary data sequence is encoded in
polar form, with symbols 1 and 0 represented by the constant-amplitude levels +√Eb and −√Eb,
respectively.

(ii) Product modulator, which multiplies the level-encoded binary wave by the sinusoidal
carrier φ1(t) = √(2/Tb) cos(2πfc t) to produce the BPSK signal. The timing pulses used to generate
the level-encoded binary wave and the sinusoidal carrier wave are usually, but not necessarily,
extracted from a common master clock.
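A minimal simulation sketch of this generation process, assuming illustrative values Eb = 1, Tb = 1 and fc = 4/Tb so that an integer number of carrier cycles fits in each bit interval:

import numpy as np

# Sketch of BPSK generation (assumed parameters, not taken from the text).
Eb, Tb, fc = 1.0, 1.0, 4.0
samples_per_bit = 100
bits = np.array([1, 0, 1, 1, 0])

t = np.arange(len(bits) * samples_per_bit) * (Tb / samples_per_bit)
# NRZ level encoder: 1 -> +sqrt(Eb), 0 -> -sqrt(Eb)
levels = np.repeat(np.where(bits == 1, np.sqrt(Eb), -np.sqrt(Eb)), samples_per_bit)
carrier = np.sqrt(2.0 / Tb) * np.cos(2 * np.pi * fc * t)
bpsk = levels * carrier          # product modulator output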

(ii) Detection
To detect the original binary sequence of 1s and 0s, the BPSK signal at the channel output
is applied to a receiver that consists of four sections
(a) Product modulator, which is also supplied with a locally generated reference signal that
is a replica of the carrier wave
(b) Low-pass filter, designed to remove the double-frequency components of the
product modulator output (i.e., the components centered on 2fc) and pass the zero-
frequency components.
(c) Sampler, which uniformly samples the output of the low-pass filter at t = iTb, i = 1, 2, 3, …;
the local clock governing the operation of the sampler is synchronized with the clock responsible
for bit-timing in the transmitter.
(d) Decision-making device, which compares the sampled value of the low-pass filter's
output to an externally supplied threshold, every Tb seconds. If the threshold is exceeded,
the device decides in favor of symbol 1; otherwise, it decides in favor of symbol 0.
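A corresponding coherent-detection sketch (noiseless channel assumed; the per-bit integration plays the combined role of the low-pass filter and sampler):

import numpy as np

# Coherent BPSK detection sketch (assumed fc = 4/Tb, noiseless channel).
Eb, Tb, fc, spb = 1.0, 1.0, 4.0, 100
bits = np.array([1, 0, 1, 1, 0])
t = np.arange(len(bits) * spb) * (Tb / spb)
carrier = np.sqrt(2.0 / Tb) * np.cos(2 * np.pi * fc * t)
levels = np.repeat(np.where(bits == 1, np.sqrt(Eb), -np.sqrt(Eb)), spb)
rx = levels * carrier                      # received BPSK signal (no noise)

dt = Tb / spb
product = rx * carrier                     # product modulator with local carrier replica
decisions = []
for i in range(len(bits)):
    # integrate over one bit: acts as low-pass filter + sampler at t = (i+1)Tb
    x = np.sum(product[i * spb:(i + 1) * spb]) * dt
    decisions.append(1 if x > 0 else 0)    # decision device, threshold = 0
print(decisions)                           # recovers [1, 0, 1, 1, 0]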
Generation and Detection:-
FSK Transmitter

FSK receiver

A binary FSK transmitter is as shown: the incoming binary data sequence is applied to an
on-off level encoder. The output of the encoder is √Eb volts for symbol 1 and 0 volts for symbol 0.
When we have symbol 1, the upper channel is switched on with oscillator frequency f1;
for symbol 0, because of the inverter, the lower channel is switched on with oscillator
frequency f2. These two frequencies are combined using an adder circuit and then
transmitted. The transmitted signal is the required BFSK signal. The detector consists
of two correlators. The incoming noisy BFSK signal x(t) is common to both correlators. The
coherent reference signals φ1(t) and φ2(t) are supplied to the upper and lower correlators
respectively.
The correlator outputs are then subtracted, one from the other, resulting in a random variable
l (l = x1 − x2). The output l is compared with a threshold of zero volts.

If l > 0, the receiver decides in favour of symbol 1.

If l < 0, the receiver decides in favour of symbol 0.
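A small correlator-receiver sketch for BFSK, assuming tone frequencies f1 = 3/Tb and f2 = 5/Tb so that the two tones are orthogonal over one bit interval (these values are illustrative assumptions):

import numpy as np

# Coherent BFSK transmit/detect sketch, noiseless channel.
Eb, Tb, spb = 1.0, 1.0, 200
f1, f2 = 3.0, 5.0
bits = np.array([1, 0, 0, 1])
dt = Tb / spb
tt = np.arange(spb) * dt

phi1 = np.sqrt(2.0 / Tb) * np.cos(2 * np.pi * f1 * tt)   # reference for symbol 1
phi2 = np.sqrt(2.0 / Tb) * np.cos(2 * np.pi * f2 * tt)   # reference for symbol 0

# Transmitter: switch between the two oscillators bit by bit
tx = np.concatenate([np.sqrt(Eb) * (phi1 if b == 1 else phi2) for b in bits])

decisions = []
for i in range(len(bits)):
    seg = tx[i * spb:(i + 1) * spb]
    x1 = np.sum(seg * phi1) * dt           # upper correlator
    x2 = np.sum(seg * phi2) * dt           # lower correlator
    l = x1 - x2                            # decision variable
    decisions.append(1 if l > 0 else 0)    # threshold of zero volts
print(decisions)                           # [1, 0, 0, 1]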

FSK Bandwidth:
• Limiting factor: Physical capabilities of the carrier
• Less susceptible to noise than ASK

• Applications
– On voice-grade lines, used up to 1200 bps
– Used for high-frequency (3 to 30 MHz) radio transmission
– Used at higher frequencies on LANs that use coaxial cable.

Therefore, the binary FSK system has a two-dimensional signal space with two messages s1(t) and
s2(t) (N = 2, M = 2), represented as follows.
Fig. Signal Space diagram of Coherent binary FSK system.

QUADRATURE PHASE – SHIFT KEYING (QPSK)

In a sense, QPSK is an expanded version of binary PSK, in which a symbol consists of two
bits and two orthonormal basis functions are used. A group of two bits is often called a
"dibit", so four dibits are possible. Each symbol carries the same energy. Let E be the energy per
symbol and T the symbol duration, with T = 2Tb, where Tb is the duration of one bit.

Fig. (a) QPSK Transmitter

Fig. (b) QPSK Receiver


Fig. QPSK Waveform

In QPSK system the information carried by the transmitted signal is contained in the phase.

QPSK Receiver:-

The QPSK receiver consists of a pair of correlators with a common input, supplied with a
locally generated pair of coherent reference signals φ1(t) and φ2(t), as shown in Fig. (b). The
correlator outputs x1 and x2, produced in response to the received signal x(t), are each
compared with a threshold value of zero.

The in-phase channel output: if x1 > 0, a decision is made in favour of symbol 1; if x1 < 0, a
decision is made in favour of symbol 0.

Similarly, for the quadrature channel output: if x2 > 0, a decision is made in favour of symbol 1,
and if x2 < 0, a decision is made in favour of symbol 0. Finally, these two binary sequences at the
in-phase and quadrature channel outputs are combined in a multiplexer (parallel to serial) to
reproduce the original binary sequence.
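A baseband sketch of the QPSK mapping and decision rule (carrier modulation is omitted, and the bit-to-channel assignment below is an assumption for illustration):

import numpy as np

# QPSK sketch using baseband I/Q components instead of an explicit carrier:
# each dibit sets the signs of the in-phase and quadrature amplitudes.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 12)              # even number of bits
E = 1.0                                    # energy per symbol (assumed)

# Serial-to-parallel: even-indexed bits -> I channel, odd-indexed -> Q channel
b_i, b_q = bits[0::2], bits[1::2]
s_i = np.where(b_i == 1, 1, -1) * np.sqrt(E / 2)   # I amplitudes
s_q = np.where(b_q == 1, 1, -1) * np.sqrt(E / 2)   # Q amplitudes
symbols = s_i + 1j * s_q                   # constellation points

# Receiver: compare each channel output with a zero threshold
x1, x2 = symbols.real, symbols.imag
rx_i = (x1 > 0).astype(int)
rx_q = (x2 > 0).astype(int)
rx_bits = np.empty_like(bits)
rx_bits[0::2], rx_bits[1::2] = rx_i, rx_q  # parallel-to-serial multiplexer
print(np.array_equal(rx_bits, bits))       # True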
Probability of error:-

A QPSK system is in fact equivalent to two coherent binary PSK systems working in parallel
and using carriers that are in-phase and quadrature. The in-phase channel output x1 and the Q-
channel output x2 may be viewed as the individual outputs of the two coherent binary PSK
systems. Thus the two binary PSK systems may be characterized as follows.

- The signal energy per bit is E/2

- The noise spectral density is N0/2

The bit errors in the I-channel and Q-channel of the QPSK system are statistically
independent. The I-channel makes a decision on one of the two bits constituting a symbol
(dibit) of the QPSK signal and the Q-channel takes care of the other bit.
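Using the standard result Pb = Q(sqrt(2Eb/N0)) for each channel (so the per-bit error rate of QPSK equals that of coherent BPSK), a short sketch evaluating the bit error probability:

from math import erfc, sqrt

# Bit error probability sketch for coherent BPSK / QPSK on an AWGN channel,
# Pb = Q(sqrt(2*Eb/N0)), using Q(x) = 0.5 * erfc(x / sqrt(2)).
def q_func(x):
    return 0.5 * erfc(x / sqrt(2.0))

for ebno_db in (0, 4, 8, 10):
    ebno = 10 ** (ebno_db / 10.0)
    pb = q_func(sqrt(2.0 * ebno))
    print(f"Eb/N0 = {ebno_db:2d} dB -> Pb = {pb:.3e}")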

QAM(Quadrature Amplitude Modulation):

• QAM is a combination of ASK and PSK


Two different signals are sent simultaneously on the same carrier frequency, giving constellation
sizes such as M = 4, 16, 32, 64, 128, 256.

As an example of QAM, 12 different phases are combined with two different amplitudes.
Since only 4 of the phase angles have 2 different amplitudes, there are a total of 16 combinations.
With 16 signal combinations, each baud carries 4 bits of information (2^4 = 16). QAM combines
ASK and PSK such that each signal corresponds to multiple bits, with more phases than
amplitudes. The minimum bandwidth requirement of QAM is the same as for ASK or PSK.
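A sketch of a square 16-QAM mapper with a Gray-coded level assignment (this is the common square constellation; the 12-phase arrangement described above is a different, circular 16-point constellation):

import numpy as np

# Square 16-QAM mapping sketch: 4 bits per symbol, amplitudes in {-3, -1, 1, 3}
# on each axis. The bit-to-level assignment below is an illustrative assumption.
def qam16_map(bits):
    assert len(bits) % 4 == 0
    level = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray-coded levels
    syms = []
    for i in range(0, len(bits), 4):
        b = tuple(bits[i:i + 4])
        syms.append(level[b[0:2]] + 1j * level[b[2:4]])      # 2 bits -> I, 2 bits -> Q
    return np.array(syms)

print(qam16_map([0, 0, 1, 0, 1, 1, 0, 1]))   # two symbols: (-3+3j) and (1-1j)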
UNIT-V ERROR CONTROL CODING
Block Codes:
Block codes operate on a block of bits. Block codes are referred to as (n, k) codes: a
block of k information bits is coded to become a block of n bits. But before we go any
further with the details, let's look at an important concept in coding called Hamming
distance. Say that we want to code the 10 integers, 0 to 9, by a digital sequence. Sixteen
unique sequences can be obtained from four-bit words. We assign the first ten of these, one to
each integer. Each integer is now identified by its own unique sequence of bits.

Hamming Weight: The Hamming weight of a codeword is the number of 1s it contains. For this
code scheme, the largest weight among the 10 codewords we have chosen is 3.

Concept of Hamming Distance: For continuous variables, we measure distance by Euclidean
concepts such as lengths, angles and vectors. In the binary world, the distance between two
binary words is measured by the Hamming distance. The Hamming distance is the number of
disagreements between two binary sequences of the same size. The Hamming distance between
the sequences 001 and 101 is 1. The Hamming distance between the sequences 0011001 and
1010100 is 4. Hamming distance and weight are very important and useful concepts in coding.
The Hamming distance is used to determine the capability of a code to detect and correct errors.
Although the Hamming weight of our chosen code set is 3, the minimum Hamming distance is 1.
We can generalize this to say that the maximum number of error bits that can be detected is
t = dmin − 1
where dmin is the minimum Hamming distance of the codewords. For a code with dmin = 3, we can
detect both 1-bit and 2-bit errors. So we want a code set with as large a Hamming distance as
possible, since this directly affects our ability to detect errors. The number of errors that we
can correct is given by
t = (dmin − 1)/2, rounded down to the nearest integer.
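A short sketch computing Hamming weight, Hamming distance and the resulting detection/correction bounds for the 4-bit integer code discussed above:

# Hamming weight, Hamming distance, and the bounds
# detectable errors = dmin - 1, correctable errors = floor((dmin - 1) / 2).
def hamming_weight(word):
    return sum(int(b) for b in word)

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

print(hamming_weight("0111"))                    # 3
print(hamming_distance("001", "101"))            # 1
print(hamming_distance("0011001", "1010100"))    # 4

# Minimum distance of the 4-bit code for the integers 0..9 (as in the text)
code = [format(i, "04b") for i in range(10)]
dmin = min(hamming_distance(a, b)
           for i, a in enumerate(code) for b in code[i + 1:])
print(dmin)                                      # 1
print("detect:", dmin - 1, "correct:", (dmin - 1) // 2)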

Creating block codes: The block codes are specified by (n, k). The code takes k information
bits and computes (n − k) parity bits from the code generator matrix. Most block codes are
systematic, in that the information bits remain unchanged, with parity bits attached either to
the front or to the back of the information sequence.
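A sketch of systematic encoding x = mG (mod 2) with G = [Ik : P]; the parity sub-matrix P below is one valid choice assumed for illustration, not necessarily the one used later in the worked examples:

import numpy as np

# Systematic (7,4) block encoding sketch: codeword = [information bits | parity bits].
P = np.array([[1, 1, 0],
              [1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])                    # assumed parity sub-matrix
G = np.hstack([np.eye(4, dtype=int), P])     # G = [I_k | P], k = 4, n = 7

m = np.array([1, 1, 0, 1])                   # information bits
x = m @ G % 2                                # codeword
print(x)                                     # first 4 bits equal m, last 3 are parity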

* Hamming code, a simple linear block code.

* Hamming codes are the most widely used linear block codes.
* A Hamming code is generally specified as (2^m − 1, 2^m − m − 1), where m is the number of parity bits.
* The size of the block is equal to 2^m − 1.
* The number of information bits in the block is equal to 2^m − m − 1 and the number of overhead
(parity) bits is equal to m. All Hamming codes have minimum distance 3, so they can detect up to two errors and correct one.
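A quick check of these parameters for a few values of m:

# Hamming code parameters (n, k) = (2**m - 1, 2**m - m - 1)
for m in (3, 4, 5):
    n = 2 ** m - 1
    k = n - m
    print(f"m = {m}: ({n}, {k}) code, {m} parity bits, rate {k / n:.3f}")
# prints the (7,4), (15,11) and (31,26) codes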

Reed-Solomon Codes: Reed-Solomon (R-S) codes form an important sub-class of the family
of Bose-Chaudhuri-Hocquenghem (BCH) codes and are very powerful linear non-binary
block codes capable of correcting multiple random as well as burst errors. An
important feature is that the generator polynomial and the code symbols are derived from the
same finite field. This helps to reduce the complexity and the number of computations
involved in their implementation. A large number of R-S codes are available with different
code rates.
An R-S code is described by a generator polynomial g(x) and the other usual code
parameters: the number of message symbols per block (k), the number of code symbols
per block (n), the maximum number of erroneous symbols (t) that can surely be corrected per
block of received symbols, and the designed minimum symbol Hamming distance (d). A
parity-check polynomial h(x) of order k also plays a role in designing the code. The symbol
x used in the polynomials is an indeterminate, which usually implies a unit amount of delay.

For positive integers m and t, a primitive (n, k, t) R-S code is defined as follows:
Number of encoded symbols per block: n = 2^m − 1
Number of message symbols per block: k
Code rate: R = k/n
Number of parity symbols per block: n − k = 2t
Minimum symbol Hamming distance per block: d = 2t + 1
It can be noted that the block length n of an (n, k, t) R-S code is bounded by the corresponding
finite field GF(2^m). Moreover, as n − k = 2t, an (n, k, t) R-S code has optimum error-correcting
capability.
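A parameter sketch based on the relations above; the (255, 223) case shown is the widely used t = 16 code over GF(2^8):

# Parameter sketch for a primitive (n, k, t) Reed-Solomon code over GF(2**m).
def rs_params(m, t):
    n = 2 ** m - 1           # code symbols per block
    k = n - 2 * t            # message symbols per block (n - k = 2t parity symbols)
    d = 2 * t + 1            # designed minimum symbol distance
    return n, k, d

n, k, d = rs_params(m=8, t=16)
print(n, k, d, f"rate = {k / n:.3f}")   # 255 223 33  rate = 0.875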

Convolutional codes:

* Convolutional codes are widely used as channel codes in practical communication systems
for error correction.
* The encoded bits depend on the current k input bits and a few past input bits.
* The main decoding strategy for convolutional codes is based on the widely used
Viterbi algorithm.
* Convolutional codes are commonly described using two parameters: the code rate and the
constraint length. The code rate, k/n, is expressed as a ratio of the number of bits into the
convolutional encoder (k) to the number of channel symbols output by the convolutional
encoder (n) in a given encoder cycle.
* The constraint length parameter, K, denotes the "length" of the convolutional encoder, i.e.
how many k-bit stages are available to feed the combinatorial logic that produces the output
symbols. Closely related to K is the parameter m, which can be thought of as the memory
length of the encoder.
A simple convolutional encoder is shown below (Fig. 3.1). The information bits are fed in small
groups of k bits at a time to a shift register. The output encoded bits are obtained by modulo-2
addition (EXCLUSIVE-OR operation) of the input information bits and the contents of the shift
registers, which are a few previous information bits.

Fig. 3.1. A convolutional encoder with k = 1, n = 2 and r = 1/2
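A sketch of such an encoder in Python, assuming generator polynomials g1 = 111 and g2 = 101 (octal 7, 5); the actual connections in Fig. 3.1 may differ:

# Rate-1/2 convolutional encoder sketch with memory 2 (four states).
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                         # shift-register contents
    out = []
    for b in bits:
        window = [b] + state               # current bit + previous two bits
        v1 = sum(x * y for x, y in zip(g1, window)) % 2   # modulo-2 adder 1
        v2 = sum(x * y for x, y in zip(g2, window)) % 2   # modulo-2 adder 2
        out += [v1, v2]
        state = [b, state[0]]              # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))           # 8 encoded bits for 4 input bits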

The operation of a convolutional encoder can be explained in several but equivalent ways
such as, by
a) state diagram representation.
b) tree diagram representation.
c) trellis diagram representation.

a) State Diagram Representation: A convolutional encoder may be defined as a finite state


machine. Contents of the rightmost (K-1) shift register stages define the states of the encoder.
So, the encoder in Fig. 3.1 has four states. The transition of an encoder from one state to
another, as caused by input bits, is depicted in the state diagram. Fig. 3.2 shows the state
diagram of the encoder in Fig. 3.1. A new input bit causes a transition from one state to
another.
Fig 3.2 State diagram representation for the encoder
b) Tree Diagram Representation: The tree diagram representation shows all possible
information and encoded sequences for the convolutional encoder. Fig. 3.3 shows the tree
diagram for the encoder in Fig. 3.1. The encoded bits are labeled on the branches of the tree.
Given an input sequence, the encoded sequence can be directly read from the tree.

Representing convolutional codes compactly: code trellis and state diagram:

STATE DIAGRAM:
Inspecting the state diagram: structural properties of convolutional codes:
• Each new block of k input bits causes a transition into a new state.
• Hence there are 2^k branches leaving each state.
• Assuming the encoder starts in the all-zero state, the encoded word for any input sequence can
thus be obtained. For instance, below for u = (1 1 1 0 1), the encoded word
v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) is produced:
- Encoder state diagram for the (n, k, L) = (2, 1, 2) code

- Note that the number of states is 2^(L+1) = 8.

Distance for some convolutional codes:


Fig.3.3 A tree diagram for the encoder in Fig. 3.1

c) Trellis Diagram Representation: The trellis diagram of a convolutional code is obtained


from its state diagram. All state transitions at each time step are explicitly shown in the
diagram to retain the time dimension, as is present in the corresponding tree diagram.
Usually, supporting descriptions on state transitions, corresponding input and output bits etc.
are labeled in the trellis diagram. It is interesting to note that the trellis diagram, which
describes the operation of the encoder, is very convenient for describing the behavior of the
corresponding decoder, especially when the famous Viterbi Algorithm (VA) is followed.
Fig. 3.4 shows the trellis diagram for the encoder in Figure 3.1.
Fig.3.4. Trellis diagram for the encoder in Fig. 3.1

Hamming Code Example:

• H (7,4)

• Generator matrix G: the first 4-by-4 block is an identity matrix

• Message information vector p

• Transmission vector x

• Received vector r and error vector e


• Parity check matrix H

Error Correction:

• If there is no error, the syndrome vector z is all zeros.

• If there is one error, say at location 2, the new syndrome vector z equals the second column
of H, which identifies the error location.
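A syndrome-decoding sketch for a systematic (7,4) Hamming code (the parity sub-matrix P is the same assumed choice as in the encoding sketch earlier, not necessarily the one intended in this example):

import numpy as np

# Syndrome decoding sketch for a systematic (7,4) Hamming code with
# G = [I_4 | P] and H = [P^T | I_3].
P = np.array([[1, 1, 0],
              [1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

m = np.array([1, 0, 1, 1])
x = m @ G % 2                              # transmitted codeword
e = np.zeros(7, dtype=int); e[1] = 1       # single error at location 2
r = (x + e) % 2                            # received vector

z = r @ H.T % 2                            # syndrome
print("syndrome:", z)                      # non-zero -> error detected
# The syndrome equals the column of H at the error position,
# so we can locate and flip the erroneous bit.
for pos in range(7):
    if np.array_equal(z, H[:, pos]):
        r[pos] ^= 1
        break
print("corrected:", np.array_equal(r, x))  # True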


1. Consider a single-error-correcting (7,4) linear code and the corresponding decoding
table.
2. Find the (7,4) linear systematic block code word corresponding to 1101. Assume a
suitable generator matrix.

Solution:
Let

n = 7, k = 4
q = n − k = 3

Generator matrix G = [Ik : P]

Check bits C = MP, with the parity equations

C1 = m1 ⊕ m2 ⊕ m3
C2 = m2 ⊕ m3 ⊕ m4
C3 = m1 ⊕ m2 ⊕ m4

For M = 1101 (m1 = 1, m2 = 1, m3 = 0, m4 = 1):
C1 = 1 ⊕ 1 ⊕ 0 = 0
C2 = 1 ⊕ 0 ⊕ 1 = 0
C3 = 1 ⊕ 1 ⊕ 1 = 1
C = [0 0 1]
The complete code word is X = {M : C} = {1 1 0 1 0 0 1}.
The parity-check matrix is H = [PT : I(n−k)].

Minimum weight of the code: W(X) = 3

3. Determine the generator polynomial g(p) for a (7,4) cyclic code and find the code
vectors for the following data vectors: 1010, 1111 and 1000.

n=7 k=4
q=n-k=3

To obtain the generator polynomial


p^7 + 1 = (p + 1)(p^3 + p^2 + 1)(p^3 + p + 1)
Let g(p) = p^3 + p + 1.
To obtain the generator matrix in systematic form

To determine the code vector

1. Code vector for M = 1010: X = MG = 1 0 1 0 0 1 1

2. Code vector for M = 1111: X = MG = 1 1 1 1 1 1 1

3. Code vector for M = 1000: X = MG = 1 0 0 0 1 0 1
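A polynomial-division sketch that reproduces these parity bits for g(p) = p^3 + p + 1 (message and codeword bits are written most-significant coefficient first; the systematic parity bits are the remainder of p^3·M(p) divided by g(p)):

# Systematic (7,4) cyclic encoding sketch, all arithmetic modulo 2.
def poly_divmod2(dividend, divisor):
    """Polynomials as bit lists, most significant coefficient first; returns the remainder."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]        # remainder has degree < deg g

g = [1, 0, 1, 1]                            # p^3 + p + 1
for msg in ([1, 0, 1, 0], [1, 1, 1, 1], [1, 0, 0, 0]):
    shifted = msg + [0, 0, 0]               # p^3 * M(p)
    parity = poly_divmod2(shifted, g)
    print(msg, "->", msg + parity)          # systematic codeword: message then parity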

4. Assume a (2,1) convolutional coder with constraint length 6. Draw the tree diagram,
state diagram and trellis diagram for the assumed coder.
Design a block code for a message block of size eight that can correct single errors.
Briefly discuss the various error control codes and explain convolutional codes in detail with
one example.

n = 2, k = 1 and K = 6 (constraint length)

M = K/n = 6/2 = 3, since the constraint length K = n·M
So there are 3 storage elements in the shift register and n = 2 output bits per input bit.
With k = 1, one shift register having 3 storage elements is used. The convolutional code structure
is easy to draw from its parameters: first draw M boxes representing the M memory registers, then
draw n modulo-2 adders to represent the n output bits, and finally connect the memory registers to
the adders using the generator polynomials.
Convolutional codes
k = number of bits shifted into the encoder at one time
 k=1 is usually used!!
 n = number of encoder output bits corresponding to the k information bits
 r = k/n = code rate
 K = constraint length, encoder memory
Each encoded bit is a function of the present input bits and their past ones.

Generator Sequence
Convolutional Codes
An Example – (rate=1/2 with K=2)

Trellis Diagram Representation


Encoding Process

Viterbi Decoding Algorithm

Maximum Likelihood (ML) decoding rule


Viterbi Decoding Algorithm

 An efficient search algorithm

 Performs the ML decoding rule.
 Reduces the computational complexity.

Basic concept
 Generate the code trellis at the decoder
 The decoder penetrates through the code trellis level by level in search
for the transmitted code sequence
 At each level of the trellis, the decoder computes and compares the
metrics of all the partial paths entering a node
 The decoder stores the partial path with the larger metric and eliminates
all the other partial paths. The stored partial path is called the survivor.
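A compact hard-decision Viterbi decoder sketch for the rate-1/2, memory-2 encoder with the assumed generators g1 = 111, g2 = 101 used earlier; it keeps one survivor path per state and, in this example, corrects a single channel error:

# Hard-decision Viterbi decoding sketch (Hamming branch metric, smaller is better).
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state, out = (0, 0), []
    for b in bits:
        w = (b,) + state
        out += [sum(map(lambda x, y: x * y, g1, w)) % 2,
                sum(map(lambda x, y: x * y, g2, w)) % 2]
        state = (b, state[0])
    return out

def viterbi_decode(rx, n_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    # path metric and survivor path for each state; start in state (0, 0)
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    path = {s: [] for s in states}
    for i in range(n_bits):
        r = rx[2 * i:2 * i + 2]
        new_metric, new_path = {}, {}
        for s in states:                          # state s is reached by input bit s[0]
            best_metric, best_prev = float("inf"), (s[1], 0)
            for prev0 in (0, 1):                  # possible previous states
                prev = (s[1], prev0)
                w = (s[0],) + prev                # input bit + previous register contents
                v = [sum(map(lambda x, y: x * y, g, w)) % 2 for g in (g1, g2)]
                branch = (v[0] != r[0]) + (v[1] != r[1])   # Hamming branch metric
                cand = metric[prev] + branch
                if cand < best_metric:
                    best_metric, best_prev = cand, prev
            new_metric[s] = best_metric
            new_path[s] = path[best_prev] + [s[0]]         # keep only the survivor
        metric, path = new_metric, new_path
    return path[min(metric, key=metric.get)]               # best survivor at the last level

msg = [1, 0, 1, 1, 0, 0]                   # trailing zeros flush the encoder to state (0, 0)
rx = conv_encode(msg)
rx[3] ^= 1                                 # introduce one channel error
print(viterbi_decode(rx, len(msg)) == msg) # True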

Viterbi Decoding Process
