Digital Communication UNIT 2
By: Nihal Kumar
In electronics and telecommunications, pulse shaping is the process of changing the waveform of
transmitted pulses. Its purpose is to make the transmitted signal better suited to its purpose or the
communication channel, typically by limiting the effective bandwidth of the transmission. By filtering
the transmitted pulses this way, the intersymbol interference caused by the channel can be kept under
control. In RF communication, pulse shaping is essential for making the signal fit in its frequency band.
Transmitting a signal at high modulation rate through a band-limited channel can create intersymbol interference.
As the modulation rate increases, the signal's bandwidth increases. When the signal's bandwidth becomes larger
than the channel bandwidth, the channel starts to introduce distortion to the signal. This distortion usually
manifests itself as intersymbol interference.
The signal's spectrum is determined by the pulse shaping filter used by the transmitter. Usually the transmitted
symbols are represented as a time sequence of Dirac delta pulses. This theoretical signal is then filtered with the pulse
shaping filter, producing the transmitted signal. The spectrum of the transmission is thus determined by the filter.
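As a concrete illustration of this, the following minimal Python/numpy sketch builds the impulse-train representation of a short symbol sequence and filters it with a pulse shape to obtain the transmitted baseband waveform. The bipolar symbol values, the oversampling factor and the boxcar pulse are illustrative assumptions, not taken from the text.

import numpy as np

# Symbols -> weighted impulse train -> pulse shaping filter -> transmitted waveform.
symbols = np.array([1, -1, 1, 1, -1])     # example bipolar symbols (assumed)
sps = 8                                   # samples per symbol (assumed oversampling factor)

impulse_train = np.zeros(len(symbols) * sps)
impulse_train[::sps] = symbols            # "Dirac delta" train carrying the symbols

pulse = np.ones(sps)                      # boxcar pulse, one symbol period long

# The transmitted waveform is the impulse train filtered by the pulse shape,
# i.e. their convolution; its spectrum is set by the pulse shaping filter.
tx_waveform = np.convolve(impulse_train, pulse)
print(tx_waveform[:3 * sps])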
In many baseband communication systems the pulse shaping filter is implicitly a boxcar filter. Its Fourier transform
is of the form sin(x)/x, and it has significant signal power at frequencies higher than the symbol rate. This is not a big
problem when optical fibre or even twisted-pair cable is used as the communication channel. However, in RF
communications this would waste bandwidth, and only tightly specified frequency bands are used for single
transmissions. In other words, the channel for the signal is band-limited. Therefore better filters have been developed,
which attempt to minimise the bandwidth needed for a certain symbol rate. For the received symbols to remain free of
intersymbol interference, the pulse shaping filter needs to satisfy certain criteria. The Nyquist ISI criterion is a
commonly used criterion for evaluation, because it relates the frequency spectrum of the transmitted signal to
intersymbol interference.
Examples of pulse shaping filters that are commonly found in communication systems are the sinc filter, the
raised-cosine filter and the Gaussian filter, each described below.
Sender side pulse shaping is often combined with a receiver side matched filter to achieve
optimum tolerance for noise in the system. In this case the pulse shaping is equally distributed
between the sender and receiver filters. The filters' amplitude responses are thus pointwise square roots
of the overall system filter's amplitude response.
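The sketch below illustrates this square-root split, assuming the overall (system) response is a raised-cosine characteristic with an illustrative roll-off factor; taking the pointwise square root gives the matched root responses used at the transmitter and receiver.

import numpy as np

beta, T = 0.5, 1.0                        # assumed roll-off factor and symbol period
f = np.linspace(0, 1.0 / T, 500)          # frequency grid (positive frequencies)

# Overall raised-cosine amplitude response H(f).
H = np.zeros_like(f)
f1 = (1 - beta) / (2 * T)
f2 = (1 + beta) / (2 * T)
H[f <= f1] = 1.0
mid = (f > f1) & (f <= f2)
H[mid] = 0.5 * (1 + np.cos(np.pi * T / beta * (f[mid] - f1)))

H_tx = np.sqrt(H)                         # transmitter filter: pointwise square root
H_rx = np.sqrt(H)                         # receiver (matched) filter: the same square root

# The cascade of the two filters reproduces the overall system response.
assert np.allclose(H_tx * H_rx, H)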
Other approaches that eliminate complex pulse shaping filters have been invented. In
OFDM, the carriers are modulated so slowly that each carrier is virtually unaffected by the
bandwidth limitation of the channel.
Sinc filter
It is also called the boxcar filter, as its frequency-domain equivalent is a rectangular shape. Theoretically the best pulse
shaping filter would be the sinc filter, but it cannot be implemented precisely. It is a non-causal filter with relatively
slowly decaying tails. It is also problematic from a synchronisation point of view, as any phase error results in steeply
increasing intersymbol interference.
Raised-cosine filter
The raised-cosine filter is a filter frequently used for pulse-shaping in digital modulation due to its
ability to minimise intersymbol interference (ISI). Its name stems from the fact that the non-zero
portion of the frequency spectrum of its simplest form (β = 1) is a cosine function, raised up to sit
above the f (horizontal) axis. Raised-cosine filters are practical to implement and are in wide use.
Their excess bandwidth is set by the roll-off factor β, so communication systems can choose a
trade-off between a simpler filter and spectral efficiency.
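The sketch below evaluates the standard raised-cosine impulse response (unit symbol period and a roll-off factor of 0.35 are assumed for illustration) and confirms that it vanishes at every non-zero integer multiple of the symbol period, which is what makes it ISI-free at the ideal sampling instants.

import numpy as np

def raised_cosine(t, beta=0.35, T=1.0):
    """Raised-cosine impulse response h(t) for roll-off beta and symbol period T."""
    t = np.asarray(t, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / (1 - (2 * beta * t / T) ** 2)
    # Replace the removable singularity at |t| = T/(2*beta) by its limiting value.
    return np.where(np.isclose(np.abs(t), T / (2 * beta)),
                    (np.pi / 4) * np.sinc(1 / (2 * beta)), h)

t = np.arange(-5, 6)                     # ideal sampling instants (multiples of T)
print(np.round(raised_cosine(t), 6))     # ~1 at t = 0 and ~0 elsewhere: no ISI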
Gaussian filter
In electronics and signal processing, a Gaussian filter is a filter whose impulse response is a Gaussian function (or an
approximation to it). A Gaussian filter has no overshoot in its response to a step input while minimizing the rise and
fall time; this behavior is closely connected to the fact that the Gaussian filter has the minimum possible group delay.
It is considered the ideal time domain filter, just as the sinc is the ideal frequency domain filter [1]. These
properties are important in areas such as oscilloscopes [2] and digital telecommunication systems [3].
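A minimal sketch of a truncated Gaussian pulse-shaping filter follows. The bandwidth-time product, filter span, and the particular Gaussian parameterization are assumptions chosen only to illustrate the impulse-response shape.

import numpy as np

BT = 0.3                                  # assumed 3-dB bandwidth times symbol period
sps = 8                                   # samples per symbol
span = 4                                  # filter length in symbol periods (truncation)

t = np.arange(-span / 2, span / 2, 1.0 / sps)      # time axis in symbol periods
sigma = np.sqrt(np.log(2)) / (2 * np.pi * BT)      # one common Gaussian-filter convention
h = np.exp(-t ** 2 / (2 * sigma ** 2))             # Gaussian impulse response
h /= h.sum()                                       # normalize to unit DC gain
print(np.round(h, 4))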
Disadvantages:
(a) An ideal LPF is not physically realizable.
(b) Note that the impulse response of the ideal LPF decays only as 1/|t| for large |t|, so even a small timing error in the
receiver's sampling instants results in significant intersymbol interference.
SCRAMBLING
The statistics of the input bits can sometimes bring about degradation in a digital
transmission system. For instance, a long sequence of 1s or 0s may cause the bit
synchronizer to lose synchronization momentarily, thereby causing a
long burst of erroneous bits. Another example is when a sequence of periodic
patterns of 1s and 0s creates discrete spectral lines, which in turn may cause
difficulty in bit synchronization, as the bit synchronizer may lock falsely onto one of
them. Scrambling is a method of achieving dc balance, increasing the period of a
periodic input, and eliminating long sequences of 1s and 0s to ensure timing
recovery.
Although line coding is a safer method of achieving these objectives, scrambling
is attractive and often used on channels with extreme bandwidth constraints,
as scrambling requires no bandwidth overhead. A prime example of
such channels is low-bandwidth twisted-pair telephone lines; to this effect,
all the ITU-T standardized voice-band data modems incorporate scrambling.
In fact, full-duplex modems using echo cancellation employ different
scrambler polynomials in the two directions of transmission.
A pseudorandom sequence x(k) is generated by an n-stage linear feedback shift register whose output satisfies
x(k) = h(1)x(k−1) ⊕ h(2)x(k−2) ⊕ ... ⊕ h(n)x(k−n)
where h(1), ..., h(n) are feedback taps, and each may be a zero (i.e., no feedback
connection) or a one (i.e., direct connection of the shift register output to the
modulo-2 summation), and the operation ⊕ denotes modulo-2 addition. The
number of 1s generated in one cycle of the output sequence is one greater than
the number of 0s. The autocorrelation of the pseudorandom sequence has a
peak equal to the sequence length 2^n − 1 at multiples of the sequence length.
At all other shifts, the autocorrelation is −1. The correlation property of a pseudorandom
sequence results in a flat power spectral density as the sequence
length increases (i.e., by increasing the sequence length, the output bits become
less correlated).
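The sketch below generates a short maximal-length sequence with a 4-stage linear feedback shift register and checks the properties stated above: one more 1 than 0 per period, an autocorrelation peak of 2^n − 1 at zero shift and a value of −1 at all other shifts. The tap positions and initial state are illustrative assumptions (one known maximal-length configuration for n = 4).

import numpy as np

def lfsr_sequence(taps, state, length):
    """Generate `length` bits from a Fibonacci LFSR; `taps` are the delays fed back."""
    state = list(state)                   # state[0] is the most recent bit
    out = []
    for _ in range(length):
        bit = 0
        for tap in taps:
            bit ^= state[tap - 1]         # modulo-2 sum of the tapped stages
        out.append(state[-1])             # output the oldest stage
        state = [bit] + state[:-1]        # shift the register
    return np.array(out)

n = 4
seq = lfsr_sequence(taps=(1, 4), state=(1, 0, 0, 0), length=2 ** n - 1)
print(seq, "ones:", int(seq.sum()), "zeros:", int(len(seq) - seq.sum()))

# Periodic autocorrelation of the +/-1 mapped sequence: 2^n - 1 at zero shift, -1 elsewhere.
s = 1 - 2 * seq.astype(int)
print([int(np.sum(s * np.roll(s, k))) for k in range(len(s))])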
Pseudorandom Scrambler
A pseudorandom scrambler at the transmit end adds (modulo-2) a locally generated pseudorandom sequence x(k) to
the input bits, and the descrambler at the receive end adds the same, time-aligned sequence to the received bits. The
scrambler output bit sequence c(k) and descrambler output bit sequence b̂(k) are, respectively, as
follows:
c(k) = b(k) ⊕ x(k)    (8.2)
b̂(k) = ĉ(k) ⊕ x(k)
where b(k) and ĉ(k) are the scrambler input bit sequence and descrambler input
bit sequence, respectively. Note that correct operation depends on the alignment
in time of the two maximal-length sequences of period r in the scrambler
and descrambler. The scrambler must be reset by the frame synchronization;
if this fails, a complete frame is not correctly descrambled, and significant error
propagation thus results. Pseudorandom scrambling is used in high burst-rate
time-division-multiple-access based satellite systems, which include a frame
alignment signal to enable such synchronization to take place.
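A minimal sketch of Eq. (8.2) follows. The data bits and the shared sequence x(k) are random stand-ins generated for illustration; in a real system x(k) would be a time-aligned maximal-length sequence as described above.

import numpy as np

rng = np.random.default_rng(0)
b = rng.integers(0, 2, 20)                # scrambler input bits b(k)
x = rng.integers(0, 2, 20)                # shared pseudorandom sequence x(k)

c = b ^ x                                 # scrambler output:   c(k) = b(k) xor x(k)
b_hat = c ^ x                             # descrambler output: b^(k) = c^(k) xor x(k)

assert np.array_equal(b_hat, b)           # error-free channel: the data is recovered exactly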
Self-Synchronizing Scrambler
As shown in the figure, a self-synchronizing scrambler at the transmit end scrambles
by performing a modulo-2 addition of the input bit sequence with a
sequence formed from its own previous scrambled bits, and a self-synchronizing
descrambler at the receiver descrambles by performing a modulo-2 addition of
the received bit sequence with a sequence formed from its own past received
bits. The scrambler output bit sequence c(k) and descrambler output bit
sequence b̂(k) are, respectively, as follows:
c(k) = b(k) ⊕ h(1)c(k−1) ⊕ h(2)c(k−2) ⊕ ... ⊕ h(n)c(k−n)    (8.3)
b̂(k) = ĉ(k) ⊕ h(1)ĉ(k−1) ⊕ h(2)ĉ(k−2) ⊕ ... ⊕ h(n)ĉ(k−n)
where b(k) and ĉ(k) are the scrambler input bit sequence and descrambler input
bit sequence, respectively. Also, in order to minimize the probability of lock-up
(= 2^−n), i.e., the probability that the output period is equal to the input
period for one particular initial state of the shift register, n is chosen to be large.
When the input to the descrambler ĉ(k) is different from the output of the scrambler
c(k), due to a transmission error, additional errors are caused. For K
non-zero taps, the error multiplication is K + 1. Therefore, the scrambler
can also be used as an error-rate detector for low error rates. If the scrambler is
driven by all ones, any zeros in the descrambler output correspond to channel errors.
FIGURE (a) A self-synchronizing scrambler and (b) a self-synchronizing descrambler.
Since a single channel error results in K + 1 output errors, one only needs
to count the number of zeros in the descrambler output and divide by K + 1 to
determine the error rate. In practice, we usually have 2 ≤ K ≤ 4, so the additional
degradation due to error propagation is usually considered negligible.
Error propagation stops once the descrambler shift register is filled with correctly received bits.
EXAMPLE
Consider a simple three-stage self-synchronizing scrambler, where its output and input are
related as follows:
(a) Assuming error-free transmission, show that the descrambled data is identical to the
original data sequence.
(b) Assuming an initial state (111), determine the scrambler output for an all zero input.
Solution
Using this relation, the output of the scrambler can be determined.
(a) For this scrambler, the output of the descrambler is then as follows:
With no errors in transmission, we have ĉ(k) = c(k); since the descrambler adds exactly the same combination of past
bits that the scrambler added, the two modulo-2 sums cancel, and accordingly we have b̂(k) = b(k), as reflected below:
k    b(k)    register contents    output
1    0       1 1 1                0
2    0       0 1 1                1
3    0       1 0 1                1
4    0       0 1 0                1
5    0       0 0 1                0
6    0       1 0 0                1
7    0       1 1 0                0
8    0       1 1 1                0
The scrambler output has period 7 = 2^3 − 1. Note that should there be an error in transmission,
the error multiplication factor is 3, since we have K = 2.
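The sketch below implements a three-stage self-synchronizing scrambler and descrambler consistent with this example. The tap positions (delays 1 and 3) are an assumption inferred from the shift-register states in the table, not stated in the text; with the initial state (1 1 1) they reproduce a period-7 output for an all-zero input and an error multiplication factor of K + 1 = 3.

def scramble(bits, state=(1, 1, 1)):
    """c(k) = b(k) xor c(k-1) xor c(k-3); state holds (c(k-1), c(k-2), c(k-3))."""
    c1, c2, c3 = state
    out = []
    for b in bits:
        c = b ^ c1 ^ c3
        out.append(c)
        c1, c2, c3 = c, c1, c2            # shift the newly produced bit into the register
    return out

def descramble(bits, state=(1, 1, 1)):
    """b^(k) = c^(k) xor c^(k-1) xor c^(k-3), using past *received* bits."""
    c1, c2, c3 = state
    out = []
    for c in bits:
        out.append(c ^ c1 ^ c3)
        c1, c2, c3 = c, c1, c2            # shift the received bit into the register
    return out

print(scramble([0] * 14))                 # all-zero input: period-7 pseudorandom pattern

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
assert descramble(scramble(data)) == data # error-free transmission recovers the data

received = scramble(data)
received[4] ^= 1                          # a single channel error ...
errors = sum(a != b for a, b in zip(descramble(received), data))
print(errors)                             # ... becomes K + 1 = 3 output errors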
Eye Diagram
• Eye diagram is a means of evaluating the quality of a received "digital waveform"
• By quality is meant the ability to correctly recover symbols and timing
• The received signal could be examined at the input to a digital receiver or at some
stage within the receiver before the decision stage
• Eye diagrams reveal the impact of ISI and noise (see the sketch after this list)
• Two major issues are 1) sample value variation, and 2) jitter and sensitivity of
sampling instant
• Eye diagram reveals issues of both
• Eye diagram can also give an estimate of achievable BER
• Check eye diagrams at the end of class for participation
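As referenced in the list above, here is a minimal matplotlib sketch of how an eye diagram is formed: successive two-symbol-long slices of a noisy, band-limited waveform are overlaid on one plot. The pulse shape, noise level, and oversampling factor are arbitrary illustrative choices.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
sps = 16                                        # samples per symbol
symbols = rng.choice([-1.0, 1.0], size=200)     # random bipolar symbols

# Crude band-limiting: smooth the rectangular symbol stream with a Hann-shaped pulse.
pulse = np.hanning(2 * sps)
tx = np.convolve(np.repeat(symbols, sps), pulse / pulse.sum(), mode="same")
rx = tx + 0.05 * rng.standard_normal(tx.size)   # received waveform with a little noise

# Overlay traces two symbol periods long to form the eye.
span = 2 * sps
for start in range(0, rx.size - span, sps):
    plt.plot(rx[start:start + span], color="tab:blue", alpha=0.2)
plt.title("Eye diagram (two symbol periods per trace)")
plt.xlabel("sample index within trace")
plt.show()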
Gram-Schmidt Orthogonalization
In digital communication, we apply input as binary bits, which are converted into symbols and
waveforms by a digital modulator. These waveforms should be unique and distinguishable from each
other so we can easily identify which symbol/bit was transmitted. To make them unique, we apply the
Gram-Schmidt orthogonalization procedure.
Now consider that we have a waveform s1(t) and we assume that its energy is ε1. Then we can
construct our first basis waveform as:
ψ1(t) = s1(t) / √ε1
So now we have our first waveform, which has energy = 1. Now we have our second waveform
available, known as s2(t). But it may or may not be orthogonal to ψ1(t), so it is necessary to make
it orthogonal, and that is where Gram-Schmidt comes in handy. The procedure tells us that to
make s2(t) orthogonal to ψ1(t) we compute its projection onto the space spanned by ψ1(t), and to
compute its projection we first find its scaling number which, when multiplied by ψ1(t), gives
the projection. The scaling number is found as:
c21 = ∫ s2(t) ψ1(t) dt
Then we multiply the scaling factor c21 with ψ1(t) and subtract the product from s2(t) :
d2(t)=s2(t)–c21ψ1(t)
At this stage you might ask: is d2(t) orthogonal to ψ1(t)? The answer is yes, it is orthogonal to ψ1(t),
but its energy is not equal to 1, so it is not orthonormal; that is why we didn't call it ψ2(t). Now
to make it orthonormal we simply divide d2(t) by the square root of its energy ε2:
ψ2(t) = d2(t) / √ε2
where
ε2 = ∫ d2²(t) dt
Up till now we have orthogonalized 2 waveforms, and we can go on and on by using the above
procedure. In general, for the kth waveform we can write:
dk(t) = sk(t) − Σ_{i=1}^{k−1} cki ψi(t),  where cki = ∫ sk(t) ψi(t) dt
ψk(t) = dk(t) / √εk,  where εk = ∫ dk²(t) dt
Hence we can orthogonalize all waveforms, which are normally M waveforms, by using this
procedure. The benefit of orthogonalizing these waveforms is that they don't overlap with each
other and are easy to identify at the demodulation side. Now we give an example so you
have a clear idea of how to orthogonalize waveforms:
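As one worked example, the sketch below runs the Gram-Schmidt procedure on three sampled waveforms invented for illustration; sums over a fine sampling grid stand in for the integrals above. Note that the third waveform lies in the span of the first two, so only N = 2 basis functions result.

import numpy as np

def gram_schmidt(signals, dt):
    """Return orthonormal basis waveforms psi_i(t) for the given sampled signals."""
    basis = []
    for s in signals:
        d = s.astype(float).copy()
        for psi in basis:
            c = np.sum(s * psi) * dt      # projection coefficient c_ki = integral of s_k * psi_i
            d -= c * psi                  # remove the component along psi_i
        energy = np.sum(d * d) * dt       # residual energy eps_k
        if energy > 1e-12:                # skip waveforms already in the span of the basis
            basis.append(d / np.sqrt(energy))
    return basis

dt = 1e-3
t = np.arange(0, 1, dt)
s1 = np.where(t < 0.5, 1.0, 0.0)          # example waveforms (invented for illustration)
s2 = np.ones_like(t)
s3 = np.where(t < 0.5, 1.0, -1.0)

basis = gram_schmidt([s1, s2, s3], dt)
print(len(basis))                         # number of basis functions N <= M
print([round(float(np.sum(p * q) * dt), 3) for p in basis for q in basis])
                                          # inner products: ~1 on the diagonal, ~0 elsewhere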
Fun Fact: the identity matrix is already orthonormal, so if you apply Gram-Schmidt to vectors that
are already orthogonal, the scaling factors turn out to be zero.