
Introduction to Modem Design

Paolo Prandoni, LCAV - EPFL


January 26, 2004

Part 1

Modulation
In order to transmit a signal over a given physical medium we need to adapt the characteristics of the signal to the properties of the medium. In the case of electromagnetic signals, the main objective is to fit the spectrum of the signal into a prescribed bandwidth, called the passband, and this is accomplished by means of a technique called modulation. Modulation is performed by multiplying the original signal by a sinusoidal signal called the carrier; the gist of the modulation theorem is that, in so doing, we are actually translating the spectrum of the original signal in frequency, recentering it over the frequency of the carrier. This frequency is chosen according to the physical medium: copper wires, optical fibers, water, deep interstellar space all require different modulation frequencies, since their useful passbands are located in different portions of the spectrum. The passband of a communication channel is, roughly speaking, the part of the spectrum which behaves linearly for transmission; there, we can rely on the fact that a properly modulated sinusoidal signal will be received with only phase and amplitude distortions, and these are good types of distortion that we know very well how to handle.
The passband of a physical channel is of finite width, so we must make sure that the bandwidth of the original signal prior to modulation is of the same size as the channel's passband. In other words, we must build a signal with a finite, prescribed spectral support. Intuitively speaking, the wider the support, the more information we can send over the channel. A big effort in designing a modem is trying to squeeze as much information as possible over the relatively narrow passband of the telephone channel, as we will see. The operation of limiting the bandwidth of a digital communication signal goes under the name of pulse shaping and is basically a linear filtering operation.
To illustrate what modulation is all about, take the example of AM radio. The AM
band extends from 530 KHz to 1700 KHz and each radio station is allowed by law to
transmit over an 8 KHz frequency slot in this range. Assume we want to transmit speech

[Figure: Voice → Microphone → Lowpass Filter → Modulator]
Figure 1.1: A simple AM radio transmitter.


over AM and we are given a slot from fmin = 650 KHz to fmax = 658 KHz, with the bandwidth W = fmax − fmin equal to 8 KHz; the speech signal s(t), obtained with a microphone, has a wideband spectrum which spans several KHz; we can however filter it through a lowpass filter with cutoff frequency 4 KHz without losing too much quality and thus reduce its spectral width to 8 KHz. The filtered signal now has a spectrum extending from −4 to 4 KHz; by multiplying it by a sinusoid at frequency fc = (fmax + fmin)/2 = 654 KHz, we can shift it to the allotted AM band according to the modulation theorem:

s(t) ↔ S(f),    s(t) cos(2π fc t) ↔ (1/2)(S(f − fc) + S(f + fc)).    (1.1)

The scheme of the radio transmitter is displayed in Figure 1.1.
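The modulation theorem in (1.1) is easy to check numerically. The sketch below (sampling rate and tone frequencies are arbitrary demo values, not from the text) modulates a test tone by a cosine carrier and verifies that its spectral line reappears at fc ± f with half the original amplitude:

```python
import numpy as np

# Demo of the modulation theorem: a spectral line at f reappears at
# fc - f and fc + f, each with half the original amplitude, after
# multiplication by a cosine carrier at fc.
N = 4096
fs = 4096.0                         # 1 Hz per FFT bin, for readability
f, fc = 100.0, 1000.0               # message tone and carrier (Hz)
n = np.arange(N)
s = np.sin(2 * np.pi * f * n / fs)
modulated = s * np.cos(2 * np.pi * fc * n / fs)

S = np.abs(np.fft.rfft(s)) / (N / 2)          # normalized line amplitudes
M = np.abs(np.fft.rfft(modulated)) / (N / 2)

assert abs(S[100] - 1.0) < 1e-9               # original line at f
assert abs(M[900] - 0.5) < 1e-9               # image at fc - f
assert abs(M[1100] - 0.5) < 1e-9              # image at fc + f
assert M[100] < 1e-9                          # nothing left at f itself
```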


A third constraint on the transmitted signal limits its actual power. This constraint stems from the physical limitations under which the medium behaves linearly (imagine cranking the volume up on your stereo); in the case of telephone lines, it is also strictly regulated by the telephone companies, who don't like their wires to be burned. Power constraints are going to affect the transmission scheme always with respect to the ubiquitous noise which degrades the transmitted signal. The keyword here is the signal-to-noise ratio (SNR), which is a measure of how good the channel is: the higher this figure, the better the received signal. We only have control over the power of the signal, the noise being a fact of nature which simply is there. We can therefore try to improve the SNR by augmenting the transmission power but, due to the power limitations inherent to the channel, we will ultimately have to stop and settle for a given SNR and design our receiver around that figure.

1.1 The telephone channel

The usable bandwidth of a physical communication channel can be determined by several factors: it can be fixed by law, as is the case for the different bands of the electromagnetic spectrum allocated to wireless communications (AM/FM radio, TV, mobile telephony); it can be limited by the sheer physical properties of the medium (as in the case of fiber communications); or, in the case of telephone lines, it can be dictated by a combination of (sometimes outdated) practical and economical considerations.

[Figure: twisted pair → CO → Network → CO]

Figure 1.2: The Plain Old Telephone System.
The POTS (Plain Old Telephone System) can be schematized as in Figure 1.2. The user's telephone is connected via a twisted pair (i.e. two plain copper wires) to the nearest Central Office (CO), usually at a distance in the kilometer range; dialing instructions imparted by the caller are interpreted by the CO and relayed via the (world-wide) telephone network to the CO nearest to the called telephone. To understand the limitations of a telephone channel's bandwidth we have to step back to older, analog times, when COs were made of electromechanical switches and the connections inside the cloud labeled "Network" were mostly analog lines spiced up by a lot of operational amplifiers. The first link of the chain, the twisted pair, has no major bandwidth limitations, given the short distance it covers¹; telephone companies, however, used to introduce what are called loading coils in the line to compensate for the attenuation introduced by the capacitive effects of longer wires in the network. A side effect of these coils was to turn the first link into a lowpass filter with cutoff frequency around 4 kHz, so that most of the older textbooks will define the passband of the telephone channel as limited between fmin = 300 Hz and fmax = 3000 Hz, for a total usable bandwidth W = 2700 Hz. Later on, the network became mostly digital (and so did the COs), yet telephone companies found it convenient not to expand the nominal bandwidth of the user's channel, so that many more conversations could be multiplexed over the same cable or satellite link. The standard sampling rate for a telephone channel is nowadays 8 kHz and the bandwidth limitations are imposed only by the antialiasing filters at the CO, for a maximum bandwidth in excess of W = 3400 Hz.
The upper and lower ends of the band are not usable due to the possibly great attenuations which may take place in the transmission.

¹ This is the principle behind ADSL, by the way.

[Figure: Data → Mapper → Upsampler ↑K → Pulse Shaping → Modulator → D/A]

Figure 1.3: The modem's digital modulator.
Under these bandwidth constraints, the telephone channel can be quite accurately described as a linear, low-noise, strictly bandlimited channel. It is also power limited, of course, but this is a relatively mild constraint, since an SNR of at least 28 dB can be assumed in all cases and one of 32-34 dB can be reasonably expected on a large percentage of individual connections.

1.2 Digital Modulation

1.2.1 Preliminaries

Data transmission over a physical medium is by definition analog: the universe is analog, after all². However, most of the modulation techniques can be, and are, performed in the digital domain, and this is the road we will follow, up to a final D/A converter which is our interface with the outside world.

The scheme of the modulator is shown in Figure 1.3 (please note the similarity with the AM transmitter of Figure 1.1). We initially have a data source (e.g. a computer) which produces a raw data stream of, say, R bits per second. This bit stream is split into consecutive groups of m bits and a circuit called a mapper uniquely maps each of the 2^m possible values of each group into a symbol a. Symbols are basically signal amplitude values (a simple choice could be the decimal value of the m-bit group in Volts, for instance) and they belong to a finite alphabet A of size M = 2^m. The mapper produces a new symbol every T = m/R seconds, and 1/T is called the baud rate of the system.
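As a minimal sketch, the mapper described above can be written in a few lines; the choice m = 2 and the decimal-value mapping are just the illustrative choice mentioned in the text, and the function name is ours:

```python
# Sketch of a mapper: split the bit stream into m-bit groups and emit the
# decimal value of each group as the symbol (the simple choice suggested in
# the text).  m = 2 here, so the alphabet is A = {0, 1, 2, 3}, M = 2**m = 4.
m = 2

def mapper(bits):
    """One symbol per m-bit group; a stream of R bits/s yields R/m baud."""
    assert len(bits) % m == 0, "stream length must be a multiple of m"
    return [int("".join(str(b) for b in bits[i:i + m]), 2)
            for i in range(0, len(bits), m)]

assert mapper([0, 1, 1, 0, 1, 1]) == [1, 2, 3]
```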
Call the sequence of symbols at the output of the mapper ai. This sequence is not yet suitable for transmission; first we need to design its spectral characteristics so that it fits into the available bandwidth of the channel, and then we have to translate it in frequency to place it right in the passband of the channel. These functions are performed by a pulse shaper, which is basically a lowpass filter, and by a modulator. Before we consider the details of these two components, let's have a look at the constraints imposed by the channel in the case of digital modulation.
² Or maybe not...

[Figure: the upsampled sequence a[n], with nonzero samples a[0], a[K], a[2K], a[3K]]

Figure 1.4: The digital symbol sequence.
Since, as we said, in the end we have to produce an analog signal, the last component of a digital modulator is a D/A converter at a sampling frequency of, say, fs Hertz. All the digital processing taking place in the modulator is therefore happening at an internal sampling frequency fs, and therefore we can convert the analog specifications for transmission over the channel into digital frequencies. In particular, the upper and lower bounds of the passband will become ωmax = 2π(fmax/fs) and ωmin = 2π(fmin/fs); the resulting width of the passband will be Δω = 2π((fmax − fmin)/fs) and the center-band carrier frequency will be ωc = π((fmax + fmin)/fs).

In view of the derivations which follow, the actual value of fs is not important at this time as long as:

- fs > 2fmax; i.e. the sampling frequency is greater than twice the largest frequency of the channel's passband;
- fs = K(1/T) with K integer; i.e. fs is a multiple of the baud rate.

So, for instance, for a telephone channel's passband from 300 Hz to 3800 Hz and a baud rate of 2400, we could choose fs = 9600 Hz.
The data symbols produced by the mapper come out at a rate of 1/T and, in order to use them in the signal design system, we must bring them up to the right sampling rate. Thanks to our "smart" choice for fs, this is accomplished simply by a K-times upsampler, with K = fs T, as shown in Figure 1.3. In the previous example, for instance, the upsampling factor would be K = 4. We will denote by a[n] the upsampled symbol sequence, an example of which is given in Figure 1.4.
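The upsampler itself is trivial; here is a sketch (K = 4, matching the fs = 9600 Hz, 2400 baud example above):

```python
# Sketch of the K-fold upsampler: insert K - 1 zeros after each symbol so
# the sequence moves from the baud rate 1/T up to fs = K(1/T).
K = 4

def upsample(symbols, K):
    out = []
    for s in symbols:
        out.append(s)               # a[nK] carries the symbol value...
        out.extend([0] * (K - 1))   # ...followed by K - 1 zeros
    return out

a = upsample([3, 1, 2], K)
assert a == [3, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0]
assert a[0::K] == [3, 1, 2]         # the symbols sit at multiples of K
```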

1.2.2 The baseband signal: pulse shaping

The baseband signal, so called to tell it apart from the passband signal (which is the baseband signal after modulation), is the output of the pulse shaping filter. Before we examine what the baseband signal looks like, let's take a closer look at the input to the system, i.e., at the upsampled data symbol sequence a[n]. Since we don't know a priori what kind of data is generated by the computer, all we can do is consider the sequence as a realization of an M-valued discrete random process. We don't really know the statistics of this process, but it is a reasonable assumption to model it as an iid process with uniform distribution over the set of symbols αi ∈ A. We will see later that the alphabet A is designed so that the process has zero mean and variance σA². Since the symbols are iid, it can be shown that the power spectrum of a[n] is

A(e^jω) = σA².    (1.2)

This power spectrum is flat over all frequencies and therefore its support is 2π. In order to reduce it to the required value we will filter it via a pulse shaping filter.
The pulse shaping filter will be an L-tap FIR filter with impulse response g[n] and frequency response G(e^jω). The filtered symbol sequence is therefore

b[n] = Σ_{m=0}^{L−1} g[m] a[n − m]    (1.3)

and, by a fundamental result of random signal processing theory, its power spectrum is

B(e^jω) = σA² |G(e^jω)|².    (1.4)

How do we choose the shape of G(e^jω)? First we have to look at what requirements we have in both the time and the frequency domain.

In the frequency domain, since we want the signal's support to be at most Δω, a reasonable request is that G(e^jω) acts as a lowpass filter with cutoff frequency Δω/2.

In the time domain, remember that, as a result of the convolution operator, equation (1.3) tells us that each sample of the filtered signal can be seen as a weighted average of L neighboring samples of the symbol sequence. We would like that at least every K samples, i.e. when the symbol sequence takes on the symbol value to be transmitted, this value be left untouched, so we can easily retrieve it at the receiver. In other words, we would like b[Kn] = a[Kn]. If this were not the case, every sample value would be influenced by several symbol values and we would be in the presence of intersymbol interference (ISI). To see how we can be ISI-free, we can exploit the commutativity of the convolution operator and write

b[n] = Σ_{m=−∞}^{+∞} a[m] g[n − m];    (1.5)

[Figure: (a) a spectrum with support [−π/3, π/3]; (b) the same spectrum stretched by K after subsampling]

Figure 1.5: Pulse shaper minimal bandwidth theorem.


since a[n] ≠ 0 only for n a multiple of K, a sufficient condition on g[n] to achieve b[Kn] = a[Kn] is thus

g[Kn] = 1 for n = 0,    g[Kn] = 0 for n ≠ 0.    (1.6)

This condition has an important consequence, since it implies that the minimum frequency support of G(e^jω) cannot be smaller than [−π/K, π/K] (Nyquist's theorem). A simple proof of this fact can be outlined using a standard result from multirate signal processing. Assume the spectrum G(e^jω) is nonzero only over [−ωb, ωb], for ωb < π/K; then we know that we can subsample g[n] by a factor of K without aliasing, and the resulting spectrum is going to be the spectrum of the original g[n] stretched by K. In other words, the spectrum of the sequence g[Kn] is going to be nonzero over [−Kωb, Kωb] and zero elsewhere (see Figure 1.5). But the sequence g[Kn] is by definition simply an impulse at zero, and we know that its spectrum is identically one over [−π, π]; therefore we have a contradiction.
Now let's backtrack a little bit: we have just seen that in order to achieve ISI-free transmission at a baud rate of 1/T, the required support of the baseband signal is at least [−π/K, π/K], with fs = K(1/T), for a total bandwidth of at least 2π/K. However, the bandwidth is not a design parameter, but a constraint of the channel! Remember that we have to fit the signal to the channel, not the other way around. So, in practice, given a maximum bandwidth of Δω, we will have to select a suitable baud rate so that the support of the pulse fits in [−Δω/2, Δω/2]; in other words, we must have

π/K < Δω/2.    (1.7)

Now, regardless of the values for fs and K, Δω = 2π(W/fs) and therefore, as long as we choose fs = K(1/T), the above relation becomes

1/T < W.    (1.8)

[Figure: raised cosine frequency responses for α = 0.0, 0.5, 0.9, and the bell-shaped impulse response]

Figure 1.6: Raised cosine pulse shaper: (a) frequency response, (b) impulse response.

This is the first and most important rule of modem design: the baud rate is always less than the width of the passband.

So now we know the maximum baud rate; how do we choose a G(e^jω) which satisfies (1.6) and acts as a lowpass filter with support within [−π/K, π/K]? Initially we could think of choosing G(e^jω) as close as possible to a brickwall filter over [−π/K, π/K] (note that the brickwall filter is ISI-free). There is however a very important tradeoff in filter design, which we can sum up by saying that the sharper the transition bands are, the longer the corresponding impulse response. A long impulse response is a response which dies out very slowly in time; this is bad for two reasons: first, since we must use FIR filters for stability reasons, we would need to use a lot of coefficients, which increases the computational load. Secondly, a lot of FIR coefficients imply a long processing delay at both transmitter and receiver, which is something we should strive to reduce instead. Fortunately there is a type of pulse shape which can help, called the raised cosine pulse. We will not go into the details of its derivation, but this pulse, while fulfilling (1.6), also has a "knob", in the form of a parameter 0 ≤ α ≤ 1, with which to adjust the sharpness of the transition bands; the resulting minimal bandwidth is increased by a factor of (1 + α). In practical systems this knob is set so that the total support is only about 10% more than the minimal support [−π/K, π/K], i.e. α ≈ 0.1. Some of the possible frequency responses are displayed in Figure 1.6, together with the typical bell shape of the impulse response. The decay rate of the corresponding impulse responses in the time domain is 1/n³ for α > 0; this means that in actual implementations the theoretically infinite duration of the pulse can be very well approximated by a small number of samples.
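The zero-ISI property (1.6) of the raised cosine pulse can be checked numerically. The closed form used below is the standard raised cosine expression, which the notes do not derive; K and α are demo values:

```python
import numpy as np

# Numerical check that the raised cosine pulse satisfies the zero-ISI
# condition (1.6): g[0] = 1 and g[nK] = 0 for n != 0.
K, alpha = 4, 0.125
n = np.arange(-8 * K, 8 * K + 1)
t = n / K                            # time in symbol periods

# standard raised cosine: sinc(t) cos(pi*alpha*t) / (1 - (2*alpha*t)^2)
den = 1.0 - (2.0 * alpha * t) ** 2
with np.errstate(divide="ignore", invalid="ignore"):
    raw = np.sinc(t) * np.cos(np.pi * alpha * t) / den
# patch the removable singularity at |t| = 1/(2*alpha) with its limit
g = np.where(np.abs(den) > 1e-12, raw,
             (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)))

center = 8 * K                       # index of n = 0
assert abs(g[center] - 1.0) < 1e-12  # g[0] = 1
for k in range(1, 8):                # g[nK] = 0 for n != 0
    assert abs(g[center + k * K]) < 1e-12
    assert abs(g[center - k * K]) < 1e-12
```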

1.2.3 The symbol alphabet

The constellation
So far we haven't said much about the alphabet itself, except that it is composed of M = 2^m symbols. The first piece of news about it, and possibly a surprising one at first, is that the symbols are actually complex values. One might initially object that this does not make sense, since there is no such thing as a physical complex entity and the telephone line is a very physical thing. On the other hand, remember that we are designing a digital modulator, which means that all operations are going to be performed on a DSP chip, i.e. a computer. A DSP has no problem handling complex variables, since a complex variable is just a pair of floating point (or fixed point) numbers. So, as long as the final input to the D/A converter is a real discrete-time signal, we are fine using complex-valued internal processing.
The M complex symbols αi ∈ A, i = 1, ..., M, can be represented graphically as points in the complex plane. The resulting plot is called the modem's constellation. One of the basic constellations is the square grid, displayed in Figure 1.7 for M = 2⁴; in this alphabet the real and imaginary parts of the symbols take on the values of the odd integers between −(√M − 1) and +(√M − 1) (remember that M = 2^m, so that √M = 2^(m/2)). In formulas,

α = αI + jαQ    (1.9)

with

αI, αQ ∈ {±1, ±3, ±5, ..., ±(√M − 1)};    (1.10)

αI and αQ, the real and imaginary parts of a symbol value, are commonly called the In-phase and Quadrature components. The complex sequence a[n] at the output of the upsampler is filtered via the pulse shaping filter (digital filters work beautifully for complex signals too) to yield the complex sequence b[n]. If we need to, we can always separate the real and imaginary parts of the complex signal and write

a[n] = aI[n] + jaQ[n]    (1.11)

and, via the linearity of the filtering operation,

b[n] = g[n] ∗ aI[n] + j(g[n] ∗ aQ[n]) = bI[n] + jbQ[n].    (1.12)

The fundamental property of a constellation is its average power. It is defined as the variance of a random variable uniformly distributed over the alphabet symbols, each with probability 1/M. This variance can be interpreted as the average energy of a symbol, and it is important because the power of the transmitted signal is proportional to it, as shown in (1.2).

[Figure: 4 × 4 square grid of constellation points]

Figure 1.7: Square constellation, M = 16.

Define then a complex random variable α taking values over the square constellation alphabet defined in (1.10) and uniformly distributed:

P[α = αi, αi ∈ A] = 1/M;    (1.13)

it is a simple exercise to show that the mean of α is zero and that its variance is

σA² = E[|α|²] = (2/3)(M − 1).    (1.14)
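The "simple exercise" can also be done numerically; a quick check of (1.13)-(1.14), with the constellation built programmatically (the function name is ours):

```python
import itertools

# Check of (1.14): build the square constellation with components in
# {+/-1, +/-3, ..., +/-(sqrt(M)-1)} and verify zero mean and average
# energy (2/3)(M - 1) under a uniform distribution over the symbols.
def square_constellation(M):
    side = int(round(M ** 0.5))
    levels = range(-(side - 1), side, 2)      # the odd integers
    return [complex(i, q) for i, q in itertools.product(levels, levels)]

for M in (4, 16, 64):
    pts = square_constellation(M)
    assert len(pts) == M
    assert sum(pts) == 0                      # zero mean
    energy = sum(abs(p) ** 2 for p in pts) / M
    assert abs(energy - (2.0 / 3.0) * (M - 1)) < 1e-9

# for M = 16 this gives sigma_A^2 = 10
```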

Alphabet size and SNR


To determine the maximum number of symbols in the constellation we have to take into account the third constraint of modulated transmission, the power constraint. So far we have considered the spectral properties of the transmitted signal as a random process. Now we will concentrate on the single symbols, one at a time, to see what the influence of noise on the system is and how this affects our ability to send data. We will see in the notes about demodulation that the receiver attempts to reconstruct a local copy of the symbol sequence ai as it comes out of the mapper. The transmission however introduces noise in the process so that, without going into too many details, the copy of the symbol sequence at the receiver will be

âi = ai + ni    (1.15)


Figure 1.8: Square constellation with the decoding boundaries.


where ni is a sampled complex random process (the noise samples) whose statistics depend on the particular characteristics of the channel. The simplest technique to try to recover the original values ai from âi is to associate to âi the alphabet symbol α ∈ A which is closest to âi in the Euclidean metric (hard decoding). Visually, this can be illustrated by superimposing a grid on the constellation points as in Figure 1.8; the noisy values âi are also plotted, with stars, and all the values that fall in a square are decoded as the value at the center of the square. An error will occur if either the real or imaginary part of ni is greater than half the minimum distance between two alphabet values; for the alphabet in (1.10) this minimum distance is 2.
The minimum distance between symbols can be increased without altering the shape of the constellation by multiplying the values of all symbols by a gain factor G0; this is the digital equivalent of a linear amplifier boosting the level of the modem's output signal. The minimum distance then becomes 2G0 and, by increasing the gain factor, we can therefore reduce the probability of error. In so doing we are also increasing the average power of the transmitted signal which, however, can be done only up to the power limit imposed by the channel.
We can substantiate these arguments in the case of transmission over an ideal channel
which introduces white Gaussian noise only, and we will illustrate the main intuitive points.
If the noise is white and Gaussian, it is going to affect equally and independently both the real and imaginary parts of ai; the probability density function (pdf) of each complex noise sample can therefore be described by a two-dimensional Gaussian pdf with zero mean and variance σ² = N0/2 per component:

pn(n) = (1/(πN0)) e^(−|n|²/N0).    (1.16)

[Figure: a constellation point scaled by G0, its square decision region and the inscribed circle]

Figure 1.9: Correct decision region: optimal region in the nearest-neighbor sense (the square) and its circular approximation.

For the square alphabet in (1.10), the probability of correctly decoding a symbol is equal to the probability that the corresponding complex noise sample lies in a square of side 2G0 centered around the transmitted point; this probability can be approximated well enough by integrating the noise pdf over a circle C of radius G0 (see Figure 1.9), so that the approximate error probability is

pe = 1 − ∬_C pn(n) dn = e^(−G0²/N0).    (1.17)
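Equation (1.17) is easy to validate with a Monte Carlo draw of complex Gaussian noise samples (the values of N0 and G0 below are arbitrary):

```python
import math
import random

# Monte Carlo check of (1.17): complex Gaussian noise with total variance N0
# (N0/2 per component) falls outside a circle of radius G0 with probability
# exp(-G0^2 / N0).
random.seed(7)
N0, G0 = 1.0, 1.5
sigma = math.sqrt(N0 / 2.0)                  # per-component std deviation
trials = 200_000
outside = sum(
    1 for _ in range(trials)
    if random.gauss(0.0, sigma) ** 2 + random.gauss(0.0, sigma) ** 2 > G0 ** 2
)
estimate = outside / trials
theory = math.exp(-G0 ** 2 / N0)             # about 0.105 for these values
assert abs(estimate - theory) < 0.01
```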

Let's see how this affects the amount of data we can transmit. First of all, we must settle on a reliability figure which represents the tolerated probability of error of the system; in practical modems, for instance, a probability of error on the order of pe ≈ 10⁻⁶ is usually the norm. To achieve this probability of error, equation (1.17) tells us that the square of half the minimum distance between symbols has to be at least

G0² ≥ −N0 ln(pe);    (1.18)

this gives us the minimum amplification gain we must use for the transmitted signal. On the other hand, assume that the maximum average power we can transmit over the channel is Pmax; the average power P of the transmitted signal is basically the average power of the symbol sequence scaled by the gain factor (squared, since it's a power measure):

P = G0² σA².    (1.19)


We must ensure that P ≤ Pmax and therefore, by substituting for G0, we have

σA² ≤ −Pmax/(N0 ln(pe)).    (1.20)

Now, N0 is the average power of a noise sample (real and imaginary parts together) and therefore (Pmax/N0) is the average signal-to-noise ratio (SNR) of the channel. Since σA² = (2/3)(M − 1), we have that for a square constellation it must be

M ≤ (3/2)(SNR/(−ln(pe))) + 1    (1.21)

and the total number of bits per second will be

R ≤ (1/T) log2(M).    (1.22)
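Equations (1.21) and (1.22) can be packaged as a small calculator; the helper names are ours, and the figures in the assertions anticipate the worked example of Section 1.3:

```python
import math

# (1.21)-(1.22) as a small calculator.
def max_square_M(snr_db, pe):
    """Upper bound on the square-constellation size M from (1.21)."""
    snr = 10.0 ** (snr_db / 10.0)
    return 1.5 * snr / (-math.log(pe)) + 1.0

def bit_rate(baud, M):
    """(1.22): bits per second at a given baud rate and alphabet size."""
    return baud * math.log2(M)

M_bound = max_square_M(22.0, 1e-6)   # 22 dB SNR, pe = 1e-6
assert 17.0 < M_bound < 19.0         # "approximately 18"
assert bit_rate(2400, 16) == 9600.0  # M = 16 at 2400 baud
```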

Modulation

The method for building the baseband signal we have explained so far is called Quadrature Amplitude Modulation (QAM) and is completed by a last building block, the modulator, which takes the amplified baseband signal G0 b[n] and multiplies it by a complex sinusoidal signal at frequency ωc (the carrier) to create the passband signal. We know that the baseband signal is a filtered random process whose power spectrum is given in (1.4), so we should use results from random signal processing to derive the spectrum of the passband signal. To make things simpler, however, from now on we will just consider b[n] as a deterministic signal with Fourier transform B(e^jω). This is only partially cheating, since a modem always transmits a finite-length signal (or else you would have to pay an astronomical phone bill!) and we can always take Fourier transforms of finite-length sequences.
The modulated signal is

s[n] = G0 b[n] e^(jωc n);    (1.23)

modulation allows us to shift in frequency the support of the signal so as to fit it into the slot offered by the channel since, in the frequency domain, the complex version of the modulation theorem states:

S(e^jω) = G0 B(e^(j(ω−ωc))).    (1.24)

The signal s[n] is still a complex signal, and therefore we can't transmit it as it is. What we actually feed to the D/A is, instead,

c[n] = G0 Re{b[n] e^(jωc n)}.    (1.25)
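Equations (1.23)-(1.25) can be exercised on a random complex baseband sequence; G0 and ωc below are illustrative values:

```python
import numpy as np

# Building the passband signal from a (here random) complex baseband
# sequence b[n], as in (1.23)-(1.25).
rng = np.random.default_rng(0)
G0, omega_c = 2.0, np.pi / 2.0
n = np.arange(256)
b = rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)

s = G0 * b * np.exp(1j * omega_c * n)   # complex passband signal (1.23)
c = s.real                              # the real signal fed to the D/A (1.25)

# carrier removal on the full complex signal recovers G0*b exactly...
assert np.allclose(s * np.exp(-1j * omega_c * n), G0 * b)
# ...and Re{z} = (z + z*)/2, the identity used for the spectrum (1.26)
assert np.allclose(c, 0.5 * (s + np.conj(s)))
```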

[Figure: baseband spectrum B(e^jω) centered at zero with support Δω, and the two copies of the passband spectrum C(e^jω) centered at ±ωc]

Figure 1.10: Spectral relationships between baseband and passband signals.


Now, remember that for any complex signal x[n] we can write Re{x[n]} = (1/2)(x[n] + x*[n]), where the star denotes the complex conjugate; also remember that the Fourier transform of x*[n] is X*(e^(−jω)). With this, the spectrum of c[n] is

C(e^jω) = (1/2)[S(e^jω) + S*(e^(−jω))] = (G0/2)[B(e^(j(ω−ωc))) + B*(e^(−j(ω+ωc)))].    (1.26)

Since B(e^jω) has support [−Δω/2, Δω/2], C(e^jω) has support [−ωc − Δω/2, −ωc + Δω/2] ∪ [ωc − Δω/2, ωc + Δω/2]; the spectral relationship between baseband and passband signals is depicted in Figure 1.10.

1.3 The Final Recipe

OK, let's review the rules of thumb in designing a modem, and then let's apply them to a real-case scenario.

First of all, we need to know the details of the passband offered by the channel in terms of its minimum and maximum frequencies fmin and fmax in Hertz and its width W = fmax − fmin. This will automatically give a bound for the baud rate: (1/T) < W. We will choose an operative baud rate a little smaller than W to leave room for the raised cosine expansion; this expansion is usually about 10-20%. We will then choose an internal sampling rate fs for our transmitter such that fs ≥ 2fmax and fs = K(1/T) for some integer K; usually K is about 3 or 4 and is dictated by the speed of the DSP chip: the larger K is, the better we can design our internal filters, but also the larger the number of operations per sample. Finally, we need to determine the modulation frequency; the typical value for that is the


center frequency of the passband, i.e.:

ωc = 2π (fmax + fmin)/(2 fs).    (1.27)

The amplifier's gain and the maximum number of symbols in the constellation are then determined from the average SNR using equation (1.21) and, with that, we're all set. Easy, isn't it?
For example...
As a practical example, assume a telephone line where fmin = 450 Hz and fmax = 3150 Hz, with an SNR of about 22 dB. The baud rate will have to be less than 2700 symbols per second, since W = 2700 Hz, and say we choose the value 2400. We also choose K = 3 to keep computational costs down, so that a good internal sampling frequency satisfying all requirements is fs = 7200 Hz. With this, the maximum support of the baseband signal is Δω = 2π(2700/7200) = 0.75π. We then design a raised cosine filter with cutoff frequency π/3 and 12.5% expansion factor, i.e. α = 0.125; the resulting support of the baseband signal is therefore (1 + α)(2π/K) = 0.75π, which is just OK. The carrier frequency will be ωc = π/2, corresponding to an analog frequency of 1800 Hz. For a probability of error pe < 10⁻⁶ we find that the maximum value for M is approximately 18. For a square constellation, M should be a perfect square and so we choose M = 16. We have just designed a modem which, over the given telephone channel, can transmit 9600 bits per second; this is actually one of the possible operating modes of the V.32 modem standard.
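The arithmetic of this example can be rerun as a tiny script (every figure below is taken from the example above):

```python
import math

# The worked example of Section 1.3 as a script.
fmin, fmax = 450.0, 3150.0
W = fmax - fmin                        # 2700 Hz of usable passband
baud = 2400.0                          # chosen below W
K = 3
fs = K * baud                          # 7200 Hz
assert fs >= 2 * fmax                  # sampling constraint

delta_omega = 2 * math.pi * W / fs     # available support: 0.75*pi
alpha = 0.125
pulse_support = (1 + alpha) * 2 * math.pi / K
assert pulse_support <= delta_omega + 1e-12   # the pulse just fits

omega_c = math.pi * (fmax + fmin) / fs
assert abs(omega_c - math.pi / 2) < 1e-12     # i.e. 1800 Hz analog

snr = 10.0 ** (22.0 / 10.0)
M_bound = 1.5 * snr / (-math.log(1e-6)) + 1   # (1.21): about 18
M = 16                                        # largest perfect square below it
assert M <= M_bound
assert baud * math.log2(M) == 9600.0          # the V.32 9600 bit/s mode
```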


Part 2

Demodulation
The signal created at the modulator is converted to a continuous-time signal c(t) by a D/A converter operating at a sampling frequency fs and sent over the telephone channel. To a reasonably good approximation, the channel behaves like a linear filter D(e^jω) and also introduces a certain amount of additive noise, so that the signal appearing at the receiver's input looks like

v(t) = d(t) ∗ c(t − tp) + η(t)    (2.1)

where tp is the propagation delay, dependent on the distance between transmitter and receiver, d(t) is the equivalent impulse response of the channel and η(t) is the noise. The first thing the digital receiver does is sample the incoming signal (see Figure 2.1); for convenience we will assume that the sampling rate used by the receiver matches the internal sampling frequency fs of the transmitter, so that the signal processed by the receiver is simply

v[n] = v((1/fs)n − tp)    (2.2)

A fundamental building block of any modem is an adaptive equalizer, whose task is to estimate the distortion introduced by the channel in order to eliminate it. Since we assume that the channel acts as a linear filter (with frequency response D(e^jω) in the digital domain), the equalizer's job is to estimate D(e^jω), compute the inverse filter D^(−1)(e^jω) and filter the incoming signal with it. In these notes we will not study how the equalizer works; we will simply assume that there is one and that it achieves its goal, so that the processed signal after the equalizer is distortion-free. Also, we assume that the adaptive equalizer amplifies or attenuates the incoming signal to a normalized level which we can assume without loss of generality to be unity. In other words, we are undoing the gain amplification by G0 introduced at the transmitter, so that the signal to be demodulated is

v[n] = c((1/fs)n − tp) + η[n].    (2.3)

[Figure: s(t) → A/D → Equalizer D⁻¹(e^jω) → Demodulator → Downsampler → a[nK], with Timing Recovery and Carrier Recovery blocks]

Figure 2.1: Demodulator.

2.1 Basic Demodulation

To study the basics of demodulation we will assume at first that modulator and demodulator are connected back to back, i.e., that there is no channel between them and therefore no need to convert the signal to and from continuous time. This means that the received signal has no delay and no added noise or, in other words, that it is identical to the transmitted signal c[n]:

r[n] = c[n] = Re{b[n] e^(jωc n)};    (2.4)

2.1.1 Demodulation

The function of a demodulator is to remove the modulation, obviously, and the first thing to do is to strip the carrier off the received signal. Now, if we had the full complex signal s[n] = b[n] e^(jωc n) rather than just its real part, carrier removal would be very easy: just multiply s[n] by e^(−jωc n). The question is whether we can reconstruct the complex signal from its real part.

Consider the sequences b[n] and e^(jωc n); we can separate their real and imaginary parts and write

s[n] = (bI[n] + jbQ[n])(Re{e^(jωc n)} + j Im{e^(jωc n)}).    (2.5)

If we look in turn at the real and imaginary parts of s[n] we have

sr[n] = Re{s[n]} = bI[n] Re{e^{jωc n}} − bQ[n] Im{e^{jωc n}}    (2.6)

si[n] = Im{s[n]} = bI[n] Im{e^{jωc n}} + bQ[n] Re{e^{jωc n}};    (2.7)

clearly sr[n] = c[n]. Let's now have a look at the above relations in the frequency
domain. To do that, just remember three things:


- for any complex number z it is

  Re{z} = (1/2)(z + z*)
  Im{z} = −(j/2)(z − z*)

  where the star denotes the complex conjugate;

- for a complex exponential, (e^{jω})* = e^{−jω};

- if X(e^{jω}) is the Fourier transform of x[n], then the Fourier transform of x[n] e^{jω0 n} is
  simply X(e^{j(ω−ω0)}) (modulation theorem).
With this it is easy to see that

Sr(e^{jω}) = (1/2)[BI(e^{j(ω−ωc)}) + BI(e^{j(ω+ωc)}) + j BQ(e^{j(ω−ωc)}) − j BQ(e^{j(ω+ωc)})]

Si(e^{jω}) = (1/2)[−j BI(e^{j(ω−ωc)}) + j BI(e^{j(ω+ωc)}) + BQ(e^{j(ω−ωc)}) + BQ(e^{j(ω+ωc)})].

Now, B_{I,Q}(e^{jω}) are the baseband spectra corresponding to the real and imaginary parts
of the symbol sequence ak, so their translates in frequency at ±ωc do not overlap (see
Figure 1.10). In this case, it is easy to verify that we can write

Si(e^{jω}) = H(e^{jω}) Sr(e^{jω})    (2.8)

with

H(e^{jω}) = −j   for 0 < ω < π
H(e^{jω}) =  j   for −π < ω < 0.    (2.9)

The filter H(e^{jω}) is called a Hilbert transformer and its impulse response is

h[n] = (2/(πn)) sin²(πn/2)   for n ≠ 0
h[n] = 0                     for n = 0.    (2.10)
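The impulse response in (2.10) is easy to check numerically; here is a small Python sketch (a sketch, not part of the notes) that evaluates it and shows that it vanishes for even n and equals 2/(πn) for odd n.

```python
import math

def hilbert_ideal(n):
    """Ideal Hilbert transformer impulse response, as in equation (2.10):
    h[n] = 2*sin^2(pi*n/2)/(pi*n) for n != 0, and h[0] = 0."""
    if n == 0:
        return 0.0
    return 2.0 * math.sin(math.pi * n / 2) ** 2 / (math.pi * n)

# h[n] vanishes for even n, equals 2/(pi*n) for odd n, and is antisymmetric:
# hilbert_ideal(2) -> 0.0, hilbert_ideal(1) -> 2/pi, hilbert_ideal(-1) -> -2/pi
```

The antisymmetry h[−n] = −h[n] is what makes a type III FIR approximation the natural choice.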

The Hilbert transformer is an ideal noncausal filter, just like the rect and sinc filters, and so
it is impossible to implement it exactly; however, good FIR approximations can be obtained
using most standard FIR design methods. In particular, since the impulse response is
antisymmetric (h[−n] = −h[n]) and h[0] = 0, the resulting causal approximation ĥ[n] will
be a type III FIR of length 2L + 1, where L is usually on the order of 20 taps. The real
and imaginary parts of s[n] can thus be expressed as

sr[n] = c[n − L]    (2.11)

si[n] = Σ_{m=0}^{2L} ĥ[m] c[n − m]    (2.12)

where the delay of L samples in the real part compensates for the processing delay introduced by the causal FIR approximation. Carrier stripping is therefore accomplished by
multiplying the signal sr[n] + j si[n] obtained above by a locally generated complex sinusoid e^{−jωc n}
to obtain a local sequence b̂[n]:

b̂[n] = (sr[n] + j si[n]) e^{−jωc n}.    (2.13)
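The whole carrier-stripping chain can be sketched in a few lines of Python. This is a toy illustration, not the notes' implementation: the baseband signal is a constant symbol for simplicity, the carrier frequency ωc = π/2 and the filter half-length L = 50 are arbitrary choices, and the Hilbert filter is applied noncausally (centered) to avoid the delay bookkeeping of (2.11)–(2.12).

```python
import math, cmath

OMEGA_C = math.pi / 2    # carrier digital frequency (example value)
B = 0.8 + 0.6j           # constant baseband value (toy choice)
L = 50                   # half-length of the truncated Hilbert filter
N = 400

# Real passband signal c[n] = Re{ B e^{j omega_c n} }
c = [(B * cmath.exp(1j * OMEGA_C * n)).real for n in range(N)]

def h(n):
    # Ideal Hilbert impulse response, truncated to |n| <= L
    return 0.0 if n == 0 else 2.0 * math.sin(math.pi * n / 2) ** 2 / (math.pi * n)

n0 = N // 2
# s_i[n0]: (noncausal) FIR approximation of the Hilbert transform of c
si = sum(h(m) * c[n0 - m] for m in range(-L, L + 1))
# Analytic signal sample, then carrier stripping as in (2.13)
bhat = (c[n0] + 1j * si) * cmath.exp(-1j * OMEGA_C * n0)
# bhat is close to B; truncating the ideal filter leaves a small residual
```

The recovered value differs from B only by the ripple of the truncated Hilbert filter, on the order of a percent here.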

As a final demodulation step, we need to retrieve and decode the original sequence
of complex symbols ai. Remember that, in the modulator, b[n] is the convolution of the
sequence a[n] at the output of the K-times upsampler with the pulse shape g[n], and that
we have designed the pulse shape so that we can recover the values a[n] at times multiples
of K simply by downsampling. The decoded symbol sequence can therefore be obtained
as

âi = b̂[iK].    (2.14)

2.1.2 Noise and Slicing

Although we haven't worked out the details in full, intuitively any noise added to the
signal before demodulation will leak through the demodulation process and will affect
the values of the decoded symbol sequence. Even in the back-to-back configuration
introduced above, numerical errors in the digital demodulation process, together with the
imprecision errors due to our FIR approximations of ideal filters, cause a degradation in
the decoded signal which looks just as if noise had been added. In a real transmission
case, this numerical noise just adds up to the channel noise level. In the end, the received
symbol sequence can be written as

âi = ai + ni    (2.15)

where ni is a sequence of complex noise samples moving the value of the received symbol
away from its starting value. Graphically, for the constellation of Figure 1.7, the received
symbols for different amounts of additive noise (different SNRs) will look as in Figure 2.2.
A simple way to obtain an estimate of the transmitted sequence ai is via hard slicing:


Figure 2.2: Received symbols for increasing noise levels.


simply associate to each âi the alphabet element α ∈ A which is closest to âi. In other
words, the output symbols a_{i,decoded} are defined as

a_{i,decoded} = arg min_{α ∈ A} |âi − α|²;    (2.16)

for the square constellation this becomes simply (applied separately to the real and
imaginary parts)

a_{i,decoded} = 2⌊âi/2⌋ + 1.    (2.17)
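A hard slicer for a square constellation is short enough to sketch in full. This Python version (an illustration under the notes' conventions, with a clamping step added so that values outside the grid map to the nearest corner point) implements the per-axis rule (2.17).

```python
import math

def slice_qam(ahat, M=16):
    """Hard slicer for a square M-QAM constellation with odd-integer
    coordinates: applies 2*floor(x/2)+1 per axis, as in (2.17), and
    clamps the result to the constellation grid."""
    side = int(math.isqrt(M))         # points per axis (e.g. 4 for 16-QAM)
    lo, hi = -(side - 1), side - 1    # extreme coordinates (e.g. -3 .. 3)
    def axis(x):
        v = 2 * math.floor(x / 2) + 1
        return min(max(v, lo), hi)
    return complex(axis(ahat.real), axis(ahat.imag))

# slice_qam(0.7 + 2.4j) -> (1+3j); slice_qam(-4.2 - 0.1j) -> (-3-1j)
```

Without the clamping, a large noise excursion would be decoded to a point outside the alphabet, which the arg-min rule (2.16) never does.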

As we stated previously, errors will occur if the instantaneous noise value is larger (in
magnitude) than half the minimum distance between constellation points.

2.2 Advanced Topics: Synchronization

Let's now abandon the unrealistic case of back-to-back transmission and examine one of
the major problems of modem design: synchronization. Correct demodulation requires
perfect knowledge of two fundamental values, the baud rate and the carrier frequency.
The correct carrier frequency fc (in Hertz) is necessary to convert the passband signal
to a baseband signal via carrier stripping, as we just saw. The baud rate 1/T (or,
equivalently, the symbol period T) is necessary to know where in the baseband signal to
look for the symbol values; as we saw, in digital modems the internal sampling frequency
is an integer multiple of the symbol frequency, fs T = K, and so we need to know the right
baud rate in order to correctly downsample the demodulated signal as in (2.14). The
nominal values of these frequencies are obviously known exactly, since they are part of the
technical specification of a modem; so, in theory, we can just choose a sampling frequency
fs multiple of 1/T and use ωc = 2π(fc/fs), K = fs T. In practice, however, there are
two main factors which complicate things: the imprecision of the sampler's clock and the
propagation delay introduced by the channel. These two problems are interconnected;
remember that the signal to be demodulated is

v[n] = c(n/fs − tp) + η[n],    (2.18)

where tp is the propagation delay. We can always express this delay as an integer number
of symbol intervals plus a fractional delay τ ranging from −T/2 to +T/2, where 1/T is the
baud rate:

tp = DT + τ;    (2.19)

the integer D is called the bulk delay. The reason we can do this is that modems send
appropriate timing signals back and forth before transmitting real data to estimate the
bulk delay. Once we know D, the problem reduces to estimating τ, which is much smaller.
The delay affects both the baseband signal and the carrier: denote by b_τ[n] the baseband
signal b[n] delayed by the noninteger delay τ; the received, sampled, and equalized signal
is therefore

v[n] = Re{b_τ[n] e^{j(ωc n + θ)}},    (2.20)

where θ is the phase offset in the carrier introduced by the propagation delay. In theory
θ = ωc τ, so by estimating θ we could get an estimate for τ as well; in practice this is not
done, since the precision required on an estimate for τ is much greater than that required
for θ. Different circuits with different strategies are used in the two cases, and therefore
we will consider τ and θ independently.
Carrier Recovery

Carrier recovery is the modem functionality by which any phase offset in the received
carrier is estimated and compensated for. A phase offset in the carrier frequency causes
a rotation of the constellation points in the complex plane. This is easily seen if we apply
the carrier stripping phases described above to the signal

c[n] = Re{b[n] e^{j(ωc n + θ)}};    (2.21)

after downsampling we obtain the sequence (neglecting noise)

âi = ai e^{jθ},    (2.22)

where the last term is clearly a rotation factor. Visually, the decoded constellation looks
like in Figure 2.3, where θ = π/20 = 9°. In the same figure, the slicing grid is also plotted:
it is clear that in the rotated constellation, some points are much closer to the decision
boundaries than they should be. A smaller amount of noise is therefore sufficient to cause


Figure 2.3: Effect of a phase offset on the received symbols.


slicing errors. An even worse situation happens when the local carrier frequency is slightly
different from the transmitter's carrier frequency; in this case the phase offset changes over
time and the points in the constellation start to rotate, with an angular speed equal to
the difference between the transmitter's and the receiver's carrier frequencies. In both cases,
data transmission becomes highly unreliable; carrier recovery is therefore a very important
part of a modem design.
The most common technique for QAM carrier recovery over well-behaved channels is
a decision-directed loop; this works when the overall SNR is sufficiently high, which is
the case for telephone channels. Consider the picture in Figure 2.4; the received symbol is
the rotated â, which is sufficiently close to the correct value a to be decoded correctly. In
the z plane, consider the two vectors ā1 and ā2, from the origin to â and a respectively.
Basic linear algebra tells us that the magnitude of their vector product can be expressed
in terms of their components:

|ā1 × ā2| = Re{â} Im{a} − Im{â} Re{a};    (2.23)

or in terms of their magnitudes and their relative angle

|ā1 × ā2| = |ā1| |ā2| sin θ.    (2.24)

We can therefore obtain an estimate for the phase offset by calculating

sin θ = (Re{â} Im{a} − Im{â} Re{a}) / (|ā1| |ā2|)    (2.25)


Figure 2.4: Estimation of the phase offset.


and by assuming that the phase offset is small enough to allow the use of the small-angle
approximation for the sine:

sin θ ≈ θ.    (2.26)

This is the correction phase we must apply to the internal carrier.
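The cross-product estimate (2.25) is a one-liner in code. The Python sketch below is an illustration only; note that the sign of the result depends on the orientation convention chosen for the two vectors, and here we take the offset as positive when the received symbol is rotated counterclockwise with respect to its decision.

```python
import cmath

def phase_offset(ahat, a):
    """Estimate the carrier phase offset from a received symbol ahat and
    its sliced decision a, using the cross-product formula of (2.25).
    Sign convention: positive when ahat is rotated counterclockwise
    relative to a."""
    sin_theta = (a.real * ahat.imag - a.imag * ahat.real) / (abs(a) * abs(ahat))
    # small-angle approximation (2.26): sin(theta) ~ theta
    return sin_theta

a = 3 + 1j
ahat = a * cmath.exp(1j * 0.05)    # symbol rotated by 0.05 rad
# phase_offset(ahat, a) is approximately 0.05
```

Because only the decision a and the received value â are needed, the loop works on live data with no training symbols, which is why it is called decision directed.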


As we mentioned before, due to component tolerances and other effects the local carrier
frequency might be slightly different from the nominal value ωc. This is equivalent to
a phase offset varying slowly over time, so that the offset estimation process must be
performed continuously to lock the internal carrier frequency onto the received signal. To
see how this is done, consider how the sinusoidal carrier is generated digitally: the DSP
software possesses a complex exponential routine which computes y = e^{jw}; the carrier
is generated by initializing the phase variable w to zero and, at each sample interval,
computing y = e^{jw} and incrementing w by the right amount for the next time.
Ideally, at each step, w is incremented by ωc, the carrier's digital frequency, so that the
algorithm looks like this:

Initialization:
    ω = ωc;
    w = 0;

where ω is the local estimate of the carrier digital frequency.


At each step n:
    compute e^{jw};
    w ← w + ω.

If we detect a phase offset, however, we must do two things: first, we must compensate
for it in order to rotate the constellation back in place; secondly, we must take into account
the fact that the phase offset might be due to a carrier frequency mismatch, so we must
correct the estimate of the carrier frequency in the correct way. A positive instantaneous
phase offset means our carrier is too fast, a negative offset that our carrier is too slow.
To compensate for possible drifts, the carrier generation algorithm is modified as follows:

Initialization:
    ω = ωc;
    Δ = 0;
    w = 0;

where Δ is a correction factor and α ≪ 1 a small adaptation gain.

At each step n:
    compute e^{jw};
    with e^{jw}, demodulate and compute the phase offset θ;
    Δ ← Δ − αθ;
    w ← w + ω + Δ − θ;

where the last two lines mean that we assume the local carrier is too fast (or too slow)
and we change its frequency accordingly; and that we compensate for the instantaneous
phase drift.
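This frequency-tracking loop can be simulated end to end. The Python sketch below is illustrative only: the carrier frequencies and the two loop gains are arbitrary choices, and the sign convention takes a positive offset to mean that the local carrier lags the received one (so the loop speeds up on a positive offset). Structurally it is the same second-order loop as above: an integral branch absorbs the frequency mismatch and a proportional branch corrects the instantaneous phase.

```python
import cmath

OMEGA_C = 0.2          # nominal carrier digital frequency (example)
OMEGA_TRUE = 0.2002    # actual received carrier frequency (0.1% off)
KP, KI = 0.1, 0.001    # proportional and integral gains (illustrative)

w = 0.0        # local carrier phase
delta = 0.0    # accumulated frequency correction
for n in range(2000):
    received = cmath.exp(1j * OMEGA_TRUE * n)
    # phase offset between received carrier and local carrier
    theta = cmath.phase(received * cmath.exp(-1j * w))
    delta += KI * theta                  # adjust the frequency estimate
    w += OMEGA_C + delta + KP * theta   # advance phase, compensating drift

# after the transient, the local phase tracks the received carrier
final_error = abs(cmath.phase(cmath.exp(1j * (OMEGA_TRUE * 2000 - w))))
```

Because the loop is second order (phase and frequency corrections), the steady-state phase error against a constant frequency offset goes to zero, not just to a small constant.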

2.2.1 A Digital PLL

Let's now consider what happens if the initial A/D converter of the digital demodulator
exhibits drifts or component tolerances. Consider the simple system shown in Figure 2.5:
an A/D converter at a nominal frequency fs with a continuous-time sinusoidal input x(t)
at frequency f0:

x(t) = sin(2π f0 t);    (2.27)

Figure 2.5: Simple timing recovery system.


this signal is depicted in Figure 2.6 with a solid line. The output of the A/D converter is
the discrete-time signal

x[n] = sin(ω0 n)    (2.28)

with

ω0 = 2π f0/fs.    (2.29)

Assume we design the A/D so that its sampling frequency is fs = f0; if the A/D works as
planned, the digital frequency of x[n] will be exactly ω0 = 2π and therefore

x[n] = sin(2πn) = 0   for all n;    (2.30)

the sampled signal is plotted in Figure 2.6 with circles. Assume however that, due to
component tolerances or temperature drifts, the frequency of the A/D moves away from
its nominal value f0. If fs < f0 it means we are sampling slower than we should, and the
values of x[n] will be the ones marked with a cross in Figure 2.7-(a); conversely, if
fs > f0 then we're sampling too fast and x[n] will take on the values marked with a square
in Figure 2.7-(b). Note that, if the sampling frequency is almost right, the A/D's output
is positive when we're sampling slightly too slow and negative when we're sampling slightly
too fast. We would like to find a system to lock the sampler at the right speed; since we
know the value of the nominal frequency f0 this should be possible, and the idea is to use
the value of the samples to estimate the deviation from the right sampling frequency. The
following derivation closely follows the mechanism implemented for carrier recovery.
Assume we have a mechanism to change the sampler's speed; it is convenient to express
this in terms of the sampling times t[n], the sequence of time instants at which the
samples are taken. With this notation, the sampled signal can be expressed as

x[n] = sin(2π f0 t[n]).    (2.31)


Figure 2.6: Continuous time sine wave and samples at every period.
Figure 2.7: Sine wave sampled (a) too slow; (b) too fast.
The sampler computes each sampling time as

t[n + 1] = t[n] + Θ    (2.32)

where Θ is the current estimate of the right sampling period; a correction term λ,
initially zero, will be used to adjust Θ. Assume that, initially, Θ = 1/f0, the right
value. While this goes on, it will be

x[n] = sin(2π f0 Θ n) = sin(2πn) = 0.    (2.33)

Now imagine that at time n − 1 the sampler's speed starts to move slightly away from
its right value; for example, imagine that Θ starts to increase. At time n we will therefore
have

x[n] = sin(2π f0 (1/f0)(n − 1) + 2π f0 Θ) = sin(2π f0 Θ)    (2.34)


Figure 2.8: Delayed sine wave.


and, since we can safely assume that the drift is very small, we will notice this slowdown
by the fact that x[n] > 0. Call ε the small extra time, so that Θ = 1/f0 + ε. Since
this time is very small (slow drift), we can use the small-angle approximation sin(x) ≈ x
(with x in radians), so that

x[n] = sin(2π f0 (1/f0 + ε)) = sin(2π f0 ε) ≈ 2π f0 ε.    (2.35)

To get back on track we now need to do two things: first, we need to reduce the sampling
period estimate to avoid repeating the mistake at the next sampling instant; second, we
need to compensate the next sampling interval for the delay we have already accumulated
with respect to the correct sampling time. In formulas, the new sampling time will be
computed as:

estimate ε from x[n]:  ε ≈ x[n]/(2π f0);
Θ ← Θ − ε;
t[n + 1] = (t[n] − ε) + Θ;    (2.36)

This update obviously works also if the sampler's speed increases, and is repeated at
each sampling interval: look at x[n], update Θ, compute t[n + 1]. In practice, however, for
the Θ updates we use λ = βx[n], where β ≪ 1 is a damping factor which prevents instabilities
and is a compromise between speed of response of the system and adjustment accuracy.
If β is very small, we will be able to adjust t[n] very precisely, but it will take a longer
time since we move little by little; conversely, if β is large, we will move faster, but the
sampling time will oscillate more around its right value.
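The update rule just described can be simulated directly. In this Python sketch (parameter values are illustrative choices, not from the notes), the sampler starts with both an initial delay and a period estimate 1% off nominal, and the damped update drives the output error and the period error to zero.

```python
import math

F0 = 1.0              # nominal input frequency (Hz, example value)
BETA = 0.05           # damping factor for the period updates (illustrative)
theta = 1.01 / F0     # initial period estimate, 1% off its true value
t = 0.1               # first sampling instant: an initial delay

for _ in range(300):
    x = math.sin(2 * math.pi * F0 * t)      # sampler output (error signal)
    eps = x / (2 * math.pi * F0)            # small-angle delay estimate
    theta -= BETA * x                       # damped period correction
    t = (t - eps) + theta                   # next sampling time, as in (2.36)

# the loop settles: the sampler output goes to zero and theta -> 1/F0
final_x = abs(math.sin(2 * math.pi * F0 * t))
period_err = abs(theta * F0 - 1.0)
```

With this β the transient dies out geometrically; a smaller β would converge more slowly but with smaller residual oscillations, exactly the trade-off described above.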
Figure 2.9: Timing recovery in action: output error (top panel); sampling period error (bottom panel).

This kind of system is a particular realization of a Phase-Locked Loop (PLL), in
the sense that it locks the sampling instants of the A/D to the instants in which the
instantaneous phase of the input sinusoid is zero. The system works in a feedback fashion,
since it brings back an error signal (the downsampler output) to the system's input (the
A/D).
The system also works when it comes to estimating a delay in the input signal, and
we will just see the intuition behind this. A typical case is shown in Figure 2.8 for τ < 0;
this delay will cause the sampler's output to be nonzero, so the system will start
adjusting the sampling period estimate Θ according to (2.36), even though the original
sampling period is correct. By momentarily sampling slower (or faster) than nominal, we
are actually reducing the delay; at some point, however, we will find ourselves drifting past
the true sampling instants because the sampling period has moved away from 1/f0. So the
system will pull Θ back, and its value will oscillate around 1/f0 through several adjustments
until the delay is compensated for and the sampling period settles to the correct value.
Figure 2.9 shows a typical plot of x[n] and Θ − 1/f0 for an initial delay τ = 1/(4f0) and
an initial period estimate off by 1% from its nominal value.


Timing Recovery

By timing recovery we designate the operation by which the modem locks its input sampler
to fs = K/T, K times the baud rate of the incoming signal. This operation is necessary
since, in order to extract the transmitted symbol values, we need to take baseband signal
values at exactly the right instants, i.e. at exact multiples of T. If the input sampler
works at fs = K/T, this is accomplished simply by a K-times downsampler, as we have
seen. If the incoming signal contained a sinusoid at the frequency 1/T, where 1/T is the
baud rate, we could downsample this sinusoid by K, which would move its digital frequency
from 2π/K to 2π, and then we could use the PLL described above to perform this
operation. The fact is, the transmitter does not send an explicit sinusoidal signal at that
frequency, in order not to waste power and bandwidth; so we have to see if, somehow, we
can find such a reference signal hidden in the transmitted data. Consider the received
signal after the carrier has been stripped off:

b[n] = Σ_i ai g[n − iK];    (2.37)

now, if the same symbol was always transmitted, i.e. ai = a for all i, then b[n] would be
periodic with period K samples: b[n + hK] = b[n]. In this case the Fourier transform
of b[n] would contain only spectral lines at multiples of 2π/K; we could then isolate the
fundamental at 2π/K and feed it to the PLL to achieve perfect lock with the baud rate.
The transmitted symbols, however, are not constant. We might be tempted to think that,
at least on average, b[n] might look periodic, but this is not the case. If we take the
expectation of the signal, we have

E[b[n]] = E[ai] Σ_i g[n − iK] = 0    (2.38)

since, as explained in the notes about modulation, the set of symbols is designed to have
zero mean. No luck here, then.
The way around this impasse is found by using a fantastic trick which dates back to
the old days of analog radio receivers: process the signal through a nonlinearity. One of
the most common such nonlinearities is the square magnitude, y(z) = |z|2 ; if we process
b[n] with this nonlinearity, on average it will be (remember g[n] is real):
XX
E[|b[n]|2 ] =
{E[ak ai ]g[n iK]g[n iK]}.
(2.39)
k

2 [k i] and
Since we have assumed that ai is an uncorrelated iid sequence, E[ak ai ] = A
therefore
X
2
E[|b[n]|2 ] = A
(g[n iK])2 .
(2.40)
i


The last term in the above equation is periodic with period K, and this means that, on
average, the squared signal will contain a periodic component at the frequency we need.
By filtering the squared signal through a bandpass filter centered around 2π/K we obtain
a sinusoidal component suitable for tracking by the PLL.
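The appearance of this hidden periodic component can be checked numerically. The Python sketch below is a toy experiment (the pulse shape, K, and trial counts are arbitrary choices): it averages |b[n]|² over many random QPSK symbol sequences and verifies that the result is K-periodic and matches the prediction σ²_A Σ_i g²[n − iK] of (2.40).

```python
import random

random.seed(0)
K = 4
g = [0.1, 0.4, 0.8, 1.0, 0.8, 0.4, 0.1]   # toy pulse shape
NSYM, TRIALS = 24, 4000
NLEN = NSYM * K

def baseband():
    """b[n] = sum_i a_i g[n - iK] for random QPSK symbols a_i = ±1 ± j."""
    a = [complex(random.choice((-1, 1)), random.choice((-1, 1)))
         for _ in range(NSYM)]
    b = [0j] * NLEN
    for i, sym in enumerate(a):
        for m, gm in enumerate(g):
            if i * K + m < NLEN:
                b[i * K + m] += sym * gm
    return b

# Average |b[n]|^2 over many realizations at two instants K samples apart
n0 = NLEN // 2
acc0 = acc1 = 0.0
for _ in range(TRIALS):
    b = baseband()
    acc0 += abs(b[n0]) ** 2
    acc1 += abs(b[n0 + K]) ** 2
mean0, mean1 = acc0 / TRIALS, acc1 / TRIALS

# Theory (2.40): E[|b[n]|^2] = sigma_A^2 * sum of g^2 taps aligned with n
sigma2 = 2.0          # E[|a_i|^2] for the ±1±j alphabet
theory = sigma2 * sum(g[m] ** 2 for m in range(len(g)) if (n0 - m) % K == 0)
```

The two empirical means agree with each other and with the theoretical value, confirming that squaring regenerates a spectral line at 2π/K that a bandpass filter can isolate for the PLL.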


Exercises

(Note: for the Matlab exercises you will find the necessary M-files and data files following the link
in the main web page for the class.)

Exercise 1

Assume the specifications for a given telephone line are fmin = 300 Hz, fmax = 3600 Hz, and
an SNR of at least 28 dB. Design a set of operational parameters for a modem transmitting
on this line (baud rate, carrier frequency, constellation size). How many bits per second
will you be transmitting?

Exercise 2

In these notes we have only considered square constellations, where the points belong to
a regular grid and their number is a perfect even square (M = 4, 16, 36, 64, etc.). In fact,
many other constellation shapes exist, where points are placed in arbitrary arrangements.
One of the reasons for choosing constellations of different shapes is that the energy of
the transmitted signal is proportional to the parameter σ²_A (see equation (1.19)). By
arranging the same number of points in a different manner, we can sometimes reduce
σ²_A, and therefore use a larger amplification gain while keeping the total output power
constant, which in turn lowers the probability of error.
Consider the two 8-point constellations in the following figure. Compute their average
energy σ²_A. What do you notice?

(Matlab) Exercise 3

The M-file modulator.m (which is also printed in the last pages of this section) implements
a digital modulator. The syntax is

Figure 2.10: Two 8-point constellations.


[c, b, a] = modulator(N, M, fb, fc)

where N is the total number of symbols to transmit, M is the constellation size (which must
be a perfect even square), fb is the baud rate 1/T and fc is the carrier frequency. The
returned values are c, the transmitted signal, b, the complex baseband signal, and a, the
complex symbol sequence. Use the program to transmit with the parameters derived in
Section 1.3. Plot the various signals (hint: use plot(a, 'o') and axis equal for the
constellation points) and play around with the parameters.
OK, now that you're familiar with it, write an M-function which demodulates the signal.
To help you in the task, the M-file demodulator.m sets up the necessary filters for you.
Once you have that, you might consider playing around by adding noise to the received
signal and see how that affects the decoded symbols. Also, what happens if you modulate
with one carrier frequency and try to demodulate with a slightly higher frequency?


modulator.m

function [c, b, a] = modulator(N, M, fb, fc)
%
% N  : number of symbols to be transmitted
% M  : size of the constellation (must be a square)
% fb : baud rate 1/T (Hertz)
% fc : carrier frequency (Hertz)

%%%%%%%%%%%%%%%%%%%% Internal parameters: %%%%%%%%%%%%%%%%%%%%%%%

% Upsampling factor
K = 3;
% Internal sampling frequency:
fs = fb * K;
% Raised cosine filter, beta = 0.1, 19 taps
% NOTE: the raised cosine is designed for K=3
g = [ 3.5801e-17  9.6667e-02  1.1224e-01 -3.7544e-17 -1.6114e-01 ...
     -2.0333e-01  3.8619e-17  4.1178e-01  8.2613e-01  1.0000e+00 ...
      8.2613e-01  4.1178e-01  3.8619e-17 -2.0333e-01 -1.6114e-01 ...
     -3.7544e-17  1.1224e-01  9.6667e-02  3.5801e-17];

%%%%%%%%%%%%%%%%%%%% Parameters check %%%%%%%%%%%%%%%%%%%%%%%%%%%%

if (fc < fb/2)
    error('Carrier frequency must be at least half the baud rate');
end
if (sqrt(M)/2 ~= round(sqrt(M)/2))
    error('M must be a perfect even square');
end

%%%%%%%%%%%%%%%%%%%% Create random data: %%%%%%%%%%%%%%%%%%%%%%

% Symbol values along the real and imaginary axes
symbolSet = -sqrt(M)+1:2:sqrt(M)-1;
% Random indices between 1 and sqrt(M)
i = floor(sqrt(M) * rand(N,1)) + 1;
q = floor(sqrt(M) * rand(N,1)) + 1;
% Sequence of complex symbols
a = symbolSet(i) + j * symbolSet(q);

%%%%%%%%%%%%%%%%%%%%%%%% Modulation: %%%%%%%%%%%%%%%%%%%%%%%%%%

% Upsample symbol sequence
aUp = kron(a, [1 zeros(1, K-1)]);
% Zero pad for filtering transients
aUp = [zeros(1, 20) aUp zeros(1, 20)];
% Pulse shaping
b = filter(g, 1, aUp);
% Modulation
bMod = b .* exp(j * 2 * pi * (fc/fs) * (0:length(b)-1));
% Transmission
c = real(bMod);


demodulator.m

function a = demodulator(c, fb, fc, fs)
%
% c  : transmitted signal
% fb : baud rate 1/T (Hertz)
% fc : carrier frequency (Hertz)
% fs : sampling frequency (Hertz)

%%%%%%%%%%%%%%%%%%%% Internal parameters: %%%%%%%%%%%%%%%%%%%%%%%

% Downsampling factor
K = fs/fb;
% Carrier digital frequency
omegac = 2*pi * (fc/fs);
% Hilbert filter, 2L+1 taps
L = 19;
% the Hilbert filter is antisymmetric, so we need only the first half
h = [-1.2787e-03  3.1722e-07 -3.2636e-03  9.4594e-07 -7.1031e-03 ...
      2.3216e-06 -1.3551e-02  2.7060e-06 -2.3770e-02  3.4028e-06 ...
     -3.9568e-02  3.6339e-06 -6.4415e-02  3.2654e-06 -1.0703e-01 ...
      2.6130e-06 -1.9945e-01  1.5714e-06 -6.3227e-01];
% Complete impulse response
h = [h 0 -fliplr(h)];

%%%%%%%%%%%%%%%%%%%%%%%% Demodulation: %%%%%%%%%%%%%%%%%%%%%%%%%%

% ... to be completed ...
