
Appendix A: Error Probability in the Transmission of Digital Signals
The two main problems in the transmission of digital data signals are the effects of channel
noise and inter-symbol interference (ISI) [2, 4]. In this appendix the effect of the channel
noise, assumed to be additive white Gaussian noise (AWGN), is studied, in the absence of
inter-symbol interference.
A.1 Digital Signalling
A.1.1 Pulse Amplitude Modulated Digital Signals
A digital signal can be described as a sequence of pulses that are amplitude modulated. The
corresponding signal is of the form
x(t) = \sum_{k=-\infty}^{+\infty} a_k \, p(t - kT)    (1)
where the coefficient a_k is the kth symbol of the sequence, such that a_k takes one of the M possible values of the information to be transmitted, from a discrete alphabet of symbols. The pulse p(t) is the basic signal to be transmitted, which is multiplied by a_k to identify the different signals that make up the transmission.
The signal a_k p(t - kT) is the kth symbol, transmitted in the kth time interval, where T is the duration of such a time interval. Thus, the transmission consists of a sequence of amplitude-modulated signals that are orthogonal in the time domain.
As seen in Figure A.1, the data sequence a_k = A, 0, A, A, 0, A, corresponding to digital information in binary format (101101), is a set of coefficients that multiply a normalized basic signal or pulse p(t - kT). If these coefficients are selected from an alphabet {0, A}, the digital transmission is said to have a unipolar format.
Figure A.1 A digital signal
Figure A.2 A normalized pulse at time interval k multiplied by a given coefficient a_k
If the coefficients are selected from an alphabet {-A/2, A/2}, the digital transmission is said to have a polar format. In this latter case, the sequence of this example would be a_k = A/2, -A/2, A/2, A/2, -A/2, A/2.
Index k adopts integer values from minus to plus infinity. As seen in Figure A.2, the basic signal p(t) is normalized and of fixed shape, centred at the corresponding time interval k, and multiplied by a given coefficient that contains the information to be transmitted. This basic normalized pulse is such that
p(t) = \begin{cases} 1 & t = 0 \\ 0 & t = \pm T, \pm 2T, \ldots \end{cases}    (2)
The normalized pulse is centred at the corresponding time interval k, so that its sample value at the centre of that time interval is equal to 1, whereas its samples obtained at time instants different from t = kT are equal to 0. This condition does not necessarily imply that the pulse is time limited. Samples are taken synchronously at time instants t = kT, where k = 0, ±1, ±2, . . . , such that for a particular time instant t = k_1 T,

x(k_1 T) = \sum_k a_k \, p(k_1 T - kT) = a_{k_1}    (3)

since p(k_1 T - kT) = 0 for every k except k = k_1, where it equals 1.
Conditions (2) describe transmission without ISI, and are satisfied by many signal pulse shapes. The classic rectangular pulse satisfies condition (2) if its duration is less than or equal to T. The pulse sinc(t) also satisfies the orthogonality condition described in the time domain by equation (2), but it is a pulse that is unlimited in time. Figure A.3 shows the transmission of the binary information sequence (11001), using sinc(t) pulses modulated in a polar format. At each sampling time instant t = kT, the pulse being sampled has an amplitude different from 0, while all the other pulses are equal to 0.
Figure A.3 A digital transmission using sinc(t) pulses modulated in polar format
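The orthogonality condition (2) and the sampling relation (3) can be checked numerically. The following Python sketch (illustrative only: the amplitude, oversampling factor and use of numpy are choices made here, not taken from the text) builds a polar-format signal from sinc pulses, as in Figure A.3, and verifies that sampling at t = kT returns the transmitted coefficients.

```python
import numpy as np

A = 2.0                  # amplitude spacing, so the polar symbols are +/- A/2 (example value)
T = 1.0                  # symbol interval
bits = [1, 1, 0, 0, 1]   # the sequence (11001) of Figure A.3
a = np.array([A / 2 if b else -A / 2 for b in bits])   # polar-format coefficients a_k

# Dense time axis and the PAM waveform x(t) = sum_k a_k p(t - kT),
# with p(t) = sinc(t/T); np.sinc(x) is sin(pi*x)/(pi*x), zero at every nonzero integer.
t = np.arange(-4 * T, (len(bits) + 4) * T, T / 100)
x = sum(ak * np.sinc((t - k * T) / T) for k, ak in enumerate(a))   # waveform of the kind shown in Figure A.3

# Sampling at t = kT returns a_k exactly, since all the other pulses are zero there.
samples = np.array([sum(aj * np.sinc((k * T - j * T) / T) for j, aj in enumerate(a))
                    for k in range(len(bits))])
print(np.allclose(samples, a))   # True: no inter-symbol interference at the sampling instants
```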
Each pulse occurs in a time interval of duration T. The inverse of this duration is the symbol
rate of the transmission, since it is the number of symbols that are transmitted in a unit of time
(usually a second). The symbol rate r is then equal to
r = 1/T (symbols per second) (4)
which is measured in symbols per second. When the discrete alphabet used in the transmission contains only two symbols, M = 2, the transmission is binary, and the corresponding symbol rate r = r_b is the binary signalling rate

r_b = 1/T_b    (bits per second)    (5)

where T = T_b is the time duration of each bit. The binary signalling rate is measured in bits per second (bps).
A.2 Bit Error Rate
Figure A.4 shows the basic structure of a binary receiver.
Figure A.4 A binary receiver: the received signal (x(t) plus AWGN of power spectral density N_0/2) is low-pass filtered (H(f)) to give y(t), sampled and held at synchronized instants to give y(t_k), and compared with the threshold U to produce x_d(t)
The signal x(t) is a digital signal \sum_k a_k p(t - kT), that is, an amplitude-modulated pulse signal. This signal is affected by AWGN in the channel and is then input to the receiver. The first block in this receiver is a low-pass filter that eliminates part of the input noise without producing ISI, giving the signal y(t). The receiver takes synchronized samples of this signal and, after the sample-and-hold operation, generates a random variable of the form

y(t_k) = a_k + n(t_k)    (6)

The sampled values y(t_k) constitute a continuous random variable Y, and the noise samples n(t_k), taken from a random signal n(t), form a random variable n.
The lowest complexity decision rule for deciding the received binary value is the so-called hard decision, which consists only of comparing the sampled value y(t_k) with a threshold U, such that if y(t_k) > U the receiver considers that the transmitted bit is a 1, and if y(t_k) < U the receiver considers that the transmitted bit is a 0. In this way the received sampled signal y(t_k) is converted into a signal x_hd(t), basically of the same kind as that expressed in equation (1): an apparently noise-free signal, but possibly containing some errors with respect to the original transmitted signal.
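A minimal sketch of this hard-decision rule (the amplitude A, noise level sigma and threshold U = A/2 below are arbitrary example values, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

A, sigma = 1.0, 0.2            # unipolar amplitudes {0, A} and noise standard deviation (example values)
U = A / 2                      # decision threshold

bits = rng.integers(0, 2, size=20)                       # transmitted bits
y = A * bits + rng.normal(0.0, sigma, size=bits.size)    # sampled values y(t_k) = a_k + n(t_k), equation (6)

decisions = (y > U).astype(int)                          # hard decision against the threshold U
print("bit errors:", int(np.count_nonzero(decisions != bits)))
```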
The probability density function of the random variable Y is related to the noise and to the conditional probability of the transmitted symbols. The following hypotheses are relevant:

H_0 is the hypothesis that a 0 was transmitted: a_k = 0, Y = n
H_1 is the hypothesis that a 1 was transmitted: a_k = A, Y = A + n

The probability density function of the random variable Y conditioned on the event H_0 is given by

p_Y(y/H_0) = p_N(y)    (7)

where p_N(y) is the Gaussian probability density function.
For hypothesis H_1,

p_Y(y/H_1) = p_N(y - A)    (8)

The probability density function in this case is shifted to the value n = y - A. Thus, the probability density function for the noisy signal is the probability density function of the noise-free discrete signal, 0 or A (unipolar format), added to the probability density function of the noise, p_N(n). Figure A.5 shows the reception of a given digital signal performed using hard decision.
The probability density function for each signal is the Gaussian probability density function
centred at the value of the amplitude that is transmitted.
Figure A.6 shows the shaded areas under each probability density function that correspond to the probability of error associated with each hypothesis. Thus, the receiver assumes that if Y < U, hypothesis H_0 has occurred, and if Y > U, hypothesis H_1 has occurred.
Figure A.5 Reception of a digital signal
Figure A.6 Bit error rate calculation
Error probabilities associated with each hypothesis are described in Figure A.6, and are equal to

P_{e0} = P(Y > U/H_0) = \int_{U}^{\infty} p_Y(y/H_0) \, dy    (9)

P_{e1} = P(Y < U/H_1) = \int_{-\infty}^{U} p_Y(y/H_1) \, dy    (10)
The threshold value U should be conveniently determined. A threshold value U close to the
amplitude 0 reduces the error probability associated with the symbol 1, but strongly increases
the error probability associated with the symbol 0, and vice versa. The error probability of
the whole transmission is an average over these two error probabilities, and its calculation can
lead to a proper determination of the value of the threshold U:
P_e = P_0 P_{e0} + P_1 P_{e1}    (11)

where P_0 = P(H_0) and P_1 = P(H_1).
P_0 and P_1 are the source symbol probabilities; that is, the probabilities of transmitting a symbol 0 or a 1, respectively. The average error probability is the mean value of the errors in the transmission, taking into account the probability of occurrence of each symbol.
The derivative of the average error probability with respect to the threshold U is set equal to zero in order to determine the optimal value of the threshold:

dP_e/dU = 0    (12)

This operation leads to the following expression:

P_0 \, p_Y(U_opt/H_0) = P_1 \, p_Y(U_opt/H_1)    (13)
If the symbols 0 and 1 of the transmission are equally likely,

P_0 = P_1 = \frac{1}{2}    (14)

then

P_e = \frac{1}{2}(P_{e0} + P_{e1})    (15)
and the optimal value of the threshold then satisfies

p_Y(U_opt/H_0) = p_Y(U_opt/H_1)    (16)

As seems reasonable, the optimal value of the threshold U is set in the middle of the two amplitudes, U_opt = A/2, if the symbol source probabilities are equal; that is, if the symbols are equally likely (see Figure A.6).
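This can be confirmed numerically. The sketch below (which anticipates the Gaussian noise model of equation (17) and uses arbitrary example values for A and sigma) evaluates the average error probability of equation (11) over a grid of thresholds and locates its minimum.

```python
import numpy as np
from scipy.stats import norm

A, sigma = 1.0, 0.3      # example amplitude and noise standard deviation
P0 = P1 = 0.5            # equally likely symbols

U = np.linspace(0.0, A, 2001)              # candidate thresholds
Pe0 = norm.sf(U, loc=0.0, scale=sigma)     # P(Y > U / H0), equation (9)
Pe1 = norm.cdf(U, loc=A, scale=sigma)      # P(Y < U / H1), equation (10)
Pe = P0 * Pe0 + P1 * Pe1                   # average error probability, equation (11)

print(U[np.argmin(Pe)])                    # ~0.5 = A/2, as predicted by equation (16)
```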
The Gaussian probability density function with zero mean value and variance \sigma^2 characterizes the error probability of the symbols involved when they are transmitted over the AWGN channel. This function is of the form

p_N(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-y^2/2\sigma^2}    (17)
In general, this probability density function is shifted to a mean value m and has a variance \sigma^2, such that

p_N(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-(y-m)^2/2\sigma^2}    (18)
Figure A.7 Normalized Gaussian probability density function Q(k)
The probability that a given value of the random variable Y is larger than a value m + k\sigma is a function of the number k, and it is given by

P(Y > m + k\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{m+k\sigma}^{\infty} e^{-(y-m)^2/2\sigma^2} \, dy    (19)
These calculations are simplified by using the normalized Gaussian probability density function, also known as the function Q(k) (Figure A.7):

Q(k) = \frac{1}{\sqrt{2\pi}} \int_{k}^{\infty} e^{-\lambda^2/2} \, d\lambda    (20)

obtained by putting

\lambda = \frac{y - m}{\sigma}    (21)
This normalized function can be used to calculate the error probabilities of the digital transmission described in equations (9) and (10).
P_{e0} = \int_{U}^{\infty} p_N(y) \, dy = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{U}^{\infty} e^{-y^2/2\sigma^2} \, dy = Q(U/\sigma)    (22)
and

P_{e1} = \int_{-\infty}^{U} p_N(y - A) \, dy = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{U} e^{-(y-A)^2/2\sigma^2} \, dy = Q((A - U)/\sigma)    (23)
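In practice Q(k) is evaluated through the complementary error function, Q(k) = (1/2) erfc(k/sqrt(2)). A minimal sketch of equations (22) and (23), with example values for A and sigma and the threshold placed at U = A/2:

```python
import math

def Q(k):
    """Gaussian tail function of equation (20), Q(k) = 0.5*erfc(k/sqrt(2))."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

A, sigma = 1.0, 0.3       # example values
U = A / 2                 # threshold placed midway between the two amplitudes

Pe0 = Q(U / sigma)        # equation (22)
Pe1 = Q((A - U) / sigma)  # equation (23)
print(Pe0, Pe1)           # equal when U = A/2
```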
If U = U_opt, the curves intersect at the middle point U_opt = A/2.
In this case these error probabilities are equal to

P_{e0} = P_{e1} = Q\left(\frac{A}{2\sigma}\right)    (24)

P_e = \frac{1}{2}(P_{e0} + P_{e1}) = Q\left(\frac{A}{2\sigma}\right)

This is the minimum value of the average error probability for the transmission of two equally likely symbols over the AWGN channel. As seen in the above expressions, the term A/2\sigma (or equivalently its squared value) defines the magnitude of the number of errors in the transmission, that is, the error probability or bit error rate of the transmission.
The result is the same for transmission using the polar format (a_k = ±A/2), if the symbol amplitudes remain the same distance A apart.
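A quick Monte Carlo check of equation (24) is straightforward. In the sketch below (arbitrary example parameters, not taken from the text), a unipolar transmission over AWGN with U = A/2 is simulated and the measured bit error rate is compared with Q(A/2sigma).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

A, sigma, n_bits = 1.0, 0.25, 1_000_000              # example values
bits = rng.integers(0, 2, size=n_bits)
y = A * bits + rng.normal(0.0, sigma, size=n_bits)   # unipolar format, y(t_k) = a_k + n(t_k)

ber_sim = np.mean((y > A / 2) != bits)               # hard decision with U = A/2
ber_theory = norm.sf(A / (2 * sigma))                # Q(A/(2*sigma)), equation (24)
print(ber_sim, ber_theory)                           # both close to 2.3e-2 for these values
```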
The above expressions for the error probability can be generalized for the transmission of M symbols taken from a discrete source, and they can also be described in terms of the signal-to-noise ratio. The power associated with the transmission of the signal described in equation (1) is useful for this latter purpose. Let us take a sufficiently long time interval T_0, such that T_0 = NT, with N >> 1. The amplitude-modulated pulse signal uses the normalized pulse

p(t) = \begin{cases} 1 & |t| < \tau/2 \\ 0 & |t| > \tau/2 \end{cases}    (25)
where \tau \le T. Then the power associated with this signal is equal to

S_R = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \Big[ \sum_k a_k p(t - kT) \Big]^2 dt = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \sum_k a_k^2 p^2(t - kT) \, dt

    = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \sum_{k=-N/2}^{N/2} a_k^2 p^2(t - kT) \, dt

S_R = \sum_k \frac{1}{NT} \int_{-T/2}^{T/2} a_k^2 p^2(t) \, dt = \frac{N_0}{NT} \int_{-T/2}^{T/2} a_0^2 p^2(t) \, dt + \frac{N_1}{NT} \int_{-T/2}^{T/2} a_1^2 p^2(t) \, dt

S_R = P_0 \frac{1}{T} \int_{-\tau/2}^{\tau/2} a_0^2 p^2(t) \, dt + P_1 \frac{1}{T} \int_{-\tau/2}^{\tau/2} a_1^2 p^2(t) \, dt    (26)

where N_0 and N_1 here denote the numbers of 0s and 1s transmitted during T_0, so that N_0/N and N_1/N tend to the symbol probabilities P_0 and P_1 for large N.
The duration \tau of the pulse can be equal to the whole time interval, \tau = T = T_b, in which case the format is said to be non-return-to-zero (NRZ), or it can be shorter than the whole time interval, \tau < T_b, in which case the format is said to be return-to-zero (RZ). For the NRZ format,

S_R = A^2/2    unipolar NRZ
S_R = A^2/4    polar NRZ

A = \begin{cases} \sqrt{2 S_R} & \text{unipolar} \\ \sqrt{4 S_R} & \text{polar} \end{cases}    (27)
Figure A.8 Signalling in the NRZ unipolar format
If \sigma^2 is the noise power N_R at the output of the receiver filter, then

\left(\frac{A}{2\sigma}\right)^2 = \frac{A^2}{4 N_R} = \begin{cases} (1/2)(S/N)_R & \text{unipolar} \\ (S/N)_R & \text{polar} \end{cases}    (28)
Thus, the unipolar format needs twice the signal-to-noise ratio to achieve the same BER performance as the polar format.
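Equation (28) quantifies this penalty as a factor of 2, that is, about 3 dB. The short calculation below (a sketch with an arbitrary target error probability) finds the (S/N)_R each format needs to reach the same BER.

```python
import math
from scipy.stats import norm

Pe_target = 1e-5              # example target bit error rate
k = norm.isf(Pe_target)       # k such that Q(k) = Pe_target

snr_polar = k**2              # polar:    (A/2sigma)^2 = (S/N)_R          (equation (28))
snr_unipolar = 2 * k**2       # unipolar: (A/2sigma)^2 = (1/2)(S/N)_R, so (S/N)_R = 2*k^2

print(10 * math.log10(snr_unipolar / snr_polar))   # 3.01 dB extra SNR needed by the unipolar format
```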
The error probability was determined as a function of the parameter A/2\sigma. However, a more convenient way of describing this performance is by means of the so-called average bit energy-to-noise power spectral density ratio E_b/N_0. This new parameter requires the following definitions:

E_b = \frac{S_R}{r_b}    average bit energy    (29)

\frac{E_b}{N_0} = \frac{S_R}{N_0 r_b}    average bit energy-to-noise power spectral density ratio    (30)
The average bit energy of a sequence of symbols such as those described by the digital signal (1) is calculated as

E_b = E\left\{ a_k^2 \int_{-\infty}^{\infty} p^2(t - kT) \, dt \right\} = E\left\{ a_k^2 \int_{-\infty}^{\infty} p^2(t) \, dt \right\} = \overline{a_k^2} \int_{-\infty}^{\infty} p^2(t) \, dt    (31)
The above parameters are calculated for the unipolar NRZ format. In this format a 1 is usually transmitted as a rectangular pulse of amplitude A, and a 0 is transmitted with zero amplitude, as in Figure A.8.
The average bit energy E_b is equal to

E_1 = \int_{0}^{T_b} s_1^2(t) \, dt = A^2 T_b

E_0 = \int_{0}^{T_b} s_0^2(t) \, dt = 0

E_b = P_0 E_0 + P_1 E_1 = \frac{1}{2}(E_0 + E_1) = \frac{A^2 T_b}{2}    (32)
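These bit energies follow directly from the rectangular pulses of Figure A.8; the short numerical check below (with example values for A and T_b) reproduces equation (32).

```python
import numpy as np

A, Tb = 1.0, 1e-3                 # example amplitude and bit duration
N = 10_000
dt = Tb / N

s1 = np.full(N, A)                # a '1': rectangular pulse of amplitude A over (0, Tb)
s0 = np.zeros(N)                  # a '0': zero amplitude

E1 = np.sum(s1**2) * dt           # = A^2 * Tb
E0 = np.sum(s0**2) * dt           # = 0
Eb = 0.5 * (E0 + E1)              # equally likely symbols, equation (32)
print(Eb, A**2 * Tb / 2)          # both 5e-4
```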
Figure A.9 Signalling in the NRZ polar format
Since the transmission is over the AWGN channel of bandwidth B, and for the maximum possible value of the symbol or bit rate, r_b = 2B [14], the input noise is equal to

N_R = \sigma^2 = N_0 B = \frac{N_0 r_b}{2}    (33)
This is the minimum amount of noise at the input of the receiver if a matched filter is used [14]. The quotient (A/2\sigma)^2 can now be expressed as

\left(\frac{A}{2\sigma}\right)^2 = \frac{A^2}{4\sigma^2} = \frac{2 E_b r_b}{4 N_0 r_b / 2} = \frac{E_b}{N_0}    (34)
In the case of the NRZ polar format, a 1 is usually transmitted as a rectangular pulse of amplitude A/2 and a 0 is transmitted as a rectangular pulse of amplitude -A/2 (Figure A.9). The average bit energy E_b is then

E_1 = \int_{0}^{T_b} s_1^2(t) \, dt = \frac{A^2 T_b}{4}

E_0 = \int_{0}^{T_b} s_0^2(t) \, dt = \frac{A^2 T_b}{4}

E_b = P_0 E_0 + P_1 E_1 = \frac{1}{2}(E_0 + E_1) = \frac{A^2 T_b}{4}    (35)
and so

\left(\frac{A}{2\sigma}\right)^2 = \frac{A^2}{4\sigma^2} = \frac{4 E_b r_b}{4 N_0 r_b / 2} = \frac{2 E_b}{N_0}    (36)
It is again seen that, for a given value of E_b/N_0, the polar format has twice the value of (A/2\sigma)^2 compared with the unipolar format:

\left(\frac{A}{2\sigma}\right)^2 = \begin{cases} E_b/N_0 & \text{unipolar} \\ 2E_b/N_0 & \text{polar} \end{cases}    (37)
Now, expressing the error probabilities of the two formats in terms of the parameter E_b/N_0, we obtain

P_e = \begin{cases} Q\left(\sqrt{E_b/N_0}\right) & \text{unipolar} \\ Q\left(\sqrt{2E_b/N_0}\right) & \text{polar} \end{cases}    (38)

This is the minimum value of the error probability, and it is obtained when the receiver uses the matched filter. Any other filter will result in a higher bit error rate than that expressed in (38). The matched filter is optimum in the sense of maximizing the signal-to-noise ratio for the reception of a given pulse shape, transmitted over a given channel transfer function and affected by a given noise probability density function.
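The two expressions in equation (38) are easy to tabulate; the sketch below (an illustrative calculation over a few example E_b/N_0 values) shows the roughly 3 dB advantage of the polar NRZ format over the unipolar one.

```python
import numpy as np
from scipy.stats import norm

EbN0_dB = np.arange(0, 13, 2)               # example range of E_b/N_0 values
EbN0 = 10.0 ** (EbN0_dB / 10.0)

Pe_unipolar = norm.sf(np.sqrt(EbN0))        # Q(sqrt(Eb/N0)),   equation (38)
Pe_polar = norm.sf(np.sqrt(2.0 * EbN0))     # Q(sqrt(2*Eb/N0)), equation (38)

for db, pu, pp in zip(EbN0_dB, Pe_unipolar, Pe_polar):
    print(f"Eb/N0 = {db:2d} dB   unipolar Pe = {pu:.2e}   polar Pe = {pp:.2e}")
```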
Bibliography
[1] Carlson, A. B., Communication Systems: An Introduction to Signals and Noise in Electrical
Communication, 3rd Edition, McGraw-Hill, New York, 1986.
[2] Sklar, B., Digital Communications: Fundamentals and Applications, Prentice Hall,
Englewood Cliffs, New Jersey, 1988.
[3] Couch, L. W., Digital and Analog Communication Systems, Macmillan, New York, 1996.
[4] Proakis, J. G. and Salehi, M., Communication Systems Engineering, Prentice Hall,
Englewood Cliffs, New Jersey, 1994.
