Bit Error Rate

The document discusses the concept of Bit Error Rate (BER), which is the ratio of bits received in error to the total bits transferred, and how it can be estimated using the probability of bit errors due to noise. It explains the effects of Intersymbol Interference (ISI) on BER, the importance of choosing an optimal digitization threshold to minimize errors, and the impact of Gaussian noise on signal reception. Additionally, it highlights the relationship between signal-to-noise ratio (SNR) and BER, emphasizing that increasing ISI leads to a higher BER.


Bit Error Rate

The bit error rate (BER), or perhaps more appropriately the bit
error ratio, is the number of bits received in error divided by the
total number of bits transferred. We can estimate the BER by
calculating the probability that a bit will be incorrectly received
due to noise.

6.02 Spring 2011, Lecture #7
• ISI and BER
• Choosing Vth to minimize BER

Using our normal signaling strategy (0V for “0”, 1V for “1”), on
a noise-free channel with no ISI, the samples at the receiver
are either 0V or 1V. Assuming that 0’s and 1’s are equally
probable in the transmit stream, the number of 0V samples is
approximately the same as the number of 1V samples. So the
mean and power of the noise-free received signal are

    μynf = (1/N) Σn=1..N ynf[n] = (1/N)(N/2) = 1/2

    Pynf = (1/N) Σn=1..N (ynf[n] − 1/2)² = (1/N) Σn=1..N (1/2)² = (1/N)(N/4) = 1/4
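As an illustrative sanity check (ours, not from the lecture), a short simulation estimates the mean and power of a noise-free 0V/1V sample stream with equally likely bits:

```python
import random

random.seed(2)  # reproducible run
N = 100_000

# Noise-free received samples: 0V or 1V, equally likely.
y_nf = [random.choice((0.0, 1.0)) for _ in range(N)]

mean = sum(y_nf) / N                             # expect about 1/2
power = sum((y - mean) ** 2 for y in y_nf) / N   # expect about 1/4

print(round(mean, 2), round(power, 2))
```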

p(bit error)

Now assume the channel has Gaussian noise with μ=0 and variance σ², and
assume a digitization threshold of 0.5V. We can calculate the probability
that noise[k] is large enough that y[k] = ynf[k] + noise[k] is received
incorrectly. Writing Φμ,σ for the CDF of a Gaussian with mean μ and standard
deviation σ (and Φ for the standard normal CDF):

    p(error | transmitted “0”) = 1 − Φ0,σ(0.5) = Φ0,σ(−0.5) = Φ((−0.5 − 0)/σ) = Φ(−0.5/σ)

    p(error | transmitted “1”) = Φ1,σ(0.5) = Φ((0.5 − 1)/σ) = Φ(−0.5/σ)

[Figure: plots of the noise-free voltages 0V and 1V plus Gaussian noise;
the tails that cross the 0.5V threshold are the error probabilities.]

    p(bit error) = p(transmit “0”)·p(error | transmitted “0”)
                 + p(transmit “1”)·p(error | transmitted “1”)
                 = 0.5·Φ(−0.5/σ) + 0.5·Φ(−0.5/σ)
                 = Φ(−0.5/σ)

BER (no ISI) vs. SNR

We calculated the power of the noise-free signal to be 0.25, and the power
of the Gaussian noise is its variance σ², so

    SNR (dB) = 10·log10(Psignal / Pnoise) = 10·log10(0.25 / σ²)

Given an SNR, we can use the formula above to compute σ² and then plug that
into the formula on the previous slide to compute p(bit error) = BER.
[Figure: the resulting BER plotted for various SNR values.]
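As a quick illustration, the SNR-to-BER chain above can be sketched in Python (our own sketch; `Phi` and `ber_no_isi` are names we introduce, built on the standard library's `erfc`):

```python
import math

def Phi(x):
    """CDF of the standard normal distribution."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def ber_no_isi(snr_db):
    """BER for 0V/1V signaling, threshold 0.5V, no ISI.
    Signal power is 0.25, so sigma^2 = 0.25 / 10**(SNR_dB/10)."""
    sigma = math.sqrt(0.25 / 10 ** (snr_db / 10))
    return Phi(-0.5 / sigma)

for snr_db in (0, 6, 12):
    print(snr_db, ber_no_isi(snr_db))
```

Each 6 dB of additional SNR halves σ, so the BER falls off steeply.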
Intersymbol Interference and BER

Consider transmitting a digital signal at 3 samples/bit over a channel
whose h[n] is shown on the left below. The figure on the right shows that
at the end of transmitting each bit, the voltage y[n] corresponding to the
last sample in the bit will have one of 4 values (y[5] = 0.0V, 0.3V, 0.7V,
or 1.0V) and depends only on the current bit and the previous bit.

[Figure: the channel’s unit-sample response h[n] on the left; on the right,
received waveforms whose last sample per bit takes one of the four values.]

Test Sequence to Generate Eye Diagram

If we want to explore every possible transition over the channel, we’ll
need to consider transitions that start at each of the four voltages from
the previous slide, followed by the transmission of a “0” and a “1”, i.e.,
all patterns of 3 bits.

The Eight Cases

The first two bits determine the starting voltage; the third bit is the
test bit. The plots show the response to the test bit. All bits are
transmitted at 3 samples/bit. [Figure: the eight responses, one per 3-bit
pattern 000 through 111.]

Plot the Eye Diagram

To make an eye diagram, overlay the eight plots in a single diagram. We can
label the plot with the bit sequence that generated each line (000, 001,
…, 111). The widest part of the eye comes at the first sample in each bit.
Using the convolution sum we can compute the width of the eye at that
point. For the sequence 110:

    y[n] = 0.2·0 + 0.2·1 + 0.3·1 + 0.3·1 = 0.8V

and for the sequence 001:

    y[n] = 0.2·1 + 0.2·0 + 0.3·0 + 0.3·0 = 0.2V

so the width of the eye = 0.8 − 0.2 = 0.6V.
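The eye-diagram arithmetic above can be reproduced with a small convolution script. The taps h = [0.2, 0.2, 0.3, 0.3] are inferred from the coefficients in the slide's sums rather than stated explicitly, so treat them as an assumption:

```python
h = [0.2, 0.2, 0.3, 0.3]  # channel taps inferred from the slide (assumption)
S = 3                     # samples per bit

def conv(x, h):
    """Plain convolution sum: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):
                y[n] += hk * x[n - k]
    return y

levels = set()
for bits in range(8):                              # all 3-bit patterns 000..111
    b = [(bits >> i) & 1 for i in (2, 1, 0)]       # oldest bit first
    x = [float(v) for v in b for _ in range(S)]    # oversample: 3 samples/bit
    y = conv(x, h)
    levels.add(round(y[2 * S], 1))                 # first sample of the third bit

print(sorted(levels))
```

At the first sample of each bit this yields the four levels 0.0, 0.2, 0.8 and 1.0V, hence an eye width of 0.6V.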
BER and ISI

From the diagram on the previous slide, if we sample at the widest point in
the eye, the noise-free signal will produce one of four possible samples:

1. 1.0V if the last two bits are “11”
2. 0.8V if the last two bits are “10”
3. 0.2V if the last two bits are “01”
4. 0.0V if the last two bits are “00”

Since all the sequences are equally likely, the probability of observing a
particular voltage is 0.25.

Let’s repeat the calculation of p(bit error), this time on a channel with
ISI, assuming Gaussian noise with a variance of σ² (from now on we’ll
assume that Gaussian noise has a mean of 0). Again, we’ll use a
digitization threshold of 0.5V.

p(bit error) with ISI

    p(error | 11) = Φ((0.5 − 1.0)/σ) = Φ(−0.5/σ)
    p(error | 10) = Φ((0.5 − 0.8)/σ) = Φ(−0.3/σ)
    p(error | 01) = 1 − Φ((0.5 − 0.2)/σ) = Φ(−0.3/σ)
    p(error | 00) = 1 − Φ((0.5 − 0.0)/σ) = Φ(−0.5/σ)

p(bit error) with ISI, cont’d.

    p(bit error) = p(11)·p(error | 11) + p(10)·p(error | 10) +
                   p(01)·p(error | 01) + p(00)·p(error | 00)
                 = 0.25·Φ(−0.5/σ) + 0.25·Φ(−0.3/σ) +
                   0.25·Φ(−0.3/σ) + 0.25·Φ(−0.5/σ)
                 = 0.5·Φ(−0.5/σ) + 0.5·Φ(−0.3/σ)

Suppose σ = 0.25. Compare the formula above to the formula on slide #3 to
determine what ISI has cost us in terms of BER:

    p(bit error, no ISI) = Φ(−0.5/0.25) = Φ(−2) = 0.023
    p(bit error, with ISI) = 0.5·Φ(−2) + 0.5·Φ(−1.2) = 0.069

Bottom line: a factor of 3 increase in BER.

Choosing Vth

We’ve been using 0.5V as the digitization threshold: it’s the voltage
half-way between the two signaling voltages of 0V and 1V. Assuming that
the probability of transmitting 0’s and 1’s is the same, this choice
minimizes the BER. Let’s see why…

Suppose the noise has a triangular distribution from −0.6V to 0.6V.
[Figure: PDF of the received signal; the PDF of received 0’s is a triangle
centered at 0V and the PDF of received 1’s a triangle centered at 1V, each
spanning ±0.6V, so they overlap between 0.4V and 0.6V.]
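The σ = 0.25 comparison above is easy to reproduce (a quick check using Python's `math.erfc` for the Gaussian CDF; `Phi` is our own helper):

```python
import math

def Phi(x):
    """CDF of the standard normal distribution."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

sigma = 0.25

ber_no_isi = Phi(-0.5 / sigma)                                    # Phi(-2)
ber_with_isi = 0.5 * Phi(-0.5 / sigma) + 0.5 * Phi(-0.3 / sigma)  # Phi(-2), Phi(-1.2)

print(round(ber_no_isi, 3), round(ber_with_isi, 3))  # -> 0.023 0.069
```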
Minimizing BER

[Figure: with Vth = 0.5V, the shaded overlap area under the two PDFs
between 0.4V and 0.6V equals p(bit error).]

Now move Vth slightly, to 0.5 − Δ. What happens to BER? [Figure: the shaded
area, and hence p(bit error), increases.] With equal priors, any movement
of the threshold away from 0.5V increases p(bit error).

Minimizing BER when p(0) ≠ p(1)

Suppose p(1) = 2/3 and p(0) = 1/3. [Figure: the two triangular PDFs, now
scaled by their priors, so the PDF of received 1’s is twice as tall as the
PDF of received 0’s.] If we leave Vth at 0.5V, we can see that p(bit error)
will be larger than if we moved the threshold to a lower voltage; p(bit
error) is minimized when the threshold is set at the intersection of the
two scaled PDFs.

Question: with a triangular noise PDF, can you devise a signaling protocol
that has p(bit error) = 0?
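A numeric sketch of the unequal-priors case above (our own illustration, not from the lecture): with triangular noise on [−0.6V, 0.6V], p(1) = 2/3 and p(0) = 1/3, a grid search over thresholds confirms that the minimizing Vth sits below 0.5V, at the intersection of the two scaled PDFs:

```python
def p_noise_above(t):
    """P(noise > t) for triangular noise on [-0.6, 0.6], valid for t in [0, 0.6]."""
    return (0.6 - t) ** 2 / 0.72

def ber(vth, p0=1/3, p1=2/3):
    # A transmitted "0" is misread when noise lifts it above vth;
    # a transmitted "1" is misread when 1 + noise falls below vth.
    return p0 * p_noise_above(vth) + p1 * p_noise_above(1 - vth)

# Grid search over thresholds in the overlap region [0.4, 0.6].
candidates = [0.4 + 0.001 * i for i in range(201)]
best_vth = min(candidates, key=ber)
print(round(best_vth, 3))  # below 0.5V (analytically 1.4/3 ≈ 0.467)
```

Setting the derivative of `ber` to zero gives the intersection point of the two scaled PDFs, matching the grid search.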

Channel Model Summary

    x[n] → [hchan[n]] → (+) → y[n]
                         ↑
                      noise[n]: typically Gaussian with μ = 0 and variance σ²

The Good News: using this model we can predict ISI and compute the BER
given the SNR or σ. This is often referred to as the AWGN (additive white
Gaussian noise) model.

The Bad News: unbounded noise means BER > 0, i.e., we’ll have bit errors in
our received message. How do we fix this? Our next topic!

Summary

• Noise-free channels are modeled as LTI systems
• LTI systems are completely characterized by their unit sample response h[n]
• Series LTI: h1[n]∗h2[n]; parallel LTI: h1[n]+h2[n]
• Use the convolution sum to compute y[n] = x[n]∗h[n]
• Intersymbol interference occurs when the number of samples per bit is
  smaller than the number of non-zero elements in h[n]
• In a noise-free context, deconvolution can recover x[n] given y[n] and
  h[n]. Potentially infinite information rate!
• With noise, y[n] = ynf[n] + noise[n], where the noise is described by a
  Gaussian distribution with zero mean and a specified variance
• Bit Error Rate = p(bit error), which depends on the SNR
• BER = Φ(−0.5/σ) when there is no ISI
• BER increases quickly with increasing ISI (narrower eye)
• Choose Vth to minimize the BER
