Assignment 1 (Digital)


1a.

Source encoding is the process of efficiently converting the output of a digital or analog source into a sequence of binary digits (bits) with little or no redundancy. It is also known as data compression, since it uses as few bits as possible to represent the signal.

b. The purpose of:- Channel encoder:- to introduce, in a controlled manner, some redundancy into the transmitted binary information sequence so that the receiver can counter the interference and noise acquired from the channel during transmission.

Channel decoder:- reconstructs the original information sequence using knowledge of the code employed by the encoder and the redundancy contained in the received data, thereby increasing the reliability of the received information.

c. The purpose of:- Digital modulator:- to map the binary sequence onto analog electrical waveforms suitable for transmission over the channel.

Digital demodulator:- to process the received analog waveform and convert it back into a sequence of binary digits, thus reconstructing the transmitted signal.

d.

2. Mutual information can be thought of as the amount of information a variable X contains about a variable Y. It is denoted by I(X;Y) and equals the information in X plus the information in Y minus the information in X and Y taken together. In terms of entropies it is expressed as

I(X;Y) = H(X) + H(Y) – H(X,Y)

Mutual information is symmetric, I(X;Y) = I(Y;X), and is always non-negative.

The entropy of a random variable is a function which characterizes the unpredictability of the random variable (its self-information). The relationship between entropy and mutual information is given by

I(X;Y) = H(Y) – H(Y|X)

Proof: starting from I(X;Y) = H(X) + H(Y) – H(X,Y) and using the chain rule H(X,Y) = H(X) + H(Y|X), we get

I(X;Y) = H(X) + H(Y) – H(X) – H(Y|X) = H(Y) – H(Y|X)

In particular, I(X;X) = H(X) – H(X|X) = H(X), which is why entropy is called "self-information".

Proof that H(X|Y) = 0 (for a noiseless channel, where each received symbol yi determines the transmitted symbol, i.e. P(xj|yi) = 1 for the corresponding xj and 0 otherwise):

H(X|Y) = -∑(i=1 to m) ∑(j=1 to m) P(yi, xj) log2 P(xj|yi)

Since P(xj|yi) is 1 for the single xj associated with yi and 0 for all others, the inner sum collapses to P(yi) log2 1, so

H(X|Y) = -∑(i=1 to m) P(yi) log2 1 = 0.
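As a quick numerical check of these identities, here is a minimal Python sketch using an assumed 2×2 joint distribution (the numbers are illustrative, not from the assignment). It computes H(X), H(Y), H(X,Y) and H(Y|X) and confirms that I(X;Y) = H(X) + H(Y) − H(X,Y) = H(Y) − H(Y|X) ≥ 0.

```python
import math

# Assumed joint distribution P(x, y), for illustration only (rows: x, columns: y).
P = [[0.3, 0.2],
     [0.1, 0.4]]

def H(probs):
    """Entropy in bits of a list of probabilities (zero entries are skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

px = [sum(row) for row in P]                              # marginal P(x)
py = [sum(P[i][j] for i in range(2)) for j in range(2)]   # marginal P(y)

H_X  = H(px)
H_Y  = H(py)
H_XY = H([P[i][j] for i in range(2) for j in range(2)])   # joint entropy H(X,Y)
H_Y_given_X = H_XY - H_X                                   # chain rule: H(X,Y) = H(X) + H(Y|X)

I_1 = H_X + H_Y - H_XY         # I(X;Y) from the joint-entropy form
I_2 = H_Y - H_Y_given_X        # I(X;Y) from the conditional-entropy form
print(I_1, I_2)                # both values agree and are non-negative
```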

3. A generator matrix G of a cyclic code is composed of an identity matrix I and a parity matrix P, so that G = [I | P]. For an (n,k) cyclic code the generator matrix has k rows and n columns; in our case n = 7 (total bits), giving 7 columns, and k = 4 (message bits), giving 4 rows. The identity part is the 4x4 identity matrix

[1 0 0 0
 0 1 0 0
 0 0 1 0
 0 0 0 1]

and the i-th row of the parity matrix is obtained from the generator polynomial g(x) as remainder[x^(n-i)/g(x)].

Given the generator polynomial g(x) = 1 + x + x^3, we construct the generator matrix as follows:

Since the 4x4 identity matrix is already known, what remains is to find the parity matrix to complete the 4-row by 7-column generator matrix.
1st row of parity matrix = remainder[x^(7-1)/g(x)] → x^6/(1 + x + x^3) = 1 + x^2
2nd row of parity matrix = remainder[x^(7-2)/g(x)] → x^5/(1 + x + x^3) = 1 + x + x^2
3rd row of parity matrix = remainder[x^(7-3)/g(x)] → x^4/(1 + x + x^3) = x + x^2
4th row of parity matrix = remainder[x^(7-4)/g(x)] → x^3/(1 + x + x^3) = 1 + x

Writing the parity polynomials as bits (coefficient of x^2 first, then x, then the constant term) yields:

1st row → 101 (1 + x^2 has no x term)
2nd row → 111
3rd row → 110
4th row → 011
The final generator matrix is

G = [1 0 0 0 1 0 1
     0 1 0 0 1 1 1
     0 0 1 0 1 1 0
     0 0 0 1 0 1 1]
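The same construction can be sketched in a few lines of Python (the helper name gf2_remainder is my own, not part of the assignment): each parity row is the remainder of x^(n−i) divided by g(x) over GF(2), and G is assembled as [I | P].

```python
def gf2_remainder(dividend: int, divisor: int) -> int:
    """Remainder of polynomial division over GF(2); bit i holds the coefficient of x^i."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift          # subtract (XOR) a shifted copy of g(x)
    return dividend

n, k = 7, 4
g = 0b1011                                    # g(x) = x^3 + x + 1

rows = []
for i in range(1, k + 1):
    rem = gf2_remainder(1 << (n - i), g)      # remainder of x^(n-i) mod g(x)
    parity = [(rem >> p) & 1 for p in range(n - k - 1, -1, -1)]   # bits for x^2, x, 1
    identity = [1 if j == i - 1 else 0 for j in range(k)]
    rows.append(identity + parity)

for row in rows:
    print(row)
# Prints [1,0,0,0,1,0,1], [0,1,0,0,1,1,1], [0,0,1,0,1,1,0], [0,0,0,1,0,1,1]
```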
4. The channel capacity equation is given by C = B log2(1 + S/N)
where, C = capacity
B = bandwidth
S = signal power
N = noise power
If the noise power is N = ηB, where η/2 is the noise power spectral density, then

C = B log2(1 + S/ηB)
  = (S/η)(ηB/S) log2(1 + S/ηB)
  = (S/η) log2(1 + S/ηB)^(ηB/S)

As the bandwidth tends to infinity, S/ηB → 0 and (1 + S/ηB)^(ηB/S) → e, so

lim(B→∞) C = (S/η) log2 e = 1.44 S/η.
At a fixed bandwidth, capacity grows with the signal-to-noise ratio: the larger S/N becomes, the larger C becomes. Increasing the bandwidth, however, also increases the noise power N = ηB, so for a fixed signal power the capacity does not grow without bound with bandwidth but instead approaches the finite limit 1.44 S/η.
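A minimal numerical check of this limit (the signal power and noise density are assumed values chosen only for illustration): as B grows, C = B log2(1 + S/ηB) flattens out near 1.44·S/η.

```python
import math

S   = 1.0      # assumed signal power (W)
eta = 1e-3     # assumed noise power spectral density (W/Hz)

def capacity(B):
    """Shannon capacity C = B log2(1 + S/(eta*B)) in bit/s."""
    return B * math.log2(1 + S / (eta * B))

for B in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"B = {B:>9.0f} Hz   C = {capacity(B):10.1f} bit/s")

print("limit 1.44*S/eta =", S / eta * math.log2(math.e))   # ≈ 1442.7 bit/s
```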
6.

Symbol  Probability  Stage 1  Stage 2  Stage 3  Stage 4  Stage 5
S0      0.25         0.25     0.25     0.25     0.5      0.5
S1      0.25         0.25     0.25     0.25     0.25     0.5
S2      0.125        0.125    0.25     0.25     0.25
S3      0.125        0.125    0.125    0.25
S4      0.125        0.125    0.125
S5      0.0625       0.125
S6      0.0625

The Huffman code for the above is:

Symbol  Probability  Code word  Code length
S0      0.25         10         2
S1      0.25         11         2
S2      0.125        001        3
S3      0.125        010        3
S4      0.125        011        3
S5      0.0625       0000       4
S6      0.0625       0001       4
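The exact bit patterns of a Huffman code depend on how 0 and 1 are assigned at each merge, so a different but equally valid assignment may produce different codewords with the same lengths. Below is a small Python sketch of the construction using heapq (the tie-breaking counter is my own choice); the resulting code lengths match the table above.

```python
import heapq
from itertools import count

probs = {"S0": 0.25, "S1": 0.25, "S2": 0.125, "S3": 0.125,
         "S4": 0.125, "S5": 0.0625, "S6": 0.0625}

def huffman(probs):
    """Return a prefix-free code {symbol: bitstring} built by repeatedly merging the two least-probable groups."""
    tie = count()   # tie-breaker so heap entries never compare the dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)     # least probable group
        p2, _, c2 = heapq.heappop(heap)     # next least probable group
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

code = huffman(probs)
for s in sorted(code):
    print(s, code[s], len(code[s]))   # lengths 2, 2, 3, 3, 3, 4, 4 as in the table
```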
Average code length, L = ∑(k=1 to 7) Pk·Lk, where Pk is the symbol probability and Lk the codeword length:

L = P1L1 + P2L2 + P3L3 + P4L4 + P5L5 + P6L6 + P7L7
  = 0.25×2 + 0.25×2 + 3(0.125×3) + 2(0.0625×4)
  = 2.625 bits/symbol

Entropy, H(X) = -∑(k=1 to 7) Pk log2 Pk (the minus sign makes the result positive, since each log2 Pk is negative):

H(X) = -(P1 log2 P1 + P2 log2 P2 + P3 log2 P3 + P4 log2 P4 + P5 log2 P5 + P6 log2 P6 + P7 log2 P7)
     = -[2(0.25 log2 0.25) + 3(0.125 log2 0.125) + 2(0.0625 log2 0.0625)]
     = 2.625 bits/symbol

Efficiency, η = H(X)/L × 100%

           = 2.625/2.625 × 100%

           = 100%
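A few lines of Python to confirm the arithmetic above:

```python
import math

probs   = [0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625]
lengths = [2, 2, 3, 3, 3, 4, 4]

L = sum(p * l for p, l in zip(probs, lengths))    # average code length
H = -sum(p * math.log2(p) for p in probs)         # source entropy
print(L, H, H / L * 100)                          # 2.625  2.625  100.0
```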
