
3.3. BCH codes
BCH stands for the discoverers, Bose and Chaudhuri, and, independently, Hocquenghem.
These codes are multiple error correction codes and a generalization of the Hamming codes. Possible BCH codes exist for any m ≥ 3 and any number of correctable errors t < 2^(m-1). Let n be the length of the codeword to be sent and m the number of bits of the syndrome. Then:
• The block length is: n = 2^m - 1
• Parity check bits: n - k ≤ mt
• Minimum distance: d_min ≥ 2t + 1
The codewords are formed by taking the remainder after dividing a polynomial representing the
information bits by a generator polynomial. The generator polynomial is selected to give the code its characteristics. All codewords are multiples of the generator polynomial.
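To make the encoding rule above concrete, here is a minimal Python sketch of systematic encoding by polynomial division over GF(2). The helper names (gf2_div_remainder, encode_systematic) and the small (7, 4) example generator g(x) = 1 + x + x^3 are illustrative choices, not from the text; a true multi-error BCH code would use a generator built from minimal polynomials over GF(2^m), but the remainder step shown here is the same.

# Systematic encoding of a cyclic/BCH-style codeword over GF(2).
# Polynomials are lists of bits, lowest degree first.

def gf2_div_remainder(dividend, divisor):
    """Return the remainder of dividend / divisor using XOR (GF(2)) arithmetic."""
    rem = dividend[:]                          # work on a copy
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:                             # leading term present: subtract shifted divisor
            shift = i - (len(divisor) - 1)
            for j, d in enumerate(divisor):
                rem[shift + j] ^= d
    return rem[:len(divisor) - 1]              # remainder has degree < deg(divisor)

def encode_systematic(message_bits, generator):
    """Codeword = x^(n-k)*m(x) + remainder, so every codeword is a multiple of g(x)."""
    n_k = len(generator) - 1                   # number of parity bits
    shifted = [0] * n_k + message_bits         # multiply m(x) by x^(n-k)
    parity = gf2_div_remainder(shifted, generator)
    return parity + message_bits               # parity bits followed by information bits

# Example: the (7, 4) code with generator g(x) = 1 + x + x^3 -> [1, 1, 0, 1]
print(encode_systematic([1, 0, 1, 1], [1, 1, 0, 1]))   # -> [1, 0, 0, 1, 0, 1, 1]

Because the parity bits are exactly the remainder of the shifted information polynomial, adding them back makes the whole codeword divisible by g(x), which is the property the decoder later exploits.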

4. Analog signals
Very often, interfacing numeric systems with analog equipment is necessary. A numeric signal possesses only 0 or 1 as values, while analog signals vary continuously between minimum and maximum values of electric voltage.
In real life, a digital signal takes time to change from one value to another, and assumes intermediate values during the transition. Steady values are slightly inaccurate. Voltages within certain ranges are guaranteed to be interpreted as certain permitted values (green regions).
Because voltages within a certain range are interpreted as a single value, a digital signal can
contain a certain amount of error (e.g. distortion, noise, interference) and still be read correctly. As
long as we regenerate the signal (read it and reconstruct it with electronic circuits) before the error
becomes too large, we can remove the error. In an analog signal, we cannot tell whether the value is
affected by error, because all values are permitted. Digital signals can be copied, transmitted and
processed without error. Analog signals cannot.
An analog signal assumes a continuous range of values:

3. Digital or numeric signals
A digital signal assumes discrete (isolated, separate) values. Usually there are two permitted values:

Generally, people use analog-to-digital and digital-to-analog converters to move from one form to the other.
Example: During transmission, information can move from digital (computer) to analog (antenna) and vice versa, using appropriate converters.
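As an illustration of what these converters do, here is a hedged Python sketch of an ideal analog-to-digital converter followed by its digital-to-analog counterpart. The 3-bit resolution, the 0 to 5 V range and the sinusoidal test signal are arbitrary illustrative assumptions, not values from the text.

# Minimal sketch of ideal analog-to-digital and digital-to-analog conversion.
import math

N_BITS = 3                  # resolution of the converter (assumed)
V_MAX = 5.0                 # full-scale voltage (assumed)
LEVELS = 2 ** N_BITS        # number of permitted digital values
STEP = V_MAX / LEVELS       # quantization step size

def adc(voltage):
    """Map a continuous voltage to the nearest permitted digital code."""
    code = int(voltage / STEP)
    return min(max(code, 0), LEVELS - 1)       # clamp to the valid code range

def dac(code):
    """Map a digital code back to a representative (mid-step) voltage."""
    return (code + 0.5) * STEP

# An analog signal takes a continuous range of values ...
analog_samples = [2.5 * (1 + math.sin(2 * math.pi * t / 8)) for t in range(8)]
# ... while its digital version assumes only discrete ones.
codes = [adc(v) for v in analog_samples]
reconstructed = [dac(c) for c in codes]
print(codes)
print([round(v, 2) for v in reconstructed])

The difference between each original sample and its reconstruction is the quantization error, which shrinks as the number of bits per sample increases.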

5. Advantages of digitization
As the signals are digitized, there are many advantages of digital communication over analog communication, such as:
• The effect of distortion, noise, and interference is much less on digital signals, as they are less easily affected.
• Digital circuits are more reliable.
• Digital circuits are easy to design and cheaper than analog circuits.
• The hardware implementation in digital circuits is more flexible than in analog circuits.
• The occurrence of cross-talk is very rare in digital communication.
• The signal remains unaltered, as the pulse needs a high disturbance to alter its properties, which is very difficult.
• Signal processing functions such as encryption and compression are employed in digital circuits to maintain the secrecy of the information.
• The probability of error occurrence is reduced by employing error detecting and error correcting codes.
• Spread spectrum techniques are used to avoid signal jamming.
• Combining digital signals using Time Division Multiplexing (TDM) is easier than combining analog signals using Frequency Division Multiplexing (FDM).
• The process of configuring digital signals is easier than for analog signals.

• Digital signals can be saved and retrieved more conveniently than analog signals.
• Many digital circuits use almost identical encoding techniques, and hence similar devices can be used for a number of purposes.
• The capacity of the channel is effectively utilized by digital signals.

6. Notion of entropy
The entropy of a random variable is a function which attempts to characterize the "unpredictability" of
a random variable. Consider a random variable X representing the number that comes up on a roulette
wheel and a random variable Y representing the number that comes up on a fair 6-sided die. The
entropy of X is greater than the entropy of Y. In addition to the numbers 1 through 6, the values on the roulette wheel can also take on the values 7 through 36. In some sense, X is less predictable.
But entropy is not just about the number of possible outcomes. It is also about their frequency.
For example, let Z be the outcome of a weighted six-sided die that comes up 90% of the time as a "2". Z has lower entropy than Y, which represents a fair 6-sided die. The weighted die is less unpredictable, in
some sense. Entropy is not a vague concept. It has a precise mathematical definition.
Information theory provides a theoretical foundation to quantify the information content, or the
uncertainty, of a random variable represented as a distribution. Formally, let X be a discrete random
variable with alphabet X and probability mass function p(x), x ∈ X. The Shannon entropy of X is defined as:

H(X) = -\sum_{x \in X} p(x) \log p(x)

where p(x) ∈ [0, 1] and -\log p(x) represents the information associated with a single occurrence of x. The unit of information is called a bit. Normally, the logarithm is taken in base 2 or base e. As a measure of the average uncertainty in X, the entropy is always nonnegative, and indicates the number of bits on
average required to describe the random variable. The higher the entropy, the more information the
variable contains.

Example: Calculate the entropy of a randomly selected letter in an English document, assuming its
probability is given as:
Letter x:  a       b       c       d       e       f       g       h       i       j       k
P(x):      0.0575  0.0128  0.0263  0.0285  0.0913  0.0173  0.0133  0.0313  0.0599  0.006   0.0084

Letter x:  l       m       n       o       p       q       r       s       t       u       v
P(x):      0.0335  0.0235  0.0596  0.0689  0.0192  0.008   0.0508  0.0567  0.0706  0.0334  0.0069

Letter x:  w       x       y       z       -
P(x):      0.0119  0.0073  0.0164  0.0007  0.1928
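A quick Python check of this example, applying the definition of H(X) above to the table (the dictionary and function names are illustrative; the probabilities are copied from the table):

# Entropy H(X) = -sum p(x) log2 p(x) of the letter distribution given above.
import math

p = {
    'a': 0.0575, 'b': 0.0128, 'c': 0.0263, 'd': 0.0285, 'e': 0.0913,
    'f': 0.0173, 'g': 0.0133, 'h': 0.0313, 'i': 0.0599, 'j': 0.006,
    'k': 0.0084, 'l': 0.0335, 'm': 0.0235, 'n': 0.0596, 'o': 0.0689,
    'p': 0.0192, 'q': 0.008, 'r': 0.0508, 's': 0.0567, 't': 0.0706,
    'u': 0.0334, 'v': 0.0069, 'w': 0.0119, 'x': 0.0073, 'y': 0.0164,
    'z': 0.0007, '-': 0.1928,
}

def entropy(probs):
    """Shannon entropy in bits; terms with p(x) = 0 contribute nothing."""
    return -sum(px * math.log2(px) for px in probs if px > 0)

print(round(entropy(p.values()), 3))   # roughly 4.1 bits per letter

With these probabilities the entropy comes out to roughly 4.1 bits per character, less than the log2(27) ≈ 4.75 bits a uniform 27-symbol alphabet would require, which is part of why plain English text is compressible.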

Exercise 2: A random variable X ∈ {0, 1, 2} is created by flipping a fair coin to determine whether X = 0; then, if X is not 0, flipping a fair coin a second time to determine whether X is 1 or 2. The probability distribution of X is:

P(X = 0) = 1/2, P(X = 1) = 1/4 and P(X = 2) = 1/4

Calculate the entropy of X.

Solution: H(X) = -\sum_{x \in X} p(x) \log p(x) = -(1/2)\log_2(1/2) - (1/4)\log_2(1/4) - (1/4)\log_2(1/4) = 1.5 bits

7. The notion of joint entropy


Joint entropy is the entropy of a joint probability distribution, or a multi-valued random
variable. For example, one might wish to know the joint entropy of a distribution of people defined
by hair color C and eye color E, where C can take on 4 different values from a set C and E can take on
3 values from a set E. If P(E;C) defines the joint probability distribution of hair color and eye color,
then we write that their joint entropy is:
H(E, C) \equiv H[P(E, C)] = -\sum_{e \in E} \sum_{c \in C} P(e, c) \log P(e, c)
Example: Let X represent whether it is sunny or rainy in a particular town on a given day. Let Y represent whether it is above 70 degrees or below 70 degrees. Compute the entropy of the joint distribution P(X; Y) given by:

P(sunny, hot) = 1/2
P(sunny, cool) = 1/4
P(rainy, hot) = 1/4
P(rainy, cool) = 0

Solution: H(X, Y) = -\sum_{x} \sum_{y} P(x, y) \log P(x, y) = -(1/2)\log_2(1/2) - (1/4)\log_2(1/4) - (1/4)\log_2(1/4) - 0 = 1.5 bits
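A minimal Python check of this joint entropy example (the outcome labels in the dictionary are just identifiers; the convention 0 log 0 = 0 is applied by skipping zero-probability terms):

# Joint entropy H(X, Y) = -sum P(x, y) log2 P(x, y) for the table above.
import math

joint = {
    ('sunny', 'hot'):  1/2,
    ('sunny', 'cool'): 1/4,
    ('rainy', 'hot'):  1/4,
    ('rainy', 'cool'): 0.0,
}

def joint_entropy(dist):
    """Sum over all outcome pairs, ignoring entries with zero probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

print(joint_entropy(joint))   # 1.5 bits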

8. Notion of mutual information

Mutual information is a quantity that measures a relationship between two random variables
that are sampled simultaneously. In particular, it measures how much information is communicated, on
average, in one random variable about another. Intuitively, one might ask, how much does one random
variable tell me about another?
For example, suppose X represents the roll of a fair 6-sided die, and Y represents whether the
roll is even (0 if even, 1 if odd). Clearly, the value of Y tells us something about the value of X and
vice versa. That is, these variables share mutual information.
On the other hand, if X represents the roll of one fair die, and Z represents the roll of another
fair die, then X and Z share no mutual information. The roll of one die does not contain any
information about the outcome of the other die. An important theorem from information theory says
that the mutual information between two variables is 0 if and only if the two variables are statistically
independent. The formal definition of the mutual information of two random variables X and Y , whose
joint distribution is defined by P(X; Y ) is given by
I(X; Y) = \sum_{x \in X} \sum_{y \in Y} P(x, y) \log \frac{P(x, y)}{P(x) P(y)}

In this definition, P(X) and P(Y ) are the marginal distributions of X and Y obtained through the
marginalization process described in the Probability Review document.
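To illustrate the definition, here is a hedged Python sketch that computes I(X; Y) for the earlier fair-die example, where X is the roll of a fair 6-sided die and Y indicates whether the roll is even (0) or odd (1). The function name and the dictionary representation of the joint distribution are illustrative choices:

# Mutual information I(X; Y) = sum_{x,y} P(x, y) log2( P(x, y) / (P(x) P(y)) ).
import math
from collections import defaultdict

# Joint distribution of X (die roll 1..6) and Y (0 if even, 1 if odd).
joint = {(x, 0 if x % 2 == 0 else 1): 1/6 for x in range(1, 7)}

def mutual_information(joint_dist):
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint_dist.items():       # marginalize to get P(X) and P(Y)
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint_dist.items() if p > 0)

print(mutual_information(joint))   # 1.0 bit: Y tells us exactly one bit about X

Replacing Y by the roll of a second, independent die would make every P(x, y) equal to P(x)P(y), so every logarithm would be zero and the sum would collapse to 0, matching the independence statement above.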

CHAPTER THREE: ELEMENTS OF DIGITAL COMMUNICATION

1. Introduction
The elements which form a digital communication system are represented by the following block diagram for ease of understanding.

2. Source
The source can be an analog signal. Example: a sound signal.

3. Input transducer
This is a transducer which takes a physical input and converts it to an electrical signal (Example: a microphone). This block also contains an analog-to-digital converter, used where a digital signal is needed for further processing. A digital signal is generally represented by a binary sequence.

4. Source encoder
The source encoder compresses the data into a minimum number of bits. This process helps in the effective utilization of the bandwidth. It removes the redundant bits (unnecessary excess bits, i.e., zeroes).

5. Channel encoder
The channel encoder does the coding for error correction. During transmission, the noise in the channel may alter the signal; to counter this, the channel encoder adds some redundant bits to the transmitted data. These are the error-correcting bits.

6. Digital modulation
The signal to be transmitted is modulated here by a carrier. The signal is also converted from the digital sequence to analog, in order to make it travel through the channel or medium.

7. Channel
The channel, or medium, allows the analog signal to travel from the transmitter end to the receiver end.

8. Digital demodulator
This is the first step at the receiver end. The received signal is demodulated and converted back from analog to digital. The signal is reconstructed here.

9. Channel decoder
The channel decoder, after detecting the sequence, performs error correction. The distortions which might have occurred during transmission are corrected using the redundant bits added by the channel encoder; this helps in the complete recovery of the original signal.

10. Source decoder
The resultant signal is once again digitized by sampling and quantizing so that the pure digital output is
obtained without the loss of information. The source decoder recreates the source output.

11. Output transducer
This is the last block; it converts the signal back into the original physical form, which was present at the input of the transmitter. It converts the electrical signal into a physical output (Example: a loudspeaker).

12. Output signal
This is the output which is produced after the whole process. Example: the sound signal received.
This unit has dealt with the introduction, the digitization of signals, the advantages, and the elements of digital communication. In the coming chapters, we will learn about the concepts of digital communication in detail.
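As a recap of the chain described in this chapter, here is a hedged Python sketch of a toy end-to-end digital link. The repetition code, BPSK mapping, Gaussian noise level and block size are all illustrative assumptions, not prescriptions from the text: source bits are channel-encoded with redundancy, modulated onto analog levels, corrupted by the channel, demodulated, and finally channel-decoded by majority vote.

# Toy digital communication chain: source bits -> channel encoder (repetition code)
# -> modulator -> noisy channel -> demodulator -> channel decoder -> sink.
import random

def channel_encode(bits, r=3):
    return [b for b in bits for _ in range(r)]              # repeat each bit r times

def modulate(bits):
    return [1.0 if b else -1.0 for b in bits]               # BPSK: 1 -> +1 V, 0 -> -1 V

def channel(symbols, noise=0.6):
    return [s + random.gauss(0.0, noise) for s in symbols]  # additive Gaussian noise

def demodulate(samples):
    return [1 if s > 0 else 0 for s in samples]             # hard decision at 0 V

def channel_decode(bits, r=3):
    groups = [bits[i:i + r] for i in range(0, len(bits), r)]
    return [1 if sum(g) > r // 2 else 0 for g in groups]    # majority vote per group

source_bits = [random.randint(0, 1) for _ in range(16)]
received = channel_decode(demodulate(channel(modulate(channel_encode(source_bits)))))
print(source_bits == received)   # usually True: the redundancy corrects most bit errors

Running the sketch several times shows the role of the channel encoder: with the repetition factor removed, individual noisy decisions corrupt the message far more often.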

