This document discusses source coding and encoding techniques used in digital communication systems. It describes the basic components of the transmitter and receiver sections, including the information source, input/output transducers, source/channel encoders and decoders, and the modulator/demodulator. It then covers encoding techniques used for data compression, such as Huffman coding, Shannon-Fano coding, arithmetic coding, and the Lempel-Ziv algorithm. The source coding theorem establishes the relationship between the average code length and the entropy of the source.

MODULE 2

1. Transmitter Section
The transmitter section includes functional elements such as the Information Source, Input Transducer, Source Encoder, Channel Encoder and Digital Modulator, through which digital information is transmitted.
 Information Source: It generates the information to be transmitted. This could be audio, video, an image, or discrete data, e.g. computer-generated output. These signals are non-electrical quantities and hence cannot be processed directly in a digital communication system.
 Input Transducer: A transducer is a device that converts a non-electrical quantity into an electrical signal, such as a microphone. The output of the information source, a non-electrical quantity, is given to the input transducer, which converts it into an electrical quantity. This block also contains an analog-to-digital converter, which converts the analog (electrical) signal into the digital signal needed for further processing.
 Source Encoder: The source encoder compresses the data into the minimum number of bits, which helps in effective utilization of the channel bandwidth. It removes redundant or unnecessary bits from the input data. The output of the source encoder is called the source code.
 Channel Encoder: The channel encoder provides noise immunity by adding redundant bits to the message data. These redundant bits are called error-correcting bits. When the signal is transmitted over a communication channel, it may get altered due to noise, distortion, and interference in the channel. Hence, the channel encoder adds redundant bits to the message data so that the receiver can recover error-free data. The output of the channel encoder is called the channel code.
 Digital Modulator: The digital modulator converts discrete binary data (0s and 1s) into continuous analog signals that can be transmitted over a communication medium. This is necessary because most communication channels are analog in nature. The modulator then modulates a carrier wave with the digital data by changing certain characteristics of the carrier, such as its amplitude, frequency, or phase, based on the digital input. Common modulation techniques include Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), Phase Shift Keying (PSK), and Quadrature Amplitude Modulation (QAM).
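As a rough illustration, phase shift keying can be sketched in a few lines: each input bit selects the phase of a cosine carrier. This is a minimal sketch; the function name and parameters below are illustrative, not from any standard library.

```python
import math

def bpsk_modulate(bits, carrier_freq=2.0, samples_per_bit=8):
    """Binary PSK sketch: bit 1 -> carrier phase 0, bit 0 -> phase pi
    (i.e. the carrier is inverted). Returns a list of samples."""
    signal = []
    for bit in bits:
        phase = 0.0 if bit == 1 else math.pi
        for n in range(samples_per_bit):
            t = n / samples_per_bit
            signal.append(math.cos(2 * math.pi * carrier_freq * t + phase))
    return signal

waveform = bpsk_modulate([1, 0, 1])  # 3 bits * 8 samples = 24 samples
```

Note how the samples for a 0-bit are exactly the negated samples of a 1-bit; a real modulator would also pulse-shape and up-convert the signal.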
2. Communication Channel
A communication channel is the physical or logical medium through which information or data is transmitted from a sender to a receiver. There are two main types of communication channels: guided and unguided.
In a Guided Communication Channel, transmission of signals occurs through a physical medium such as twisted pair cable, coaxial cable, or optical fiber.
An Unguided Communication Channel, also known as a wireless channel, propagates signals through free space, e.g. radio waves, microwaves, or infrared.
3. Receiver Section
The receiver section includes functional elements such as the digital demodulator, channel decoder, source decoder, and output transducer, through which the original information is recovered.
 Digital Demodulator: The digital demodulator performs the function opposite to that of the digital modulator. It is a device that converts the received analog signal (usually a modulated carrier) back into its original digital form. Demodulation is determined by the modulation scheme used to encode the data, such as amplitude shift keying (ASK), frequency shift keying (FSK), phase shift keying (PSK), or quadrature amplitude modulation (QAM). The output of the digital demodulator is given to the channel decoder.
 Channel Decoder: After detecting the received signal, the channel decoder applies an error-correction mechanism. The signal may get distorted during transmission; using the redundant bits added by the channel encoder, the channel decoder detects and corrects these errors, enabling complete recovery of the original signal.
 Source Decoder: The source decoder reverses the compression process applied
at the transmitter to recover the original digital data.
 Output Transducer: An output transducer is a device that converts an electrical signal into a non-electrical quantity. The output of the source decoder, an electrical quantity, is given to the output transducer, which converts it into a non-electrical quantity such as sound or an image.
 Output Signal: The output signal is the original message signal, produced after the whole process.
Encoding is the process of representing information in a specific format or code.
In the context of information theory and data compression, encoding techniques
are used to represent data more efficiently, typically by using fewer bits
to represent the same information.
Purpose of Encoding:
1. Data Compression: Reduce the size of data to save storage space or
transmission bandwidth.
2. Error Detection and Correction: Add redundancy to detect and correct
errors in transmitted data.
3. Encryption: Transform data to make it unreadable without the
appropriate decryption key for security purposes.
4. Data Representation: Represent data in a specific format for processing
or communication.
Instantaneous Codes:
An instantaneous code is a variable-length code in which no code word is a
prefix of another. This property makes the code uniquely decodable and allows
each code word to be decoded as soon as its last bit is received, hence
"instantaneous". Huffman coding is an example of an instantaneous code
commonly used in data compression.
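The prefix property is easy to check mechanically. As a small sketch (the function name is illustrative):

```python
def is_instantaneous(codewords):
    """Return True if no codeword is a prefix of another
    (the prefix-free / instantaneous property)."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

is_instantaneous(["0", "10", "110", "111"])  # → True (prefix-free)
is_instantaneous(["0", "01", "11"])          # → False ("0" prefixes "01")
```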

Construction of Instantaneous Codes:


1. Frequency Analysis: Assign shorter codes to more frequent symbols and
longer codes to less frequent symbols.
2. Greedy Approach: Build the code tree in a way that minimizes the
average code length.
3. Huffman Coding: A specific algorithm for constructing optimal prefix
codes based on the frequencies of symbols.
Source Coding Theorem:
The Source Coding Theorem can be stated as follows:
Source Coding Theorem (Shannon): In a uniquely decodable source code, the
average code length L per symbol is bounded by the entropy H(X) of the source:
H(X) ≤ L < H(X) + 1
where H(X) is the entropy of the source probability distribution.
This theorem implies that no uniquely decodable code can achieve an average
code length per symbol that is less than the entropy and that it is always
possible to construct a code with an average code length per symbol close to the
entropy.
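The bound can be checked numerically. Below, a source with probabilities 1/2, 1/4, 1/8, 1/8 is paired with the code lengths of a prefix code (e.g. 0, 10, 110, 111); the probabilities and code are chosen here purely for illustration.

```python
import math

def entropy(probs):
    """H(X) = -sum p*log2(p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]
code_lengths = [1, 2, 3, 3]  # lengths of codewords 0, 10, 110, 111

H = entropy(probs)                                   # 1.75 bits/symbol
L = sum(p * l for p, l in zip(probs, code_lengths))  # 1.75 bits/symbol
assert H <= L < H + 1  # the theorem's bound holds (here with equality H = L)
```

For dyadic probabilities (powers of 1/2), as in this example, a prefix code can meet the entropy exactly; in general L is strictly above H but stays within one bit of it.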
Construction of Basic Source Codes: Shannon-Fano Coding
Shannon-Fano coding is one of the early techniques for constructing a prefix
code, which is a type of uniquely decodable code. The algorithm was developed
by Claude Shannon and Robert Fano.
Shannon-Fano Coding Algorithm:
1. Ordering Symbols by Probability:
 Arrange symbols in descending order of their probabilities in the
source.
2. Recursive Splitting:
 Divide the set of symbols into two parts such that the sum of
probabilities in each part is roughly equal.
 Assign a '0' to the first part and '1' to the second part.
3. Repeat:
 Recursively apply the splitting process to each part until each
symbol is assigned a binary code.
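The three steps above can be sketched directly in code. This is a straightforward rendering of the textbook algorithm, not an optimized library routine; the split point is chosen where the two halves' probability sums are closest to equal.

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) pairs.
    Returns a dict {symbol: binary code string}."""
    # Step 1: order symbols by descending probability.
    symbols = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    codes = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        # Step 2: find the cut where the two parts are most nearly equal.
        best_diff, cut = float("inf"), 1
        for i in range(1, len(group)):
            head = sum(p for _, p in group[:i])
            diff = abs(total - 2 * head)
            if diff < best_diff:
                best_diff, cut = diff, i
        for s, _ in group[:cut]:
            codes[s] += "0"
        for s, _ in group[cut:]:
            codes[s] += "1"
        # Step 3: recurse on each part.
        split(group[:cut])
        split(group[cut:])

    split(symbols)
    return codes

shannon_fano([("A", 0.5), ("B", 0.25), ("C", 0.125), ("D", 0.125)])
# → {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
```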

Huffman Codes:
 Purpose: Data compression.
 Construction:
 Based on the frequency of symbols in the input data.
 Frequent symbols are assigned shorter codes; less frequent symbols
get longer codes.
 Properties:
 Instantaneous codes (no code is a prefix of another).
 Optimally minimizes the average code length.
 Algorithm:
 Build a binary tree based on symbol frequencies.
 Assign binary codes based on the tree structure.
 Efficiency:
 Huffman coding is widely used for lossless data compression (e.g.,
in ZIP files).
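The greedy algorithm can be sketched with a priority queue: repeatedly merge the two least-frequent nodes, prepending '0' to one side's codes and '1' to the other's. This compact dict-based version is for illustration; practical implementations build an explicit tree.

```python
import heapq

def huffman(freqs):
    """freqs: {symbol: frequency}. Returns {symbol: code string}."""
    # Heap entries: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # single-symbol edge case: give it a 1-bit code
        (_, _, codes), = heap
        return {s: "0" for s in codes}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent nodes
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman({"a": 5, "b": 2, "c": 1, "d": 1})
# 'a' gets a 1-bit code; 'c' and 'd' get 3-bit codes
```

The tie-breaker integer keeps the heap comparisons well-defined when two frequencies are equal.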
Extended Huffman Coding:
 Purpose: Improved compression when single-symbol Huffman coding is inefficient.
 Modification:
 Extension of Huffman coding that encodes blocks of n source symbols at a
time instead of individual symbols.
 As the block size grows, the average code length per original symbol
approaches the entropy of the source.
 Use Cases:
 Applied when symbol probabilities are highly skewed or the alphabet is
small, where per-symbol Huffman coding wastes up to one bit per symbol.
Arithmetic Coding:
 Purpose: High-efficiency data compression.
 Principle:
 Represents entire messages with a single real number in the
interval [0, 1).
 Divides the interval based on the probabilities of symbols.
 Process:
 Map symbols to subintervals based on probabilities.
 Encode the message as the real number in the resulting subinterval.
 Efficiency:
 Arithmetic coding often achieves better compression than Huffman
coding.
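The interval-narrowing process can be sketched as follows. This conceptual version uses floats for clarity; real arithmetic coders use integer arithmetic and renormalization to avoid precision loss, and also emit bits incrementally rather than returning the interval.

```python
def arithmetic_encode(message, probs):
    """Narrow [0, 1) once per symbol; any number in the final
    interval identifies the whole message.
    probs: {symbol: probability}."""
    # Assign each symbol a cumulative-probability subrange of [0, 1).
    cum, ranges = 0.0, {}
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        lo, hi = ranges[sym]
        low, high = low + span * lo, low + span * hi
    return low, high  # any value in [low, high) encodes the message

arithmetic_encode("ab", {"a": 0.5, "b": 0.5})  # → (0.25, 0.5)
```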
Lempel-Ziv Algorithm (LZW):
 Purpose: Data compression, particularly used in file compression
formats like GIF and ZIP.
 Basic Idea:
 Replaces repeated sequences of characters with codes.
 Builds a dictionary of sequences encountered.
 Steps:
 Initialize a dictionary with individual symbols.
 Scan the input and find the longest match in the dictionary.
 Replace the match with a code, and add the new sequence to the
dictionary.
 Repeat until the entire input is processed.
 Efficiency:
 LZW is effective for compressing repetitive patterns and is widely
used in practice.
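The steps above can be sketched as a short compressor. One simplification versus the classic algorithm: the dictionary here is initialized only with the characters present in the input, whereas standard LZW (as in GIF) starts with the full 256-entry byte alphabet.

```python
def lzw_compress(text):
    """Classic LZW sketch: emit the code for the longest dictionary
    match, then add (match + next char) as a new dictionary entry."""
    # Initialize the dictionary with the single characters seen.
    dictionary = {ch: i for i, ch in enumerate(sorted(set(text)))}
    next_code = len(dictionary)
    match, output = "", []
    for ch in text:
        if match + ch in dictionary:
            match += ch  # extend the current match
        else:
            output.append(dictionary[match])      # emit longest match
            dictionary[match + ch] = next_code    # learn the new sequence
            next_code += 1
            match = ch
    if match:
        output.append(dictionary[match])
    return output, dictionary

lzw_compress("ababab")  # output codes → [0, 1, 2, 2]
```

Note that the repeated "ab" is emitted as the single code 2 once it has been learned, which is where the compression comes from.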
