Digital Communication Lab Project
Title page
Theoretical analysis
Block diagram
Project procedure
Simulation Results
Discussion
Improvement Ideas
▪ Theoretical Analysis:
A digital communication system is designed to transmit information efficiently and accurately from
a source to a destination over a communication channel. This system consists of multiple
processing stages that ensure data integrity, minimize redundancy, and protect against
transmission errors. A typical digital communication system follows a structured pipeline that
includes source encoding, channel encoding, modulation, transmission through a noisy
channel, demodulation, channel decoding, and source decoding.
i. Source Encoding: Source encoding is the process of converting the original message into
a compressed form to reduce redundancy and optimize bandwidth utilization. This is
typically achieved through lossless or lossy compression techniques. Lossless methods,
such as Huffman coding, ensure that the original message can be perfectly reconstructed
at the receiver, while lossy methods, such as JPEG or MP3 compression, sacrifice some
information for higher compression efficiency. The primary goal of source encoding is to
represent the data using the least number of bits without losing essential information.
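For example, consider an illustrative source with four symbols occurring with probabilities 0.5, 0.25, 0.125 and 0.125: Huffman coding assigns them the codes 0, 10, 110 and 111, giving an average length of 1.75 bits per symbol instead of the 2 bits required by a fixed-length code.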
ii. Channel Encoding: Once the message is efficiently encoded, channel encoding
introduces redundant bits into the data stream, allowing the receiver to detect and
sometimes correct errors. Common error-detecting and correcting codes include:
· Parity Check Codes – Simple error detection using an extra parity bit (see the example just after this list).
· Hamming Codes – Allow single-bit error correction and multiple-bit error detection.
· Cyclic Redundancy Check (CRC) – Used primarily for error detection in network
communications.
· Linear Block Codes – A fixed-length message block is encoded into a longer fixed-length codeword by matrix multiplication with a generator matrix.
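For example, with even parity the 4-bit data word 1011 (three 1s) is transmitted as 10111 so that the total number of 1s is even; a received word with an odd number of 1s signals that an error occurred, although the corrupted bit cannot be located.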
iii. Line Coding: Line coding is a technique used to convert digital data (bits) into electrical signals (voltage levels or transitions) for transmission over a communication channel. Line coding also helps with signal timing and error detection (for example, by breaking up long strings of 0s or 1s). Some of the most popular line coding techniques are:
➢ Unipolar/polar/bipolar
➢ Return to zero/Non-return to zero
➢ Manchester/ Differential Manchester
➢ Miller
➢ Binary 3-zero substitution (B3ZS)
iv. Modulation: The encoded message is then converted into a form suitable for transmission
through the communication channel. This process is known as modulation, where the
digital signal (binary data) is mapped onto an analog carrier wave to enable transmission
over physical media such as radio waves, fiber optics, or coaxial cables. Common digital
modulation schemes include:
· Amplitude Shift Keying (ASK) – Represents binary data using variations in amplitude.
· Phase Shift Keying (PSK) – Uses phase shifts to encode information, such as Binary Phase
Shift Keying (BPSK) and Quadrature Phase Shift Keying (QPSK).
· Frequency Shift Keying (FSK) – Represents binary data using variations in the carrier frequency.
v. Channel Transmission: The modulated signal is sent over the physical channel, where noise and distortion, for example additive white Gaussian noise (AWGN), are introduced.
vi. Demodulation: At the receiver, the incoming waveform is demodulated to recover the transmitted bit stream.
· In BPSK demodulation, the receiver detects the phase shifts in the received signal and maps them back to binary values.
· In FSK demodulation, frequency differences are analyzed to retrieve the original bit stream.
Demodulation ensures that the digital sequence is recovered correctly, despite the potential distortions caused by noise in the channel.
vii. Channel Decoding: After demodulation, the received data undergoes channel decoding
to correct any errors introduced during transmission. Channel decoding applies the same
error detection and correction techniques as channel encoding, allowing the system to
recover the original data with high accuracy.
· If Hamming Code was used in encoding, Hamming Decoding is performed to correct single-bit
errors.
· If Convolutional Codes were applied, Viterbi decoding is used to recover the original bit
stream.
viii. Source Decoding: The final step in the communication system is source decoding, which
reconstructs the original message from its encoded form. This is the inverse of source
encoding and ensures that the transmitted information is recovered without loss (in the
case of lossless compression) or with acceptable degradation (in lossy compression). The
reconstructed message is then delivered to the end user or application.
In this project, the digital communication system is implemented through a series of processing
steps designed to ensure efficient transmission and error correction. The process begins with
Huffman Encoding for source compression, followed by the application of Hamming Code for
channel error correction. The encoded message is then line-coded using Miller Encoding to prepare
it for transmission. The signal is modulated using BPSK Modulation to make it suitable for the
communication channel. The system simulates a real-world environment by transmitting the
modulated signal over an AWGN channel, introducing noise and distortions. On the receiver side,
the signal undergoes demodulation, line decoding, channel decoding, and source decoding to
recover the original message. Finally, the system's performance is evaluated by comparing the
output with the original message, highlighting the effectiveness of error detection and correction
techniques employed throughout the system.
▪ Block diagram
To visualize the entire process of the message transmission and reception system, a block
diagram is provided below. This diagram outlines the various stages involved in the
transmission of a message, from the encoding process at the transmitter to the
decoding process at the receiver.
[Block diagram: message → Huffman encoding → Hamming encoding → Miller encoding → BPSK modulation → AWGN channel → BPSK demodulation → Miller decoding → Hamming decoding → Huffman decoding → recovered message]
▪ Project procedure
❖ Huffman Encoding:
Huffman encoding is a data compression technique used to reduce data size without losing any
of its details. This coding scheme generally assigns variable-length codes to characters based
on their frequency. The most frequently occurring characters get shorter codes, while less
frequent ones get longer codes, ensuring an optimal representation of data.
In this technique, the r least probable symbols are repeatedly combined while the code tree is built, where r = 2 for binary codes, r = 3 for ternary codes and r = 4 for quaternary codes.
❖ Huffman Decoding
The decoding process is the inverse of encoding: the Huffman-encoded binary sequence is converted back into the original symbol stream using the Huffman tree.
1. Start at the root of the Huffman tree.
2. Read the encoded bit stream one bit at a time.
3. For every 1, traverse toward the bottom branch, and for every 0, traverse toward the top branch.
4. Once a leaf node is reached, retrieve the corresponding character.
5. Restart from the root and continue reading bits until all encoded data is decoded.
❖ Hamming Encoding:
In single-error-correcting Hamming encoding, r redundant bits are added to m data bits according to the formula:
2^r ≥ m + r + 1
First, the data bits are placed in every position except the power-of-two positions (1, 2, 4, 8, …), which are reserved for parity bits:
D3 D5 D6 D7 D9 …
Each parity bit is then calculated by XORing the data bits it covers: the parity bit at position x (a power of two) covers the positions obtained by starting at position x, taking x positions, skipping the next x positions, taking the next x positions, and so on. For the first nine positions this gives:
P1 = D3 ⊕ D5 ⊕ D7 ⊕ D9
P2 = D3 ⊕ D6 ⊕ D7
P4 = D5 ⊕ D6 ⊕ D7
P8 = D9
The complete codeword is therefore laid out as:
P1 P2 D3 P4 D5 D6 D7 P8 D9 …
A short worked example follows.
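For example, consider encoding the illustrative data block 1011 (m = 4). The condition 2^r ≥ 4 + r + 1 is first satisfied by r = 3, so the codeword has 7 bits with parity bits at positions 1, 2 and 4, and data bits D3 = 1, D5 = 0, D6 = 1, D7 = 1. The parity bits become P1 = D3 ⊕ D5 ⊕ D7 = 0, P2 = D3 ⊕ D6 ⊕ D7 = 1 and P4 = D5 ⊕ D6 ⊕ D7 = 0, giving the transmitted codeword 0 1 1 0 0 1 1.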
At the receiver end, the parity bits are recalculated from the received data bits by the same XOR operations and compared with the received parity bits. If they are equal, no error is assumed to have occurred. If they differ, the pattern of mismatches, read as a binary number, gives the decimal position of the erroneous bit. After this bit is flipped back, the parity bits are removed and the data bits are passed on to the next processing stage.
This basic scheme corrects a single flipped bit. With an additional overall parity bit it becomes single-error-correction, double-error-detection (SEC-DED) coding: if two bits are flipped, the exact error positions cannot be determined, but the receiver can still recognize that an uncorrectable error has occurred.
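Continuing the example above, suppose the codeword 0 1 1 0 0 1 1 is transmitted but bit 5 arrives flipped, so 0 1 1 0 1 1 1 is received. Recalculating the parity bits from the received data bits gives P1 = 1 (received 0, mismatch), P2 = 1 (received 1, match) and P4 = 1 (received 0, mismatch). The mismatch pattern (P4 P2 P1) = 101 equals 5 in decimal, so bit 5 is flipped back and the original codeword is recovered.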
❖ Miller Encoding:
Miller encoding is a technique used in digital communication where the signal is encoded in such a
way that it helps in synchronization between the sender and the receiver, without needing a
separate clock.
• How it works:
o When transmitting a "1", a transition is made in the middle of the bit period.
o For a "0", no transition is made if the previous bit was a "1"; if the previous bit was also a "0", a transition is made at the start of the bit period.
This method ensures that the signal contains enough transitions for the receiver to stay aligned with the data, even when the bit stream is long.
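For example, for the illustrative sequence 1 0 1 1 0 0, the encoded waveform has a mid-bit transition for every "1", no transition during a "0" that follows a "1", and a transition at the bit boundary before the second "0" of the final "0 0" pair.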
❖ Miller Decoding:
Miller decoding is the process of interpreting the encoded signal to retrieve the original data.
• How it works:
o First a moving average filter is applied to the received signal to smooth out the noise.
The key to decoding is simply checking whether a transition occurred at the expected times for
each bit period.
❖ BPSK:
BPSK is a digital modulation technique where the phase of a carrier signal is shifted by 180° (π
radians) based on binary data (0 or 1).
Mathematical Representation:
s(t) = A cos(2π f_c t + θ), with θ = 0° or 180°
where:
• A = carrier amplitude
• f_c = carrier frequency
• θ = 0 for binary ‘0’ and θ = π for binary ‘1’ (a 180° phase shift)
The received BPSK signal is multiplied with the synchronized carrier to extract the modulated
information.
For each bit duration, the average of the resulting signal values over the entire bit period is calculated. The averaged value is compared with a predefined threshold of 0: if it is greater than or equal to the threshold, bit 0 is detected; otherwise, bit 1 is detected.
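This works because, ignoring noise and assuming unit amplitude, multiplying the received carrier by the synchronized local carrier gives cos²(2π f_c t) = ½(1 + cos(4π f_c t)) when bit 0 was sent, which averages to roughly +½ over a bit period, while the 180°-shifted carrier sent for bit 1 yields roughly −½; the sign of the average therefore identifies the bit.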
▪ MATLAB code of individual blocks
Huffman encoding block:
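The listing below is a minimal MATLAB sketch of a binary Huffman encoder in the spirit of the procedure described earlier, not the project's actual code; the example message, variable names and in-place tree construction are illustrative assumptions.

% Sketch: build binary Huffman codes from character frequencies and encode a message.
msg = 'the quick brown fox jumped over the lazy dog';   % example message
symbols = unique(msg);                                   % distinct characters
counts = arrayfun(@(s) sum(msg == s), symbols);          % character frequencies
n = numel(symbols);
codes = repmat({''}, 1, n);       % Huffman code (bit string) for each symbol
groups = num2cell(1:n);           % symbol indices grouped under each tree node
w = counts;                       % weight (total count) of each node
while numel(groups) > 1
    [~, order] = sort(w);         % the two lightest nodes are merged next
    a = order(1); b = order(2);
    for k = groups{a}, codes{k} = ['0' codes{k}]; end    % prepend branch bit 0
    for k = groups{b}, codes{k} = ['1' codes{k}]; end    % prepend branch bit 1
    groups{a} = [groups{a} groups{b}];                   % merge node b into node a
    w(a) = w(a) + w(b);
    groups(b) = []; w(b) = [];
end
encoded = '';                     % concatenate the per-character codes
for c = msg
    encoded = [encoded codes{symbols == c}]; %#ok<AGROW>
end

Because the two least frequent groups are merged at every step, the most frequent characters end up with the shortest codes, as described in the procedure.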
Hamming encoding block:
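A minimal MATLAB sketch of single-error-correcting Hamming encoding of one data block, following the position and parity rules from the procedure section; the data vector and variable names are illustrative rather than the project's own code.

% Sketch: Hamming encoding of one data block.
data = [1 0 1 1];                       % example data bits (m = 4)
m = numel(data);
r = 1;
while 2^r < m + r + 1                   % choose r so that 2^r >= m + r + 1
    r = r + 1;
end
n = m + r;                              % codeword length
isParity = false(1, n);
isParity(2.^(0:r-1)) = true;            % parity bits sit at positions 1, 2, 4, 8, ...
code = zeros(1, n);
code(~isParity) = data;                 % data bits fill the remaining positions
for i = 0:r-1
    p = 2^i;                            % parity bit at position p
    covered = find(bitand(1:n, p) ~= 0);               % positions checked by this parity bit
    code(p) = mod(sum(code(setdiff(covered, p))), 2);  % XOR of the covered data bits
end
disp(code)                              % prints 0 1 1 0 0 1 1 for the data block 1011

The covered positions are found with bitand because parity bit 2^i checks exactly those codeword positions whose binary index has bit i set, which is equivalent to the take-x/skip-x rule described above.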
Miller encoding block:
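A minimal MATLAB sketch of Miller (delay modulation) line coding that applies the transition rules given earlier; the bit stream, samples-per-bit value and variable names are illustrative assumptions.

% Sketch: Miller line coding with sps samples per bit at levels +1/-1.
bits = [1 0 1 1 0 0 1];          % example bit stream
sps = 20;                        % samples per bit (illustrative choice)
level = 1;                       % current line level
prev = 1;                        % value of the previous bit (initial assumption)
signal = [];
for k = 1:numel(bits)
    if bits(k) == 1
        % "1": hold the level for the first half, flip at mid-bit
        signal = [signal, level*ones(1, sps/2)]; %#ok<AGROW>
        level = -level;
        signal = [signal, level*ones(1, sps/2)]; %#ok<AGROW>
    else
        if prev == 0
            level = -level;      % "0" after "0": transition at the bit boundary
        end                      % "0" after "1": no transition during this bit
        signal = [signal, level*ones(1, sps)]; %#ok<AGROW>
    end
    prev = bits(k);
end
plot(signal), ylim([-1.5 1.5]), title('Miller-encoded waveform')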
BPSK encoding block:
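A minimal MATLAB sketch of BPSK modulation of a bit stream onto a cosine carrier, followed by a simple AWGN channel; the carrier frequency, sampling rate, bit duration and SNR are illustrative values, and the plain randn-based noise could be replaced by the Communications Toolbox function awgn().

% Sketch: BPSK modulation plus additive white Gaussian noise.
bits = [1 0 1 1 0];              % example bit stream
fc = 1000;                       % carrier frequency in Hz (illustrative)
fs = 20000;                      % sampling rate in Hz (illustrative)
Tb = 0.01;                       % bit duration in seconds (illustrative)
t = 0:1/fs:Tb-1/fs;              % time axis for one bit period
tx = [];
for b = bits
    theta = pi*b;                % phase 0 for bit '0', pi for bit '1', as in the report
    tx = [tx, cos(2*pi*fc*t + theta)]; %#ok<AGROW>
end
snr_db = 1;                                   % channel SNR in dB
noisePow = mean(tx.^2) / 10^(snr_db/10);      % noise power that gives this SNR
rx = tx + sqrt(noisePow)*randn(size(tx));     % received noisy BPSK signal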
Miller decoding block:
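A minimal MATLAB sketch of Miller decoding that first smooths the received waveform with a moving-average filter and then checks for a mid-bit transition in every bit period, as outlined in the procedure; it reuses the signal and sps variables from the Miller encoding sketch above, and the noise level and window length are illustrative.

% Sketch: Miller decoding by mid-bit transition detection.
rx = signal + 0.3*randn(size(signal));        % noisy copy of the Miller waveform above
win = 5;                                      % moving-average window length
smoothed = filter(ones(1, win)/win, 1, rx);   % moving-average filter to suppress noise
hard = sign(smoothed);                        % slice back to +1/-1 levels
nBits = floor(numel(hard)/sps);
bitsHat = zeros(1, nBits);
for k = 1:nBits
    seg = hard((k-1)*sps + (1:sps));          % samples belonging to the k-th bit
    firstHalf = sign(sum(seg(1:sps/2)));      % dominant level in the first half
    secondHalf = sign(sum(seg(sps/2+1:end))); % dominant level in the second half
    bitsHat(k) = firstHalf ~= secondHalf;     % a mid-bit level change means "1"
end
disp(bitsHat)                                 % should reproduce the transmitted bits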
Hamming decoding block:
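A minimal MATLAB sketch of Hamming decoding by syndrome calculation: the parity checks are recomputed and the pattern of failed checks points at the bit to flip, as described in the procedure. It continues from the Hamming encoding sketch (the code vector), and the injected error position is illustrative.

% Sketch: Hamming decoding via syndrome calculation and single-bit correction.
rxCode = code;                      % codeword from the Hamming encoding sketch
rxCode(5) = ~rxCode(5);             % flip one bit to simulate a channel error
n = numel(rxCode);
r = floor(log2(n)) + 1;             % number of parity bits in the codeword
syndrome = 0;
for i = 0:r-1
    p = 2^i;
    covered = find(bitand(1:n, p) ~= 0);      % positions checked by parity bit p
    if mod(sum(rxCode(covered)), 2) ~= 0      % failed parity check
        syndrome = syndrome + p;              % accumulate the error position
    end
end
if syndrome > 0
    rxCode(syndrome) = ~rxCode(syndrome);     % correct the single-bit error
end
isParity = false(1, n); isParity(2.^(0:r-1)) = true;
dataHat = rxCode(~isParity);                  % drop parity bits to recover the data
disp(dataHat)                                 % prints 1 0 1 1, the original data block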
▪ Simulation Results
Collected results for SNR = 1
However, the system runs into problems when the AWGN channel noise power is very high:
▪ Discussion:
The PSD shows a very sharp peak, indicating a narrowband signal.
Coherent BPSK demodulation using a local carrier is performed to extract the transmitted data.
The well-defined opening in the eye diagram indicates high noise immunity and minimal intersymbol interference.
▪ Improvement Ideas
o Arithmetic Coding: more efficient than Huffman coding for small symbol sets, since it is not restricted to whole-bit code lengths
o For stronger error correction, Low-Density Parity-Check (LDPC) codes can be used
o Bipolar line coding schemes could be utilized for their inherent error-detection capability (bipolar rule violations reveal transmission errors)