
BHARATIYA VIDYA BHAVAN'S

SARDAR PATEL INSTITUTE OF TECHNOLOGY

MUNSHI NAGAR, ANDHERI (W), MUMBAI-58

(An Autonomous Institute Affiliated to Mumbai University)

ELECTRONICS AND TELECOMMUNICATION ENGINEERING DEPARTMENT

Prepared by:

Name: Rohan Kadam
UID: 2023201010
Class: TE EXTC
Batch: A3
Subject: Analog and Digital Communication
Experiment No: 2

Title: Huffman Encoding and Decoding

Aim: Given a text file or text message, to create a dictionary, Huffman encode and decode
symbols of the dictionary, and to find the entropy and efficiency of the coding method.

Theory:

Huffman Encoding: Huffman encoding is a lossless data compression technique that reduces
the size of data by assigning shorter binary codes to more frequently occurring symbols and
longer codes to less frequent ones. Because the code lengths track symbol frequency, the
average number of bits required to represent each character decreases, which is what
achieves the compression.
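
For instance, here is a minimal sketch using MATLAB's huffmandict; the four-symbol
alphabet and probabilities below are illustrative, not taken from the experiment:

% Illustrative four-symbol source: 'a' is the most frequent, 'd' the least.
symbols = {'a', 'b', 'c', 'd'};
probs   = [0.5 0.25 0.125 0.125];
dict = huffmandict(symbols, probs);   % build the Huffman code
% Typical result: 'a' gets a 1-bit code, 'b' a 2-bit code, and 'c'/'d'
% 3-bit codes, so the average length is 0.5*1 + 0.25*2 + 2*(0.125*3) = 1.75 bits.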

Huffman Decoding: Huffman decoding is the reverse process of encoding. It involves using the
generated Huffman dictionary to map the compressed binary sequence back to its original
characters. The decoding algorithm reads the encoded data bit by bit and traverses the Huffman
tree accordingly until it reaches a leaf node, which represents a character. The process continues
until the entire encoded message is decoded back to the original text.
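
The same prefix-matching idea can be written out by hand. The helper below is a
hypothetical illustration of what huffmandeco does internally (dict is the N-by-2 cell
array returned by huffmandict, bits is a 0/1 vector); it relies on the code being
prefix-free, so the first codeword that matches a growing window of bits is always the
correct one:

function out = simpleDecode(bits, dict)
    % Hypothetical decoder for illustration; use huffmandeco in practice.
    out = {};
    start = 1;
    for stop = 1:numel(bits)
        candidate = bits(start:stop);
        for k = 1:size(dict, 1)
            if isequal(dict{k, 2}(:)', candidate(:)')
                out{end+1} = dict{k, 1};   % matched a complete codeword
                start = stop + 1;          % resume scanning after the match
                break
            end
        end
    end
    out = [out{:}];   % concatenate the decoded characters
end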

Entropy: Entropy is a measure of the amount of uncertainty or randomness in the data. In the
context of data compression, entropy represents the minimum number of bits required to encode
the data, assuming an optimal encoding method. It is calculated using the formula:

H(X) = - Σ P(xi) · log2 P(xi)

Where:
P(xi) is the probability of occurrence of the i-th symbol.
H(X) is the entropy in bits.
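
As a quick check, the entropy of a source with probabilities 0.5, 0.25, 0.125 and 0.125
can be computed in one line (the probability vector here is illustrative):

p = [0.5 0.25 0.125 0.125];
H = -sum(p .* log2(p))   % = 0.5 + 0.5 + 0.375 + 0.375 = 1.75 bits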

Efficiency: The efficiency of the Huffman coding method is defined as the ratio of the entropy
of the source to the average codeword length. It indicates how close the compression method
comes to the optimal encoding (entropy). It is given by the formula:

Efficiency η = H(X) / L, where L = Σ P(xi) · l(xi) is the average codeword length and
l(xi) is the length of the codeword assigned to the i-th symbol.
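
For example, for a source with probabilities 0.5, 0.25, 0.125 and 0.125, the Huffman code
lengths are 1, 2, 3 and 3 bits, so L = 1.75 bits = H(X) and the efficiency is
1.75 / 1.75 = 1 (100%). Efficiency reaches exactly 1 whenever every symbol probability is a
negative power of two.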
Procedure:

1. Input a text message (the solved example or any text message of your choice).
2. Input the probabilities for the message.
3. Generate the Huffman dictionary using huffmandict.
4. Display the dictionary and the average codeword length.
5. Input a new text message (created using the same alphabet as the dictionary) based on
the generated dictionary.
6. Huffman encode it using the function huffmanenco and display the encoded output.
7. Find the entropy and the compression ratio or efficiency.
8. Decode the new message using huffmandeco to get back the original data, and display the
decoded output. Refer to the Communications Toolbox for the Huffman-related functions.

Code:

function huffmanAnalysis()

    % Take input from the user
    inputString = input('Enter the string to be encoded (can include characters or numbers): ', 's');

    % Get unique symbols and their frequencies
    [uniqueSymbols, ~, idx] = unique(inputString);
    frequencies = accumarray(idx(:), 1).';   % histc is deprecated; accumarray counts occurrences

    % Calculate probabilities of each symbol
    probabilities = frequencies / sum(frequencies);

    % Generate Huffman dictionary
    dict = huffmandict(num2cell(uniqueSymbols), probabilities);

    % Encode the input string (the dictionary symbols are cells of chars,
    % so the signal must be converted to a cell array as well)
    encodedString = huffmanenco(num2cell(inputString), dict);

    % Decode the encoded string
    decodedString = huffmandeco(encodedString, dict);

    % Convert decodedString cell array to a single string
    if iscell(decodedString)
        decodedString = [decodedString{:}];
    end

    % Calculate entropy
    entropy = -sum(probabilities .* log2(probabilities + eps));  % eps avoids log2(0)

    % Calculate average codeword length
    avgCodeLength = 0;
    for i = 1:size(dict, 1)
        avgCodeLength = avgCodeLength + length(dict{i, 2}) * probabilities(i);
    end

    % Calculate efficiency and compression ratio
    efficiency = entropy / avgCodeLength;
    % Compression ratio: fixed-length bits per symbol vs. Huffman average length
    compressionRatio = ceil(log2(length(uniqueSymbols))) / avgCodeLength;

    % Display results
    fprintf('Huffman Dictionary:\n');
    for i = 1:size(dict, 1)
        fprintf('%s: %s\n', dict{i, 1}, num2str(dict{i, 2}));
    end

    fprintf('\nEncoded String (as binary array): ');
    disp(encodedString);

    fprintf('Decoded String: %s\n', decodedString);
    fprintf('Entropy: %.4f bits\n', entropy);
    fprintf('Average Codeword Length: %.4f bits\n', avgCodeLength);
    fprintf('Efficiency: %.4f\n', efficiency);
    fprintf('Compression Ratio: %.4f\n', compressionRatio);

end
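
To run the program, save it as huffmanAnalysis.m and call huffmanAnalysis from the MATLAB
command window. Note that huffmandict, huffmanenco and huffmandeco are part of the
Communications Toolbox, which must be installed.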

Results:
Self study: Given an image, perform Huffman encoding and decoding.

Code:

% Clearing all variables and the screen
clear all;
close all;
clc;

number_of_colors = 256;

% Reading the image
a = imread('peppers.png');
figure(1), imshow(a)

% Converting the image to grayscale
% I = rgb2gray(a);
% Use an indexed image instead of grayscale
[I, myCmap] = rgb2ind(a, number_of_colors);

% Size of the image
[m, n] = size(I);
Totalcount = m * n;

% Index variable used to fill the probability array
cnt = 1;

% Computing the probability of each index value
pro = zeros(256, 1);
for i = 0:255
    k = (I == i);
    count = sum(k(:));
    % pro array holds the probabilities
    pro(cnt) = count / Totalcount;
    cnt = cnt + 1;
end

% Probabilities can also be found using histcounts
pro1 = histcounts(I, 0:256, 'Normalization', 'probability');

cumpro = cumsum(pro);   % if the cumulative sum is needed
sigma = sum(pro);       % if the sum is needed; should always be 1.0

% Symbols for the image
symbols = 0:255;

% Huffman code dictionary
dict = huffmandict(symbols, pro);

% Reshape the image array into a column vector; cast to double so the
% values match the class of the dictionary symbols
newvec = double(reshape(I, [numel(I), 1]));

% Huffman encoding
hcode = huffmanenco(newvec, dict);

% Huffman decoding
dhsig1 = huffmandeco(hcode, dict);

% Converting dhsig1 from double to uint8
dhsig = uint8(dhsig1);

% Vector to array conversion
back = reshape(dhsig, [m n]);

% Converting the image from indexed back to RGB
% [deco, map] = gray2ind(back, 256);
% RGB = ind2rgb(deco, map);
RGB = ind2rgb(back, myCmap);

% Note: JPEG is lossy; save as PNG to keep the decoded image bit-exact
imwrite(RGB, 'decoded.JPG');
figure(2), imshow(RGB)

% End of the Huffman coding
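
A quick sanity check (not part of the original listing) is to confirm that the round trip
is lossless by comparing the decoded index image with the original:

% Verify lossless recovery: decoded indices must match the original exactly.
if isequal(I, back)
    disp('Huffman round trip is lossless: decoded image matches the original.');
else
    disp('Mismatch between original and decoded image.');
end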


Conclusion:

We implemented Huffman encoding and decoding for a given text file, creating a symbol
dictionary and generating efficient binary codes. The entropy of the source and the coding
efficiency were calculated, and the results confirmed that Huffman encoding is an effective
method for lossless data compression, achieving near-optimal performance relative to the
calculated entropy.
