HPC Miniproject
2024-2025
“Implement Huffman Encoding on GPU”
Submitted in partial fulfillment of the degree of
Bachelor of Engineering
in
Computer Engineering
By
Snehal Abnave (COBA03)
Shubham Upadhyay (COBA53)
Sahil Narale (COBC12)
Shreyas Chalke (COBC13)
CERTIFICATE
This is to certify that the mini project report entitled “Implement Huffman Encoding on
GPU”, being submitted by Snehal Abnave (COBA03), Shubham Upadhyay (COBA53), Sahil
Narale (COBC12), and Shreyas Chalke (COBC13), is a record of bona fide work carried out by
them under the supervision and guidance of Prof. P.R. Dongre, in partial fulfillment of the
requirements for the BE (Computer Engineering) – 2019 course of Savitribai Phule Pune
University, Pune, in the academic year 2024-2025.
Date:
Place: Pune
Principal
This mini project report has been examined by us, as per the requirements of Savitribai Phule
Pune University, Pune, at SINHGAD ACADEMY OF ENGINEERING, Pune – 411048.
ACKNOWLEDGEMENT
First and foremost, praise and thanks to God, the Almighty, for the showers of blessings
throughout our project work that helped us complete it successfully.

We would like to express our deep and sincere gratitude to our subject teacher,
Prof. P.R. Dongre, for giving us the opportunity to do this project and for providing
invaluable guidance throughout. Her dynamism, vision, sincerity, and motivation have
deeply inspired us. She taught us the methodology to carry out the work and to present
the project as clearly as possible. It was a great privilege and honor to work and study
under her guidance, and we are extremely grateful for everything she has offered us,
including her friendship, empathy, and great sense of humor.

We are extremely grateful to all group members, Snehal Abnave, Shubham Upadhyay,
Sahil Narale, and Shreyash Chalke, for their dedication and consistency towards this
mini project, and thankful for the resources provided by each group member, which
played a crucial role in the accomplishment of this project.
Name Sign
Snehal Abnave
Shubham Upadhyay
Sahil Narale
Shreyash Chalke
CONTENTS
Sr. No TITLE
1. Abstract
2. Introduction
3. Problem Statement
4. Motivation
5. Objectives
6. Theory
7. Output
8. Conclusion
9. References
Abstract
Huffman Encoding is a fundamental data compression technique that reduces the size of
files without losing any data. It works by assigning shorter binary codes to frequently
occurring characters and longer codes to rare ones.
This project focuses on implementing Huffman Encoding using CUDA to leverage GPU
parallelism. The goal is to accelerate parts of the Huffman process such as character
frequency counting and data encoding. By using the parallel computation capability of the
GPU, we aim to optimize performance and reduce processing time, especially for large
inputs.
This report explains the objectives, theory, implementation strategy, output analysis, and
final conclusions based on the CUDA-based Huffman encoder we developed.
Introduction
Data compression plays an essential role in computer science, allowing efficient
storage and transmission of information. Huffman encoding is a popular
technique that uses variable-length codes to represent characters based on their
frequency. This project explores a GPU-accelerated implementation of Huffman
encoding using CUDA. The goal is to speed up stages of the process, such as
frequency counting and encoding, by leveraging the parallel processing
power of modern GPUs.
Problem Statement
To implement Huffman Encoding on a GPU using CUDA, parallelizing the compute-intensive
stages of the algorithm (character frequency counting and data encoding) so that large inputs
can be compressed faster than with a purely sequential CPU implementation.
Motivation
Sequential Huffman encoding processes the input one character at a time, which becomes slow
for large files. Stages such as frequency counting and per-character encoding are naturally
data-parallel, so the massive thread parallelism of modern GPUs can be used to reduce
processing time, especially for large inputs.
Objectives
1. To implement Huffman Encoding using CUDA.
2. To parallelize character frequency counting on the GPU.
3. To parallelize the encoding of the input data using the generated Huffman codes.
4. To reduce processing time compared to a sequential implementation, especially for large inputs.
Theory
1. Frequency Calculation
Count how often each character appears in the input data. This step is data-parallel, so it
maps naturally onto a CUDA kernel (a sketch is given after this list).
2. Build a Min-Heap (Priority Queue)
Each node in the heap represents a character and its frequency.
3. Construct the Huffman Tree
Repeatedly combine the two lowest-frequency nodes into a new internal
node. This forms a binary tree with frequencies as weights.
4. Generate the Huffman Codes
Traverse the tree from the root to each leaf, appending 0 for a left branch and 1 for a right
branch; the accumulated bits form that character's code (steps 2 to 4 are illustrated in the
host-side sketch after this list).
5. Encode the Data
Replace each character of the input with its Huffman code to produce the compressed bit stream.
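
The frequency-calculation step (step 1) is the part that benefits most directly from the GPU.
Below is a minimal CUDA sketch of this step, assuming the input is a plain byte buffer copied
to device memory; the kernel and variable names (countFrequencies, d_input, d_histogram) are
illustrative and are not taken from our actual project code.

#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

// Each thread processes many bytes via a grid-stride loop and updates a
// 256-bin histogram in global memory with atomicAdd to avoid races.
__global__ void countFrequencies(const unsigned char *d_input, size_t n,
                                 unsigned int *d_histogram)
{
    size_t idx    = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = idx; i < n; i += stride)
        atomicAdd(&d_histogram[d_input[i]], 1u);
}

int main()
{
    const char *text = "huffman encoding on gpu";   // small sample input
    size_t n = strlen(text);

    unsigned char *d_input;
    unsigned int  *d_histogram;
    cudaMalloc(&d_input, n);
    cudaMalloc(&d_histogram, 256 * sizeof(unsigned int));
    cudaMemcpy(d_input, text, n, cudaMemcpyHostToDevice);
    cudaMemset(d_histogram, 0, 256 * sizeof(unsigned int));

    countFrequencies<<<64, 256>>>(d_input, n, d_histogram);

    unsigned int h_histogram[256];
    cudaMemcpy(h_histogram, d_histogram, sizeof(h_histogram),
               cudaMemcpyDeviceToHost);
    for (int c = 0; c < 256; ++c)
        if (h_histogram[c] > 0)
            printf("'%c' : %u\n", c, h_histogram[c]);

    cudaFree(d_input);
    cudaFree(d_histogram);
    return 0;
}

A further optimization that could be applied is to accumulate per-block histograms in shared
memory and merge them into the global histogram once per block, reducing atomic contention.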
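
Steps 2 to 4 operate on at most 256 leaf nodes, so they are cheap enough to run on the CPU even
in a GPU-accelerated encoder. The following host-side C++ sketch builds the min-heap, the
Huffman tree, and the code table from a frequency histogram; the names (HuffNode, buildCodes)
are illustrative, and the histogram here is computed on the CPU only to keep the example
self-contained.

#include <cstdio>
#include <queue>
#include <string>
#include <vector>

// One node of the Huffman tree: a leaf holds a symbol, an internal node
// holds the combined frequency of its two children.
struct HuffNode {
    unsigned char symbol;
    unsigned long freq;
    HuffNode *left;
    HuffNode *right;
    HuffNode(unsigned char s, unsigned long f,
             HuffNode *l = nullptr, HuffNode *r = nullptr)
        : symbol(s), freq(f), left(l), right(r) {}
};

// Ordering for std::priority_queue: smallest frequency on top (min-heap).
struct Compare {
    bool operator()(const HuffNode *a, const HuffNode *b) const {
        return a->freq > b->freq;
    }
};

// Walk the finished tree; a left edge appends '0', a right edge appends '1'.
void buildCodes(const HuffNode *node, const std::string &prefix,
                std::vector<std::string> &codes)
{
    if (!node) return;
    if (!node->left && !node->right) { codes[node->symbol] = prefix; return; }
    buildCodes(node->left,  prefix + "0", codes);
    buildCodes(node->right, prefix + "1", codes);
}

int main()
{
    // In the GPU version this histogram would come from the frequency kernel.
    std::string sample = "huffman encoding on gpu";
    unsigned long histogram[256] = {0};
    for (unsigned char c : sample) histogram[c]++;

    std::priority_queue<HuffNode*, std::vector<HuffNode*>, Compare> heap;
    for (int c = 0; c < 256; ++c)
        if (histogram[c] > 0)
            heap.push(new HuffNode((unsigned char)c, histogram[c]));

    // Repeatedly merge the two lowest-frequency nodes until one root remains.
    while (heap.size() > 1) {
        HuffNode *a = heap.top(); heap.pop();
        HuffNode *b = heap.top(); heap.pop();
        heap.push(new HuffNode(0, a->freq + b->freq, a, b));
    }

    std::vector<std::string> codes(256);
    buildCodes(heap.top(), "", codes);
    for (int c = 0; c < 256; ++c)
        if (!codes[c].empty())
            printf("'%c' -> %s\n", c, codes[c].c_str());
    return 0;
}

Once the code table is known, it can be copied to the GPU so that the encoding step (step 5)
can also be parallelized: each thread looks up the code for its character, and a prefix sum
over the code lengths determines where each thread writes its bits in the output stream.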
Output
Conclusion
References