HTCS501 Unit 4

The document outlines a Cyber Security Honours Degree program, focusing on key concepts and techniques for protecting digital assets from cyber threats. It includes a syllabus on Data Encryption and Compression, detailing the importance of data compression, its techniques, and coding methods. Additionally, it discusses major communication models, emphasizing their relevance in understanding information flow between senders and receivers.

CYBER SECURITY HONOURS’ DEGREE

Ansh Kasaudhan

2024-2025

Cyber Security

INTRODUCTION

This cybersecurity report explores key concepts and


techniques for protecting digital assets from
cyber threats. Through practical exercises,
we examine network security, encryption,
intrusion detection, and ethical hacking. The
report aims to enhance understanding of
vulnerabilities, threat mitigation, and security
protocols essential for safeguarding
information systems. By applying theoretical
knowledge to real-world scenarios, this lab
underscores the critical importance of robust
cybersecurity measures in maintaining the
integrity and confidentiality of digital data.


HTCS 501: DATA ENCRYPTION AND COMPRESSION ANSH KASAUDHAN



Syllabus




Unit 4

Data Compression
1. Introduction to Data Compression
Data compression is a technique used to reduce the size of digital data, making it easier to store and transmit
efficiently. It involves transforming data into a compact format while maintaining its essential information.
Compression techniques are widely used in multimedia (images, audio, and video), networking (file
transfers, web pages), and storage systems (databases, cloud storage).

Key Aspects:

• Reduces storage requirements.


• Speeds up data transmission.
• Minimizes costs associated with data storage and bandwidth.

2. Need for Data Compression


Data compression is essential for optimizing resources in computing and communication. The primary
reasons for using compression techniques are:

a) Storage Optimization

• Reduces file sizes, allowing more data to be stored in limited space.


• Used in databases, cloud storage, and personal devices.

b) Bandwidth Efficiency

• Smaller files require less bandwidth, improving transmission speed.


• Crucial for streaming services (Netflix, YouTube) and cloud applications.

c) Cost Reduction

• Saves infrastructure costs by minimizing storage and data transfer expenses.


• Companies like Google and Amazon use compression to optimize cloud storage.

d) Improved Performance

• Reducing data size leads to faster loading times in web applications.


• Enhances performance in real-time systems like gaming and video conferencing.

e) Data Integrity and Security

• Compression improves backup efficiency, making it easier to store and retrieve data.
• Data is usually compressed before encryption, since properly encrypted data looks random and cannot be compressed further.



Fundamentals of Data Compression

Data compression is based on the principle of representing information in a more efficient way by
eliminating redundancy. The core fundamentals include redundancy removal, entropy, lossless and lossy
compression, and compression ratio.

1. Redundancy in Data
Redundancy refers to repetitive or unnecessary information present in data. Removing this redundancy helps
in compressing data without losing essential information.

Types of Redundancy

a) Spatial Redundancy

• Occurs when neighbouring pixels or data values are similar.


• Common in images and video compression.
• Example: In an image with a uniform background, the pixel values are nearly identical.
Compression algorithms like JPEG exploit this redundancy.

b) Temporal Redundancy

• Occurs when data is repeated over time.


• Found in video and audio streams where consecutive frames or sound samples are similar.
• Example: In a video, successive frames often contain the same background. MPEG video
compression removes redundant frames.

c) Statistical Redundancy

• Occurs when some symbols in a dataset appear more frequently than others.
• Used in Huffman Coding and Arithmetic Coding to assign shorter codes to more frequent
symbols.
• Example: In English text, letters like "e" and "t" appear more often, so they get shorter codes in
Huffman compression.

d) Coding Redundancy

• Happens when fixed-length encoding wastes space.


• Variable-length coding techniques like Huffman Coding remove this redundancy.
• Example: Instead of using 8 bits to represent each character in a text file, Huffman Coding assigns
shorter codes to frequent characters.

e) Psycho-Visual Redundancy (Perceptual Redundancy)

• Some details in images, audio, and video are imperceptible to the human eye or ear.
• Lossy compression removes these unnoticeable details.
• Example: MP3 removes frequencies that humans cannot hear, and JPEG removes high-frequency
details not easily noticed.

2. Entropy in Data Compression


Entropy is a measure of randomness or unpredictability in data, introduced by Claude Shannon in
information theory.
a) Definition

• It represents the minimum number of bits needed to encode a message without loss.
• A dataset with high entropy has less redundancy and is harder to compress.
• A dataset with low entropy has more predictable patterns, making it easier to compress.

b) Formula for Entropy

For a set of symbols with probabilities pi, entropy is given by:

H = − ∑ pi log2 pi

where:

• H = entropy (in bits per symbol)


• pi = probability of occurrence of symbol i

c) Example of Entropy Calculation

Consider a text file with four characters:

Symbol | Probability pi | −pi log2 pi
A | 0.5 | 0.5
B | 0.25 | 0.5
C | 0.125 | 0.375
D | 0.125 | 0.375

Total Entropy H = 1.75 bits/symbol


This means we need at least 1.75 bits per symbol for the most efficient encoding.
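The entropy of the table above can be computed directly (a minimal Python sketch; the function name `entropy` is our own):

```python
import math

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Probabilities from the table above: A=0.5, B=0.25, C=0.125, D=0.125
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75
```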

d) Practical Use of Entropy

• Huffman Coding uses entropy to generate optimal codes.


• Arithmetic Coding assigns a single number based on entropy.
• A perfectly compressed file has entropy equal to the average bit-length per symbol.

3. Lossless vs. Lossy Compression


a) Lossless Compression

• No data is lost during compression.


• Original data can be perfectly reconstructed.
• Used for text files, medical images, financial data, and software storage.

Examples of Lossless Techniques:

1. Huffman Coding – Assigns variable-length codes based on frequency.


2. Run-Length Encoding (RLE) – Replaces repeated sequences with a count.
3. Lempel-Ziv (LZ77, LZW) – Dictionary-based techniques used in ZIP and PNG files.

Example of Lossless Compression (RLE)

Original Data: AAAAABBBBCCCCDDDDDDDD


Compressed: 5A4B4C8D
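A run-length encoder is only a few lines; this sketch (function name ours) reproduces the example above:

```python
from itertools import groupby

def rle_encode(data):
    """Replace each run of identical characters with <count><character>."""
    return "".join(f"{len(list(run))}{char}" for char, run in groupby(data))

print(rle_encode("AAAAABBBBCCCCDDDDDDDD"))  # 5A4B4C8D
```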



b) Lossy Compression

• Some data is permanently lost to achieve higher compression.


• Used in images, audio, and video, where perfect reconstruction is unnecessary.

Examples of Lossy Techniques:

1. JPEG – Compresses images by removing fine details.


2. MP3 – Removes inaudible sound frequencies.
3. H.264 / AV1 – Reduces video file sizes using frame prediction.

Example of Lossy Compression (JPEG)

• Converts an image to the YUV color space.


• Uses Discrete Cosine Transform (DCT) to remove high-frequency components.
• Applies quantization to reduce data size.

4. Compression Ratio
Compression ratio measures how effectively data is compressed.

a) Formula for Compression Ratio

Compression Ratio = Original Size / Compressed Size

• A higher compression ratio means better compression.


• A compression ratio of 2:1 means the file is reduced to half its original size.

b) Example Calculation

Original File Size = 10 MB


Compressed File Size = 2 MB

Compression Ratio = 10 / 2 = 5:1

This means the file is compressed to 20% of its original size.

c) Trade-offs in Compression Ratio

• Lossless compression achieves lower ratios but retains all data.


• Lossy compression achieves higher ratios but loses some details.

Fundamental Concepts of Coding in Data


Compression
Coding plays a crucial role in data compression by efficiently representing data using fewer bits. Different
coding techniques help reduce redundancy and improve compression efficiency. The fundamental concepts
of coding in data compression include variable-length encoding, entropy coding, dictionary-based
coding, and transform coding.



1. Types of Coding Techniques in Compression

a) Variable-Length Coding (VLC)

• Assigns shorter codes to more frequent symbols and longer codes to less frequent ones.
• Reduces average code length, leading to better compression.
• Used in Huffman Coding and Shannon-Fano Coding.

Example:

Symbol | Frequency | Huffman Code
A | 20 | 0
B | 10 | 10
C | 5 | 110
D | 5 | 111

Here, more frequent symbols get shorter codes, reducing overall file size.
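A Huffman table can be built with a priority queue of subtrees. This is a minimal sketch (the function name and the skewed frequencies 20/10/5/5 are illustrative, chosen so the code lengths differ):

```python
import heapq

def huffman_codes(frequencies):
    """Build {symbol: code} by repeatedly merging the two lightest subtrees.
    Each heap entry is (weight, tiebreak, {symbol: code_so_far})."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        # Prefix the left subtree's codes with 0 and the right's with 1.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_codes({"A": 20, "B": 10, "C": 5, "D": 5}))
# {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
```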

b) Entropy Coding

• Uses probability distribution to assign optimal codes.


• Minimizes the number of bits required to represent a message.
• Examples: Huffman Coding, Arithmetic Coding.

Key Properties:

• Huffman Coding: Assigns shorter codes based on symbol frequency.


• Arithmetic Coding: Represents an entire message as a single fractional number in [0,1], achieving
better compression than Huffman Coding in some cases.

Example (Arithmetic Coding for “AB”):

If:

• A = [0, 0.6)
• B = [0.6, 1.0)

Then encoding "AB" results in a single number in the interval [0.36, 0.60).
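The interval narrowing can be traced with a short sketch (function name ours; this computes only the final interval, not a complete arithmetic coder):

```python
def arithmetic_interval(message, ranges):
    """Narrow [low, high) once per symbol; any number in the
    final interval identifies the whole message."""
    low, high = 0.0, 1.0
    for symbol in message:
        span = high - low
        s_low, s_high = ranges[symbol]
        low, high = low + span * s_low, low + span * s_high
    return low, high

print(arithmetic_interval("AB", {"A": (0.0, 0.6), "B": (0.6, 1.0)}))
# (0.36, 0.6)
```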

c) Dictionary-Based Coding

• Instead of encoding individual symbols, common sequences (patterns) are stored in a dictionary
and referenced by shorter codes.
• Efficient for large text-based data with repeated patterns.
• Used in Lempel-Ziv (LZ77, LZW), DEFLATE (ZIP, PNG compression).

Example (LZW Encoding of “ABABABA”):

1. Initial dictionary: {A: 1, B: 2}
2. “AB” → new entry in dictionary → Code: 3
3. “BA” → new entry in dictionary → Code: 4
4. “ABA” → new entry in dictionary → Code: 5
5. Encoded output: 1, 2, 3, 5
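The LZW steps above can be reproduced with a short encoder (a sketch; function name ours):

```python
def lzw_encode(data, dictionary):
    """Greedily match the longest known pattern; every miss emits a
    code and adds the extended pattern to the dictionary."""
    dictionary = dict(dictionary)          # don't mutate the caller's dict
    next_code = max(dictionary.values()) + 1
    w, output = "", []
    for c in data:
        if w + c in dictionary:
            w += c
        else:
            output.append(dictionary[w])
            dictionary[w + c] = next_code
            next_code += 1
            w = c
    if w:
        output.append(dictionary[w])
    return output

print(lzw_encode("ABABABA", {"A": 1, "B": 2}))  # [1, 2, 3, 5]
```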

d) Run-Length Encoding (RLE)

• Compresses sequences of repeated symbols by storing the symbol and its count.
• Best suited for images with large uniform regions (e.g., black-and-white images, simple text).

Example (RLE Encoding of "AAABBBCCCC")


Original: AAABBBCCCC
Compressed: 3A3B4C

Application:

• Used in TIFF, BMP images and fax transmissions.

e) Transform Coding

• Converts data from one form to another before compression.


• Commonly used in lossy compression methods (e.g., JPEG, MP3).
• Examples: Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT).

How It Works (JPEG Compression Example):

1. Convert RGB to YUV (Color Space Transformation).


2. Apply DCT to Image Blocks → Converts spatial data to frequency components.
3. Quantization → Removes less important high-frequency components.
4. Entropy Coding (e.g., Huffman Coding) to compress data further.

2. Key Properties of Coding Techniques


a) Uniquely Decodable Codes

• Ensures that a code sequence can be decoded in only one way.


• Example: Huffman Codes are uniquely decodable, but simple fixed-length codes may not be.

b) Prefix Codes

• No code word is a prefix of another code word, preventing ambiguity during decoding.
• Huffman Coding is a prefix code.

Example:

Symbol Code
A 0
B 10
C 110
D 111

Since no code is a prefix of another, decoding is efficient.
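Because the table above is prefix-free, a decoder can walk the bit string left to right and emit a symbol at each match (a minimal sketch; function name ours):

```python
def prefix_decode(bits, code_table):
    """Accumulate bits until they match a code word; prefix-freeness
    guarantees each match is unambiguous."""
    decode = {code: symbol for symbol, code in code_table.items()}
    result, current = [], ""
    for bit in bits:
        current += bit
        if current in decode:
            result.append(decode[current])
            current = ""
    return "".join(result)

table = {"A": "0", "B": "10", "C": "110", "D": "111"}
print(prefix_decode("010110111", table))  # ABCD
```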

c) Compression Ratio and Code Efficiency

• Compression efficiency depends on how closely the code length matches the entropy of the
source.
• Formula for Code Efficiency: Efficiency = Entropy / Average Code Length
o Higher efficiency means better compression.
o If entropy = 2.3 bits and average code length = 2.5 bits, efficiency = 2.3 / 2.5 = 92%.
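Code efficiency can be checked numerically (a sketch; function name ours). For the earlier entropy example (probabilities 0.5, 0.25, 0.125, 0.125 with code lengths 1, 2, 3, 3), the average code length equals the entropy, so efficiency is 100%:

```python
import math

def code_efficiency(probs, code_lengths):
    """Efficiency = entropy / average code length."""
    H = -sum(p * math.log2(p) for p in probs)
    avg = sum(p * l for p, l in zip(probs, code_lengths))
    return H / avg

print(code_efficiency([0.5, 0.25, 0.125, 0.125], [1, 2, 3, 3]))  # 1.0
print(f"{2.3 / 2.5:.0%}")  # 92%
```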

3. Comparison of Coding Techniques


Feature | Huffman Coding | Arithmetic Coding | LZW | RLE | Transform Coding
Type | Entropy Coding | Entropy Coding | Dictionary | Run-Length | Lossy (Transform)
Lossless? | Yes | Yes | Yes | Yes | No
Best For | General text & images | Complex data patterns | Text, ZIP, PNG | Repeated patterns | Images, Audio, Video
Complexity | Moderate | High | Low | Very Low | High
Efficiency | High | Very High | Medium | Low | Very High

4. Applications of Coding in Compression


a) File Compression

• ZIP, RAR (use Huffman + Lempel-Ziv)


• PNG (LZ77 + Huffman Coding)

b) Image Compression

• JPEG (DCT + Huffman Coding)


• GIF (LZW)

c) Audio Compression

• MP3 (Psychoacoustic Model + Huffman Coding)


• AAC (Advanced Audio Coding) (More efficient than MP3)

d) Video Compression

• H.264, AV1 (Motion Compensation + Huffman Coding)


• MPEG-4 (Entropy Coding + Transform Coding)

Major Communication Models and Their Modes


Communication models help in understanding how information flows from a sender to a receiver. These
models provide a structured way to analyze the effectiveness, challenges, and dynamics of communication.
The major communication models are categorized into Linear, Interactive, and Transactional models,
each highlighting different aspects of the communication process.

1. Linear Models of Communication


Linear models describe one-way communication, where the sender transmits a message to the receiver
without expecting immediate feedback. These models are useful for situations like broadcasting, speeches,
and advertisements.

a) Aristotle’s Communication Model (384–322 BC)

• One of the earliest models, focused on persuasive communication.


• Components:
Speaker → Message → Audience
• This model highlights that the effectiveness of communication depends on the credibility of the
speaker, the emotional appeal of the message, and the persuasiveness of the arguments.
• Example: A politician delivering a speech to influence voters.

b) Lasswell’s Communication Model (1948)

• Introduces a 5-question framework to analyze communication:


Who? → Says what? → In which channel? → To whom? → With what effect?
• This model is widely used in mass communication, advertising, and propaganda studies.
• Example: A newspaper publishes an article (Who?), about a political event (Says what?), through
print media (In which channel?), to readers (To whom?), influencing public opinion (With what
effect?).

c) Shannon-Weaver Model (1949)

• One of the most technical communication models, focusing on how messages are transmitted
over channels and affected by noise.
• Components:
Sender → Encoder → Channel (with noise) → Decoder → Receiver
• Introduces "noise", which refers to any distortion or interference that disrupts the communication
process.
• Example: A phone conversation with poor signal quality, where words get distorted due to
background noise.

d) Berlo’s S-M-C-R Model (1960)

• Expands the linear model by adding four key elements:


Source (Sender) → Message → Channel → Receiver
• Emphasizes factors affecting communication, such as communication skills, attitudes, and social
systems of both the sender and receiver.
• Example: A professor giving a lecture to students. The professor’s teaching skills and the students’
understanding ability influence the effectiveness of communication.

2. Interactive Models of Communication


Unlike linear models, interactive models consider feedback, making communication a two-way process.
They highlight the role of response, interpretation, and environmental factors in communication.

a) Osgood-Schramm Model (1954)

• Describes communication as a circular process where the sender and receiver continuously switch
roles.
• Key Concept: Both sender and receiver encode, decode, and interpret messages dynamically.
• Example: A WhatsApp conversation where two friends are alternatively sending and receiving
messages, making communication more interactive.

b) Westley and Maclean Model (1957)

• Introduces the concept of "Gatekeeping", where a third party (like a journalist or editor) filters,
modifies, or controls the message before it reaches the audience.
• This model is particularly relevant to mass media, news reporting, and social media algorithms.
• Example: A news editor selecting which political stories to publish and which to leave out, shaping
public perception.

3. Transactional Models of Communication


Transactional models describe communication as a simultaneous process where both the sender and
receiver continuously influence each other’s responses. These models acknowledge that communication is
complex, evolving, and shaped by social and psychological factors.

a) Barnlund’s Transactional Model (1970)

• Emphasizes that both sender and receiver participate actively, sending and receiving messages at
the same time.
• Introduces verbal and non-verbal communication (gestures, facial expressions, tone, etc.).
• Example: A live debate where speakers interrupt, respond, and react in real time, shaping the
conversation dynamically.

b) Dance’s Helical Model (1967)

• Describes communication as an evolving and continuous process, much like a spiral (helix).
• Messages build on past experiences and previous interactions, influencing how future
communication occurs.
• Example: A teacher gradually improving their communication with students over a semester based
on past feedback and interactions.

Compression Ratio in Data Compression


What is Compression Ratio?
Compression ratio is a key metric used to measure the effectiveness of data compression techniques. It
quantifies how much a file or data set has been reduced in size after compression. A higher compression
ratio means better compression efficiency.

Formula for Compression Ratio

Compression Ratio = Original Size / Compressed Size

Alternatively, it can also be expressed as a percentage:

Compression Percentage = (1 − (Compressed Size / Original Size)) × 100

Example Calculation

• Original File Size = 10 MB


• Compressed File Size = 2 MB
• Compression Ratio = 10 / 2 = 5:1
• Compression Percentage = (1 − (2 / 10)) × 100 = 80%

This means the compressed file is 5 times smaller than the original, i.e. the file size was reduced by 80%.
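Both metrics follow directly from the two sizes (a minimal sketch; the function name is ours):

```python
def compression_stats(original_size, compressed_size):
    """Return (compression ratio, percentage size reduction)."""
    ratio = original_size / compressed_size
    percentage = (1 - compressed_size / original_size) * 100
    return ratio, percentage

ratio, pct = compression_stats(10, 2)  # sizes in MB
print(f"{ratio:.0f}:1 ratio, {pct:.0f}% reduction")  # 5:1 ratio, 80% reduction
```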

Types of Compression and Their Effect on Compression Ratio


1. Lossless Compression

• Preserves all original data without losing any information.


• Compression ratios are typically lower (e.g., 2:1 or 3:1).
• Used for text and archival formats (ZIP, PNG, FLAC, GIF) via algorithms such as LZW, Huffman Coding, and Arithmetic Coding.
• Example:
o A 10 MB text file compressed to 5 MB → Compression Ratio = 2:1

2. Lossy Compression

• Some data is permanently discarded to achieve higher compression.


• Compression ratios are higher (e.g., 10:1, 50:1).
• Used for multimedia files (JPEG, MP3, MP4, AAC, H.264, etc.).
• Example:
o A 50 MB image compressed to 2 MB → Compression Ratio = 25:1

Factors Affecting Compression Ratio


1. Data Type:
o Text and executable files have lower compression ratios.
o Images, audio, and video allow higher compression ratios with lossy compression.
2. Compression Algorithm:
o Huffman coding, LZW (lossless): Moderate compression.
o JPEG, MP3, H.264 (lossy): High compression with some data loss.
3. Redundancy in Data:
o Highly repetitive data (e.g., a black-and-white image) achieves higher compression.
o Random or already compressed data (e.g., encrypted files) cannot be significantly
compressed.
4. Quality Settings (For Lossy Compression):
o Higher compression leads to lower quality (e.g., low-bitrate MP3 sounds worse).
o Balancing file size and quality is crucial in lossy compression.

Applications of Compression Ratio


• File Storage: Reducing storage space requirements (ZIP, RAR, 7z).
• Media Streaming: Optimizing video/audio for faster loading (YouTube, Netflix, Spotify).
• Data Transmission: Reducing bandwidth usage for faster internet speeds.
• Cloud Computing: Efficient data transfer and storage optimization.

Requirements of Data Compression

Data compression is essential for optimizing storage, transmission, and processing efficiency. To ensure
effective compression, several key requirements must be met. These requirements vary depending on
whether the compression method is lossless or lossy and the specific application.

1. Efficiency in Compression
• The compression algorithm should significantly reduce file size while maintaining acceptable
quality.
• The compression ratio should be high, especially for applications where storage or bandwidth is
limited.
• Example: ZIP compression reduces text file size effectively while maintaining data integrity.

2. Fast Encoding and Decoding


• The compression (encoding) and decompression (decoding) speed should be fast, especially for
real-time applications.
• Compression algorithms must strike a balance between processing speed and compression
efficiency.
• Example: Video streaming services use H.264 and H.265 codecs for real-time compression and
playback.

3. Minimal Data Loss (For Lossless Compression)


• Lossless compression must preserve the original data without any information loss.
• Essential for applications like text compression (ZIP, GZIP), medical imaging (DICOM), and
financial data storage.
• Example: PNG image format uses lossless compression to retain pixel-perfect accuracy.

4. Acceptable Quality Loss (For Lossy Compression)


• Lossy compression should remove redundant data while maintaining acceptable visual or audio
quality.
• Common in multimedia applications (JPEG for images, MP3 for audio, H.264 for video).
• Example: YouTube compresses videos to lower file sizes while keeping video quality visually
acceptable.

5. Adaptability to Different Data Types


• The compression algorithm should be optimized for specific data types:
o Text compression: Huffman Coding, LZW, Arithmetic Coding
o Image compression: JPEG, PNG, WebP
o Audio compression: MP3, AAC, FLAC
o Video compression: H.264, AV1
• Example: MP3 is excellent for audio compression but inefficient for compressing text files.

6. Reduced Storage and Bandwidth Usage


• Compression should significantly reduce storage requirements and decrease network bandwidth
usage.
• Essential for cloud storage, file sharing, and data transfer over the internet.
• Example: Cloud services like Google Drive and Dropbox automatically compress uploaded images and videos.

7. Error Handling and Robustness


• Some compression methods should include error detection and correction to handle data
corruption during transmission.
• Important for network communication and satellite transmissions.
• Example: ZIP file format includes checksums to verify data integrity after decompression.

8. Scalability and Future Compatibility


• Compression techniques should be scalable for large files and evolving technologies.
• Should support future advancements without major rework.
• Example: H.265 (HEVC) was developed to replace H.264 for more efficient video compression in
4K/8K streaming.

9. Security and Encryption Support


• Compression should integrate encryption techniques for secure data transmission.
• Critical for sensitive information like passwords, banking data, and confidential files.
• Example: Secure ZIP compression with AES-256 encryption protects sensitive files.

10. Energy Efficiency (For Mobile & IoT Devices)


• Compression should consume minimal processing power to improve battery life and reduce energy
consumption.
• Important for mobile devices, embedded systems, and IoT applications.
• Example: WebP image format is optimized for low-power mobile devices, reducing page load
times.

Classification of Data Compression

Data compression is the process of reducing the size of a file or data set to optimize storage and
transmission. It is broadly classified into two main categories:

1. Lossless Compression – Retains all original data.


2. Lossy Compression – Discards some data to achieve higher compression.

Each category has different techniques suited for specific applications, such as text, images, audio, and
video.

1. Lossless Compression
Lossless compression preserves all the original data, ensuring that the decompressed data is identical to
the original. This is essential for applications where data integrity is crucial, such as text files, software, and
medical imaging.

Key Features

✔ No loss of information.
✔ Lower compression ratio (typically 2:1 to 5:1).
✔ Used in text, executable files, and high-precision data.

Common Lossless Compression Techniques

a) Run-Length Encoding (RLE)

• Replaces repeated characters with a count.


• Example:
o Original: AAAAABBBBCCCC
o Compressed: 5A4B4C

b) Huffman Coding

• Assigns shorter binary codes to frequently used symbols and longer codes to rare symbols.
• Used in ZIP files and PNG images.

c) Lempel-Ziv-Welch (LZW) Compression

• Finds repeating patterns and replaces them with dictionary entries.


• Used in GIF, TIFF, and ZIP formats.

d) Arithmetic Coding

• Represents an entire message as a single number in a fractional range.


• Used in JPEG-LS and video compression.

e) Burrows-Wheeler Transform (BWT)

• Rearranges data for better compression efficiency.


• Used in bzip2 compression.
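The transform itself can be sketched naively (function name ours; production implementations like bzip2 use suffix sorting rather than building every rotation):

```python
def bwt(text, terminator="$"):
    """Burrows-Wheeler Transform: sort all rotations of text + terminator
    and read off the last column. This groups identical characters
    together, which helps the following compression stage."""
    text += terminator
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("banana"))  # annb$aa
```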

2. Lossy Compression

Lossy compression removes some data to achieve much higher compression ratios, often at the cost of
quality. It is commonly used in multimedia applications, such as images, audio, and video, where slight
data loss is acceptable.

Key Features

✔ Higher compression ratio (10:1 to 100:1).


✔ Some data is discarded permanently.
✔ Used in audio, video, and image compression.

Common Lossy Compression Techniques

a) Transform Coding (Discrete Cosine Transform - DCT)

• Converts image/audio data into frequency components.


• Used in JPEG and MP3 formats.

b) Discrete Wavelet Transform (DWT)

• Used in advanced image compression like JPEG 2000.

c) Predictive Coding

• Predicts the next data value and stores only the difference.
• Used in video compression (H.264, H.265).
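The simplest predictor is "the next value equals the previous one", which gives delta encoding (a sketch; function names ours). Small differences compress better than raw values:

```python
def delta_encode(samples):
    """Keep the first sample, then store only successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Rebuild the original samples by accumulating the differences."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

signal = [100, 102, 103, 103, 101]
encoded = delta_encode(signal)
print(encoded)                          # [100, 2, 1, 0, -2]
assert delta_decode(encoded) == signal  # lossless round trip
```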

d) Perceptual Coding

• Removes inaudible sound frequencies.


• Used in MP3, AAC, and Opus audio formats.

e) Fractal Compression

• Uses fractal patterns to reconstruct images.


• Applied in high-resolution image compression.

3. Hybrid Compression
Some modern techniques combine lossless and lossy methods for better efficiency.

• JPEG uses lossy compression but applies lossless Huffman coding for final encoding.
• H.264 video compression uses lossy quantization but lossless entropy coding (CABAC).

Comparison Table: Lossless vs. Lossy Compression


Feature | Lossless Compression | Lossy Compression
Data Integrity | 100% preserved | Some data is lost
Compression Ratio | Low (2:1 - 5:1) | High (10:1 - 100:1)
File Size Reduction | Moderate | Significant
Reversibility | Fully reversible | Irreversible
Examples | ZIP, PNG, FLAC | JPEG, MP3, MP4
Best for | Text, software, medical images | Images, audio, video