
Computer Vision and Image Processing

Chapter Six
Image Compression

Introduction and Overview

 Data compression refers to the process of reducing the amount
of data required to represent a given quantity of information.
 Image compression is the process of encoding or converting an
image file in such a way that it consumes less space than the
original file.
 Image compression is typically performed through an image/data compression algorithm or codec.
 Typically such codecs/algorithms apply different techniques to reduce the image size, such as:
 Specifying all similarly colored pixels by the color name, a code, and the number of pixels. This way one code can stand for hundreds or thousands of pixels (run-length encoding; a sketch follows this list).
 Representing the image using a mathematical representation instead of the raw pixel values.
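The first technique is essentially run-length encoding. A minimal sketch in Python (illustrative, not taken from the slides): it collapses runs of identical pixel values into (value, count) pairs.

def run_length_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

# A row of 12 pixels with long runs compresses to 3 (value, count) pairs.
print(run_length_encode([255] * 6 + [0] * 4 + [255] * 2))
# [(255, 6), (0, 4), (255, 2)]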
Applications

 Applications that require image compression are many and varied, such as:
 Internet,
 Businesses,
 Multimedia,
 Satellite imaging
 Medical imaging
 Without compression, most of these applications would not be
feasible!
 Compression algorithm development starts with applications to two-dimensional (2-D) still images.
 After the 2-D methods are developed, they are often extended to video (motion imaging).
 However, we will focus on image compression of single frames of image data.
Cont’d…

 Image compression involves reducing the size of image data files, while retaining necessary information.
 Retaining necessary information depends upon the application.
 Image segmentation methods, which are primarily a data reduction process, can be used for compression.
Data vs Information

 Data are the means by which information is conveyed.
 Various amounts of data may be used to represent the same amount of information.
 Information is data that has been processed, contextualized, and given meaning.
 The reduced file created by the compression process is called the compressed file and is used to reconstruct the image, resulting in the decompressed image.
 The original image, before any compression is performed, is called the uncompressed image file.
 The ratio of the size of the original, uncompressed image file to the size of the compressed file is referred to as the compression ratio.
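Expressed as a formula (file sizes in bits or bytes; the numbers in the example below are illustrative, not from the slides):

C_R = \frac{\text{uncompressed file size}}{\text{compressed file size}}

For instance, if a 256 × 256, 8-bit image (65,536 bytes) compresses to a 6,554-byte file, then C_R = 65536 / 6554 ≈ 10, usually written as 10 : 1.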
Why do we need Image Compression?

 To remove redundancy among the data.
 In image processing, we want to remove the redundancy among pixels that have the same gray-level value.
 The main aim of image compression is to reduce the size of an image during storage and transmission.
Data Redundancy

 Computers store images as pixel values, so an image often contains duplicate pixel values, or pixel values that can be removed without affecting the information in the actual image.
 Data redundancy is one of the fundamental components of data compression.
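A standard way to quantify this (common in image-processing texts, though not shown on the slide) relates the relative data redundancy R_D to the compression ratio C_R defined earlier:

R_D = 1 - \frac{1}{C_R}

For example, C_R = 10 gives R_D = 0.9, i.e. 90% of the data in the original representation is redundant.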
1. Coding Redundancy

 Coding redundancy is associated with the representation of information.
 The information is represented in the form of codes.
 If the gray levels of an image are coded in a way that uses more code symbols than absolutely necessary to represent each gray level, then the resulting image is said to contain coding redundancy.
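A small sketch of coding redundancy in Python (the gray-level probabilities and both code tables below are hypothetical): a fixed-length code spends the same number of bits on every gray level, while a variable-length code matched to the histogram spends fewer bits on the frequent levels.

# Four gray levels with hypothetical probabilities, and two candidate codes.
probs         = {0: 0.60, 1: 0.25, 2: 0.10, 3: 0.05}
fixed_code    = {0: "00", 1: "01", 2: "10", 3: "11"}     # 2 bits for every level
variable_code = {0: "0", 1: "10", 2: "110", 3: "111"}    # short codes for frequent levels

def avg_bits(code):
    """Average code length in bits per pixel under the given probabilities."""
    return sum(probs[g] * len(code[g]) for g in probs)

print(avg_bits(fixed_code))     # 2.00 bits/pixel
print(avg_bits(variable_code))  # 1.55 bits/pixel -> the fixed code carries coding redundancy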
2. Inter-Pixel Redundancy

 Interpixel redundancy is due to the correlation between neighboring pixels in an image.
 That means neighboring pixels are not statistically independent.
 The value of any given pixel can be predicted from the values of its neighbors; that is, they are highly correlated.
 The information carried by an individual pixel is relatively small.
 To reduce interpixel redundancy, the differences between adjacent pixels can be used to represent an image (a sketch follows this list).
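A minimal difference-coding sketch in Python (illustrative; practical schemes use more elaborate predictors): neighboring pixels are replaced by small, easy-to-code differences, and the mapping is exactly invertible.

def to_differences(row):
    """Keep the first pixel, then store only the differences between neighbors."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def from_differences(diffs):
    """Invert the mapping exactly, so no information is lost."""
    row = [diffs[0]]
    for d in diffs[1:]:
        row.append(row[-1] + d)
    return row

row = [120, 121, 121, 122, 124, 124, 125]            # strongly correlated neighbors
print(to_differences(row))                           # [120, 1, 0, 1, 2, 0, 1]
print(from_differences(to_differences(row)) == row)  # True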
Spatial redundancy refers to this correlation between neighboring pixel values.
Elements of Information Theory

 Information theory defines information based on the probability of an event: knowledge of an unlikely event carries more information than knowledge of a likely event.
 For example:
 The earth will continue to revolve around the sun: 100% probability, so little information.
 An earthquake will occur tomorrow: less than 100% probability, so more information.
 A matter transporter will be invented in the next 10 years: highly unlikely, so low probability and high information content.
 This perspective on information is the information-theoretic definition and should not be confused with our working definition, which requires information in images to be useful, not simply novel.
Entropy

 Entropy is a measure of the average information content of an image.
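For an image with gray levels r_k occurring with probabilities p(r_k), estimated from the normalized histogram, the entropy in bits per pixel is:

H = -\sum_{k=0}^{L-1} p(r_k) \log_2 p(r_k)

In this first-order model, H is a lower bound on the average number of bits per pixel that any lossless code can achieve.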
Image Compression Methods

1. Lossy Compression
 It reduces an image file size by permanently removing less critical information, particularly redundant data.
 It can significantly reduce file size, but it can also reduce image quality to the point of distortion, especially if the image is overly compressed.
Challenge with Lossy Compression
 The challenge with lossy compression is that it is irreversible.
 Once it has been applied to an image, that image can never be restored to its original state.
 If lossy compression is applied repeatedly to the same image, it gets increasingly distorted.
 Lossy compression has proved to be a valuable strategy for the web, where a moderate amount of image degradation can often be tolerated.
 The most common example of lossy compression is JPEG, an image compression format used extensively on the web and in digital photography.
2. Lossless Compression
 This method applies compression without removing critical data or reducing image quality, and results in a compressed image that can be restored to its original state with no degradation or distortion.
 One of the most common lossless formats is PNG, a widely used format that reduces file size by identifying patterns and compressing those patterns together.
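A quick sketch of the difference (using the Pillow and NumPy libraries; the file names and the synthetic image are illustrative): the PNG round trip reproduces the pixels exactly, while the JPEG round trip does not.

from PIL import Image
import numpy as np

# Synthetic 8-bit grayscale image: a smooth ramp plus a little noise.
rng = np.random.default_rng(0)
data = (np.linspace(0, 255, 256 * 256).reshape(256, 256)
        + rng.normal(0, 10, (256, 256))).clip(0, 255).astype(np.uint8)
img = Image.fromarray(data)

img.save("test.jpg", quality=40)   # lossy: small file, pixel values change
img.save("test.png")               # lossless: larger file, exact round trip

print(np.array_equal(np.asarray(Image.open("test.png")), data))  # True  (no degradation)
print(np.array_equal(np.asarray(Image.open("test.jpg")), data))  # False (information was discarded)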
Lossless Compression Algorithm
[Worked Huffman coding example: gray-level histogram, code assignment, and the resulting average code length of 1.9 bits/pixel.]

 In the example, the average code length drops from 2.0 bits/pixel to 1.9 bits/pixel, which is about a 1.05 : 1 compression ratio, providing about 5% compression:
 2 bits / 1.9 bits ≈ 1.05
 If 2 bits corresponds to 100%, then 1.9 bits corresponds to 1.9 × 100 / 2 = 95%.
 100 − 95 = 5, hence this algorithm achieves about 5% compression.
 From the example we can see that the Huffman code is highly dependent on
the histogram, so any preprocessing to simplify the histogram will help
improve the compression ratio.
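A minimal Huffman-code construction sketch in Python (using the standard-library heapq module; the symbol frequencies below are hypothetical, chosen so that the average comes out at 1.9 bits/pixel, echoing the figure above):

import heapq
from collections import Counter

def huffman_code(pixels):
    """Build a Huffman code table {gray_level: bitstring} from pixel frequencies."""
    freq = Counter(pixels)
    if len(freq) == 1:                       # degenerate single-symbol image
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code_so_far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees...
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))   # ...are merged into one
        tie += 1
    return heap[0][2]

pixels = [0] * 40 + [1] * 30 + [2] * 20 + [3] * 10     # skewed histogram
code = huffman_code(pixels)
avg = sum(len(code[p]) for p in pixels) / len(pixels)
print(code)   # frequent gray levels get the short codes
print(avg)    # 1.9 bits/pixel, versus 2 bits/pixel for a fixed-length code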
Dictionary-Based Coding (LZW)

 The example below LZW-encodes the 9-pixel, 8-bit sequence 39 39 126 39 39 126 39 39 126. The dictionary initially contains the 256 single pixel values, so new entries are assigned codes starting at 256.

Currently recognized sequence | Pixel being processed | Encoded output | Dictionary location (code) | Dictionary entry
39      | 39             | 39  | 256 | 39-39
39      | 126            | 39  | 257 | 39-126
126     | 39             | 126 | 258 | 126-39
39      | 39             |     |     |
39-39   | 126            | 256 | 259 | 39-39-126
126     | 39             |     |     |
126-39  | 39             | 258 | 260 | 126-39-39
39      | 126            |     |     |
39-126  | (end of image) | 257 |     |

 Generate the final encoded output.
 After applying the algorithm, the output consists of 9-bit codes, because the dictionary has grown beyond the 8-bit range (code 256 and above).
 The original image is 9 pixels of 8 bits each, hence:
 Original image: 9 × 8 = 72 bits
 Compressed image: 6 × 9 = 54 bits
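A compact LZW encoder sketch in Python (illustrative; real implementations also bound the dictionary size and pack the codes into a bit stream), run on the 9-pixel example above:

def lzw_encode(pixels, bits=8):
    """LZW-encode a sequence of pixel values; the dictionary starts with
    every possible single pixel value (0 .. 2**bits - 1)."""
    dictionary = {(v,): v for v in range(2 ** bits)}
    next_code = 2 ** bits
    sequence = ()
    output = []
    for p in pixels:
        candidate = sequence + (p,)
        if candidate in dictionary:
            sequence = candidate                 # keep growing the recognized sequence
        else:
            output.append(dictionary[sequence])  # emit the code for the known prefix
            dictionary[candidate] = next_code    # new entry, e.g. 256 = 39-39
            next_code += 1
            sequence = (p,)
    if sequence:
        output.append(dictionary[sequence])      # flush the final recognized sequence
    return output

pixels = [39, 39, 126, 39, 39, 126, 39, 39, 126]
codes = lzw_encode(pixels)
print(codes)                              # [39, 39, 126, 256, 258, 257]
print(len(pixels) * 8, len(codes) * 9)    # 72 54  -> 72 bits vs 54 bits, as above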
