Chapter 10 Compression


Compression

Smash the information to bits

Overview

The primary goal of image compression is to reduce the memory footprint of image data so that storage requirements and transmission times are minimized.

Storage capacity can be limited, as is the case with digital cameras, and storage can be costly, as is the case when creating large warehouses of image data. Transmission of image data is also a central concern in many image processing systems. Recent studies of web use, for example, have estimated that images and video account for approximately 85% of all Internet traffic. Reducing the memory footprint of image data correspondingly reduces Internet bandwidth consumption. More importantly, since most web documents contain image data, it is vital that the image data be transferred over the network within a reasonable time frame.

Overview

Image compression works by identifying and eliminating redundant, or duplicated, data from a source image. There are three main sources of redundancy in image compression:

1. Interpixel redundancy: pixels that are in close proximity within an image are generally related to each other.

2. Psycho-visual redundancy: since the human visual system does not perceive all visible information with equal sensitivity, some visual information is less important than other information. Image compression systems may simply eliminate information that is deemed unimportant in terms of human perception.

3. Coding redundancy: if image samples are stored in such a way that more bits are required than necessary, then there is redundancy in the encoding scheme.

Overview

An image encoder compresses image data. An image decoder decompresses image data. An image encoder generally consists of three primary components, each of which seeks to address one of the three sources of redundancy.

The mapper transforms an image into a form, or representation, in which interpixel redundancy is either eliminated or reduced. Some compression techniques convert spatial domain information into the frequency domain, for example, and this transformation is considered to be part of the mapper. The quantizer changes the information produced by the mapper into a discrete set of values and may even truncate data such that some information is lost in the process; the quantizer typically eliminates or reduces psycho-visual redundancy. Symbol encoding is then performed on the resulting data stream in order to reduce coding redundancy.

Overview

Common compression formats (and their corresponding techniques) include PNG, JPEG, and GIF. A compression technique can be characterized by:

The computational complexity of the encoder
The computational complexity of the decoder
The compression ratio
The visual quality of the compressed image

Computational complexity will not be considered in this text. We will focus on the compression ratio and fidelity.

Compression Ratio

Compression ratio serves as the primary measure of a compression technique's effectiveness. It is a measure of the number of bits that can be eliminated from an uncompressed representation of a source image.

Let N1 be the total number of bits required to store an uncompressed (raw) source image and let N2 be the total number of bits required to store the compressed data. The compression ratio Cr is then defined as the ratio of N1 to N2.

Larger compression ratios indicate more effective compression.
Smaller compression ratios indicate less effective compression.
Compression ratios less than one indicate that the compressed representation is actually larger than the uncompressed representation.
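The definition, written as a formula:

    C_r = \frac{N_1}{N_2}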

Savings Ratio

The savings ratio is related to the compression ratio and is a measure of the amount of redundancy between two representations. The savings ratio is the percentage of data in the original image that was eliminated to obtain the compressed image.

If, for example, a 5 Megabyte image is compressed into a 1 Megabyte image, the savings ratio is defined as (5-1)/5 or 4/5 or 80%. This ratio indicates that 80% of the uncompressed data has been eliminated in the compressed encoding.
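Written as a formula, using N1 and N2 as defined for the compression ratio:

    S_r = \frac{N_1 - N_2}{N_1} = 1 - \frac{1}{C_r}

For the example above, Sr = (5 - 1)/5 = 0.8, an 80% savings.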

Higher ratios indicate more effective compression while negative ratios are possible and indicate that the compressed image exceeds the memory footprint of the original.

Fidelity

Root mean squared (RMS) error is a generally accepted way of measuring the quality of a compressed image as compared with the uncompressed original. RMS error is a measure of the difference between two same-sized images and is not related to the memory footprint of an image. Assume that a WxH image I having B bands is compressed into an image Î. The root mean squared error is then given by the formula below.
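With I the source image and Î the compressed image (the text's exact notation may differ), a standard formulation consistent with the description on the next slide is:

    E_{rms} = \sqrt{ \frac{1}{W \cdot H \cdot B} \sum_{x=0}^{W-1} \sum_{y=0}^{H-1} \sum_{b=0}^{B-1} \left( I(x,y,b) - \hat{I}(x,y,b) \right)^2 }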

Fidelity

The RMS error is a measure of the average sample error between two images. This can be seen by recognizing that the total number of samples in the image, WxHxB, occurs in the denominator and that the numerator sums the squares of the errors between every pair of corresponding samples in the two images. Since RMS is a measure of error

Small RMS values indicate high fidelity.
Large RMS values indicate low fidelity.

Lossy vs. Lossless

An image compression technique can be broadly classified as either lossy or lossless.


A lossless technique is one which always produces a compressed image with an RMS error of 0 relative to the source. A lossy technique generates a compressed image that is not identical to the source. Lossy techniques are typically able to achieve greater compression ratios by sacrificing the quality of the result.

Figure 10.2 gives an illustration of the typical tradeoffs between compression ratio and fidelity.

In this example a 24-bit-per-pixel 513x668 image consumes 3 x 513 x 668 = 1,028,052 bytes without any compression. JPEG compression reduces the memory footprint to 6,923 bytes but significantly reduces the image quality, while PNG compression reduces the memory footprint to 708,538 bytes without any reduction in quality. The JPEG technique achieved a savings ratio of roughly 99.3% while the PNG approach delivered a savings ratio of roughly 31%. It should be noted that JPEG can be controlled so as to provide much higher quality results but with a corresponding loss of compression.

Run Length Coding


Run length encoding is a lossless encoding scheme in which a sequence of same-colored pixels is stored as a single value. Consider a binary image in which each pixel is represented by a single bit that is either 0 (black) or 1 (white). One row of the image may contain the 32-pixel sequence:

11111111110001111111111111111111

This row contains three runs: 10 whites followed by 3 blacks followed by 19 whites. This can be encoded by the three byte sequence {10, 3, 19} by assuming that the data begins with white runs. The original representation consumes 32 bits of memory while the run length-encoded representation consumes 24 bits of memory if we assume that 8-bit bytes are used for each run. What are the compression and the saving ratios?

Compression ratio: 32/24, or 4:3 (approximately 1.33).
Savings ratio: (32 - 24)/32 = 25%.


Run Length Encoding Implementation

Listing 10.1 gives an algorithm for run length encoding binary images. The encoder assumes that the first run encoded on each row is composed of white pixels.

Most monochrome images consist of black foreground objects on a white background, so the first run will almost always be composed of white pixels.
If the first pixel on any row is black, the length of the first white run is zero. Most types of monochrome images (e.g., line drawings and scanned text) and palette-based iconic images are amenable to run length encoding, while continuous-tone images generally cannot be effectively compressed with this technique.
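A minimal sketch of a row encoder consistent with this description (the class and method names are illustrative, and Listing 10.1 in the text may differ):

import java.util.ArrayList;
import java.util.List;

public class BinaryRunLengthEncoder {
    // Encodes one row of a binary image (0 = black, 1 = white) as run lengths.
    // The first run is always white; its length is zero if the row starts with black.
    public static List<Integer> encodeRow(int[] row) {
        List<Integer> runs = new ArrayList<>();
        int current = 1;           // the color of the run being counted (white first)
        int length = 0;
        for (int sample : row) {
            if (sample == current) {
                length++;
            } else {
                runs.add(length);  // close the current run (possibly of length zero)
                current = sample;
                length = 1;
            }
        }
        runs.add(length);          // close the final run
        return runs;
    }
}

For the 32-pixel row shown earlier, encodeRow produces {10, 3, 19}.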

RLE Implementation Issues

Consider using 8-bit integers to store a single run.

Each run is 8 bits. Is it possible to represent runs more than 255 pixels long such that they can still be decoded?

Can encode a single run length as a multi-byte sequence.

Reserve the value 255 to mean "this run is 255 pixels longer and the run is not over yet."
All other values B indicate that the run is B pixels longer and the run is over.
Can the following run length encoded row be decoded under this scheme?

{255, 255, 255, 18, 32, 255, 12, 65}

Under this encoding scheme, how is a run of length 255 encoded?
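A sketch of a decoder for this multi-byte scheme (the class and method names are illustrative):

import java.util.ArrayList;
import java.util.List;

public class MultiByteRunDecoder {
    // Decodes run lengths stored as unsigned bytes: 255 means "255 more pixels,
    // run continues"; any other value B means "B more pixels, run ends".
    public static List<Integer> decode(int[] encoded) {
        List<Integer> runs = new ArrayList<>();
        int length = 0;
        for (int b : encoded) {
            length += b;
            if (b != 255) {        // any value below 255 terminates the run
                runs.add(length);
                length = 0;
            }
        }
        return runs;
    }
}

Decoding {255, 255, 255, 18, 32, 255, 12, 65} yields runs of 783, 32, 267, and 65 pixels, and a run of exactly 255 pixels is encoded as the pair {255, 0}.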

RLE Implementation Issues

Consider the problem of encoding a grayscale or color image.

For an 8-bit grayscale image there are more than two possible sample values

A run must be encoded as a two-element pair where the first element is the value of the run and the second is the length of the run. When encoding a row profile containing the samples {100, 100, 103, 104, 110, 110, 110}, for example, the run length representation would be something like {(100,2), (103,1), (104,1), (110,3)}.

Since storage is required for the value of the run in addition to the length of the run, the effectiveness of run length encoding is reduced or negated. Run length encoding of grayscale images is also problematic since it is extremely unlikely that a grayscale image contains runs of any significant length.

Run length encoding of Grayscales

Since run length encoding a binary image is generally effective, we choose to view a grayscale image as a collection of binary images, each of which can be encoded using the technique of Listing 10.1. Just as a single 24 bpp image can be split into three 8-bit grayscale images, an 8-bit grayscale sample can be split into eight 1-bit, or binary, bands. All grayscale and color images can be decomposed into a set of binary images by viewing each sample as eight individual bits that form eight separate bands.

RLE and Bit Planes

Figure 10.4 illustrates how a 4-bit grayscale image can be decomposed into four separate monochrome images by bit plane slicing.

A 1 bit is rendered as a white pixel while a 0 bit is rendered as black.

A 3x2 4-bit grayscale image is shown in part (b) where the samples are displayed in binary notation.
The least significant bit of each sample forms the 0th bit plane, P0.
The most significant bit of each sample forms the 3rd bit plane, P3.
The bit planes are shown as the four binary images in the remaining parts of the figure.
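A sketch of bit plane slicing for 8-bit samples (the two-dimensional sample array is illustrative; the text works with raster samples):

public class BitPlanes {
    // Returns bit plane 'plane' (0 = least significant) of an 8-bit grayscale image.
    // A result of 1 corresponds to a white pixel, 0 to a black pixel.
    public static int[][] slice(int[][] samples, int plane) {
        int height = samples.length, width = samples[0].length;
        int[][] result = new int[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                result[y][x] = (samples[y][x] >> plane) & 1;
            }
        }
        return result;
    }
}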

RLE and Bit Planes

Notes on bit planes

The 0th bit plane consists of the least significant bits of each sample. These low-order planes contain little structural information and are sometimes considered to be noise.
The 7th bit plane consists of the most significant bits of each sample. These bits represent the structure of the grayscale source more accurately than the other planes.
The 7th bit plane of an 8-bit grayscale image is equivalent to thresholding the source against 128.

RLE and Bit Planes

Notes on representation:

Compression ratio improves if there are more long runs than short runs. Consider two adjacent samples that have the similar grayscale values 127 and 128. In binary these are 01111111 and 10000000, so every run on every bit plane is terminated by this small change in grayscale value. Runs can usually be lengthened by using Gray coding rather than the natural binary encoding, since Gray codes ensure that adjacent values differ by a single bit.
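A sketch of the conversion between natural binary and the reflected Gray code (this is the standard conversion, not a listing from the text):

public class GrayCode {
    // natural binary -> reflected Gray code
    public static int toGray(int b) {
        return b ^ (b >>> 1);
    }

    // reflected Gray code -> natural binary (for 8-bit values)
    public static int fromGray(int g) {
        int b = g;
        for (int shift = 1; shift < 8; shift <<= 1) {
            b ^= (b >>> shift);
        }
        return b;
    }
}

For the example above, toGray(127) = 01000000 and toGray(128) = 11000000: only the most significant bit plane changes, so runs on the other seven planes are no longer broken at this transition.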

Implementation

All encoders and decoders must pay a great deal of attention to low-level issues involving good design and even bit-level representation. Let's design two abstract classes that serve to encode and decode an image. This design will closely mirror the Java framework.


ImageEncoder Notes

The primary method is the abstract encode, which takes an image and an output stream and writes the image to the stream in some format. Most encoders will first write a header. The header will usually contain:

A magic word: a short byte sequence that serves to identify the file format. The getMagicWord method obtains the magic word for a particular encoder implementation.
Information about the encoded image, perhaps the width, height, number of bands, etc. The writeHeader method saves this information in a simple format.

ImageDecoder Notes

The primary method is the abstract decode, which takes an input stream and creates an image. Most decoders will first read a header.

They read the magic word to ensure that they can decode the data. This is a fast check and is provided by the canDecode method.
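One possible shape for the two abstract classes (only the method names come from these slides; the signatures and header layout are assumptions):

import java.awt.image.BufferedImage;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Each class would live in its own source file.
public abstract class ImageEncoder {
    // Writes the image to the stream in this encoder's format.
    public abstract void encode(BufferedImage image, OutputStream out) throws IOException;

    // A short byte sequence identifying the file format.
    protected abstract byte[] getMagicWord();

    // Writes the magic word followed by basic image information.
    protected void writeHeader(BufferedImage image, DataOutputStream out) throws IOException {
        out.write(getMagicWord());
        out.writeInt(image.getWidth());
        out.writeInt(image.getHeight());
        out.writeInt(image.getRaster().getNumBands());
    }
}

public abstract class ImageDecoder {
    // Reads an encoded image from the stream.
    public abstract BufferedImage decode(InputStream in) throws IOException;

    // Quickly checks the magic word to see whether this decoder applies.
    public abstract boolean canDecode(InputStream in) throws IOException;
}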

Hierarchical Coding

In run length encoding each datum represents a one-dimensional run of pixels. Constant area coding (CAC) is a two-dimensional extension where entire rectangular regions of pixels are stored as a single unit. While run-length encoding is a lossless compression technique, constant area coding can be either lossy or lossless depending upon the specific implementation. Consider encoding a WxH binary image using a CAC technique.

If the entire image is white we simply output "white" and are done with the encoding. If the entire image is black we output "black" and are also done encoding. If, however, the image is neither all white nor all black:

The image is divided into four equally sized regions by splitting the image in half both vertically and horizontally.
Each of these four regions is then recursively encoded following the same procedure.
The process recurses only for non-homogeneous regions and will always terminate, since at some point a region will be no larger than a single pixel, which must be either all white or all black.

Hierarchical Coding

The division of an image region into four equal-sized subregions is often represented through a region quadtree. A region quadtree is a data structure that partitions two-dimensional rectangular regions into subregions as just described. Each node of a quadtree contains either zero or four children and represents a rectangular region of two-dimensional space.

Any node having no children is a leaf (or terminal) node.
Any node having four children is an internal node.
Leaf nodes represent rectangular regions that are completely filled by a single color.
Internal nodes represent regions containing variously colored quadrants as described by their four children.

Quad Tree example

CAC Example

CAC Example

Consider encoding the quadtree from Figure 10.8:

Must traverse each node and output some information
Use a pre-order traversal
Represent each node by a single bit:

0 denotes a region that contains at least 1 black pixel
1 denotes an all-white region

This scheme generates: 0100110011010010010001111

CAC Implementation

When implementing a CAC encoder, an image is first converted into its quadtree representation, after which the tree is serialized. Our implementation will implicitly construct the quadtree through a sequence of recursive function calls, so there is no need for an explicit tree data type. Each function call represents a single tree node, and the series of calls effectively traverses the quadtree representation in a pre-order fashion. We will also use the MemoryCacheImageOutputStream, which enables bit-level writing to an OutputStream. This is a standard class in the Java distribution.
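A sketch of such an encoder for binary images (the pixel-array representation and the handling of odd-sized regions are assumptions, and the text's actual listing may differ):

import java.io.IOException;
import java.io.OutputStream;
import javax.imageio.stream.MemoryCacheImageOutputStream;

public class CACEncoder {
    // pixels[y][x] holds 0 (black) or 1 (white)
    public void encode(int[][] pixels, OutputStream dest) throws IOException {
        MemoryCacheImageOutputStream out = new MemoryCacheImageOutputStream(dest);
        encodeRegion(pixels, 0, 0, pixels[0].length, pixels.length, out);
        out.close();  // flushes the buffered bits to dest without closing dest
    }

    // One call per quadtree node, visited in pre-order: a 1 bit for an all-white
    // region, otherwise a 0 bit followed by the four quadrants.
    private void encodeRegion(int[][] p, int x, int y, int w, int h,
                              MemoryCacheImageOutputStream out) throws IOException {
        if (allWhite(p, x, y, w, h)) {
            out.writeBit(1);
            return;
        }
        out.writeBit(0);
        if (w == 1 && h == 1) {
            return;                              // a single black pixel ends the recursion
        }
        int w2 = (w + 1) / 2, h2 = (h + 1) / 2;  // odd sizes: one possible convention
        encodeRegion(p, x, y, w2, h2, out);
        if (w > w2) encodeRegion(p, x + w2, y, w - w2, h2, out);
        if (h > h2) encodeRegion(p, x, y + h2, w2, h - h2, out);
        if (w > w2 && h > h2) encodeRegion(p, x + w2, y + h2, w - w2, h - h2, out);
    }

    private boolean allWhite(int[][] p, int x, int y, int w, int h) {
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++)
                if (p[j][i] == 0) return false;
        return true;
    }
}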

CAC Implementation

CAC Implementation

Hierarchical

Could CAC be used on

Grayscale images?
Color photos?

What kind of images would CAC be most useful for?

Scanned text (lots of white space)
Line/CAD drawings (lots of white space)


Image courtesy: http://www.flickr.com/photos/peterhess/2282904848/

Hierarchical (Revisited)

Can implement a lossy version of hierarchical coding to improve compression at the cost of image fidelity. Useful only for grayscale and color images.
Lossless:

If the region is all white, write a 1 bit and done
Otherwise write a 0 bit and then

The block is divided into 4 quadrants
Encode each quadrant [recursion will terminate if the region becomes 1x1]

Lossy:

If the region's samples are sufficiently similar, write the average sample value and done
Otherwise write a 0 byte and then

The block is divided into 4 quadrants
Encode each quadrant [recursion will terminate if the region becomes 1x1]


Lossy Hierarchical encoding


A region's color is reduced to a single value, which is the average of the original source samples. "Sufficiently similar" usually means the standard deviation is small enough, as sketched below.

Set a threshold; if the standard deviation falls below the threshold, the samples in the region are sufficiently similar. The threshold becomes a quality setting:


Higher thresholds correspond to lower quality
Lower thresholds correspond to higher quality
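A sketch of the similarity test (the flattened sample array and method name are illustrative):

public class RegionSimilarity {
    // Returns true when a region's samples are close enough to their mean
    // to be replaced by a single average value.
    public static boolean sufficientlySimilar(int[] samples, double threshold) {
        double mean = 0;
        for (int s : samples) {
            mean += s;
        }
        mean /= samples.length;

        double variance = 0;
        for (int s : samples) {
            variance += (s - mean) * (s - mean);
        }
        variance /= samples.length;

        // larger thresholds collapse more regions and therefore lower quality
        return Math.sqrt(variance) < threshold;
    }
}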

Compression Example

[Figure: a source image alongside its lossy hierarchical encodings with threshold = 10 and threshold = 20]

Compression Example


Numeric Example

Use a standard deviation threshold of 5.0.


Predictive Coding

Addresses interpixel redundancy
Neighboring samples are usually similar
Predicts the Nth sample of a scan by examining the previous K samples
Encodes the difference between the predicted value and the actual value


Lossless Predictive Coding


In the simplest case, the predicted value is the last value seen. Given a stream of N samples:

S0, S1, S2, ..., Sn-1, Sn

Predict that the first sample S0 is 128
Predict that sample Sk is Sk-1


Lossless Predictive Encoding

Consider encoding the error (or difference) stream.

The error may range between -255 and 255. The bit depth has increased from 8 to 9 bits: an expansion rather than a compression!

The key benefit is the distribution of data in the error stream: most errors are small and cluster around zero, which a symbol coder can exploit.
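A sketch of the error-stream computation under the previous-sample predictor (array-based for clarity; names are illustrative):

public class PredictiveCoder {
    // Builds the error stream: S0 is predicted to be 128,
    // and each later sample Sk is predicted to be S(k-1).
    public static int[] predictionErrors(int[] samples) {
        int[] errors = new int[samples.length];
        int predicted = 128;                      // prediction for the first sample
        for (int k = 0; k < samples.length; k++) {
            errors[k] = samples[k] - predicted;   // values lie in [-255, 255]
            predicted = samples[k];               // next prediction is the current sample
        }
        return errors;
    }
}

The decoder reverses the process by adding each error back to its prediction, recovering the samples exactly.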


Lossy Predictive Coding


Reduce the accuracy of the saved image for increased compression. One of the simplest lossy predictive coding schemes is known as delta modulation: record an approximation of the error between the predicted and actual sample values. For a sequence of samples S indexed by k, a fixed delta value is used to generate an approximate sequence of samples Ŝ, and the approximate error is given by the relation below.
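A standard formulation of delta modulation (the text's notation may differ): the quantized error is a fixed step of plus or minus delta, and the reconstruction is built from those steps.

    \dot{e}_k = \begin{cases} +\Delta & \text{if } S_k \ge \hat{S}_{k-1} \\ -\Delta & \text{otherwise} \end{cases}
    \qquad
    \hat{S}_k = \hat{S}_{k-1} + \dot{e}_k

The decoder rebuilds the approximate sequence by starting from the same initial value and accumulating the steps.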


Delta Modulation Example


Effect of delta value


Delta Modulation Example


Delta Modulation Example


JPEG case study

JPEG is an image compression standard developed by the Joint Photographic Experts Group. JPEG is a broad standard:

Supports both lossy and lossless compression
The most common implementation is JFIF; most often people mean JFIF when they use the term JPEG
JFIF is based on the DCT coefficients of an image
This presentation describes JFIF

JPEG 2000 is a newer standard based on wavelets


JPEG/JFIF overview


JPEG/JFIF overview

Convert to the YCbCr color space and subsample the chroma channels
Tile into 8x8 blocks, offset each sample by -128, and take the DCT of each block
Quantize the coefficients: divide by small numbers in the upper-left of each tile and large numbers in the lower-right
Encode the resulting quantized coefficients losslessly

Delta coding of the DC components
RLE for the AC components
Huffman-compress each band independently with (possibly different) Huffman tables

The only lossy portions of this process are the chroma subsampling and the quantization; a sketch of the quantization step follows.
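A sketch of the quantization step described above (the quantization table values are not shown here; they vary with the quality setting):

public class JpegQuantizer {
    // Quantizes one 8x8 block of DCT coefficients using an 8x8 table of divisors.
    public static int[][] quantize(double[][] dct, int[][] table) {
        int[][] quantized = new int[8][8];
        for (int u = 0; u < 8; u++) {
            for (int v = 0; v < 8; v++) {
                // small divisors (upper-left) preserve low frequencies;
                // large divisors (lower-right) drive high frequencies toward zero
                quantized[u][v] = (int) Math.round(dct[u][v] / table[u][v]);
            }
        }
        return quantized;
    }
}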

JPEG example
(quality increases to the right)


Definitions

Steganography

"the art of concealing the existence of information within seemingly innocuous carriers"
"an encrypted message may draw suspicion while a hidden message will not"

Neil Johnson

Image Processing and Steganography

the art of concealing the existence of information within seemingly innocuous digital images


The Big Idea


(LSB embedding)

An image contains more information than can be perceived. Replace the imperceptible bits of a cover image with the bits of a secret message.
The Challenge: Write Java code to LSB embed a secret message in an image. The secret message can be any object.


Java Framework

Design a set of classes to perform LSB embedding
Isolate the task of embedding/extracting

<<interface>> Steganographer
  +embed(cover:BufferedImage, msg:Object):BufferedImage
  +extract(stego:BufferedImage):Object

LSBSteganographer
  +embed(cover:BufferedImage, msg:Object):BufferedImage
  +extract(stego:BufferedImage):Object
  +getChannelCapacity():int
  +setChannelCapacity(channels:int)


Steganography

algorithm embedData(Image image, int[] DATA)
Input:  A grayscale image and an array of integer-valued data (text, image, or anything else)
Output: A grayscale image with DATA contained in the least significant bit plane

// NOTE that the DATA probably has some HEADER information
// An actual implementation would not have the interface designed this way
for every pixel Pk in the image
    replace the LSB of pixel Pk with the kth bit of DATA


Numeric Example

Consider the following image (shown in decimal and binary notation)

Embed the text HI


"HI" consists of two one-byte characters, which in ASCII are 72 (H) and 73 (I). These numbers in binary are {01001000, 01001001}.

Numeric Example

Before and after embedding {01001000,01001001}.

Consider

How many bits have actually been changed? Are these changes perceptible?


Class design

Isolate the task of writing/reading bits. The key is to integrate into Java's existing I/O framework.

ImageOutputStream extends OutputStream
  +ImageOutputStream(destination:BufferedImage, numChannels:int)
  +hasMoreBits():boolean
  +writeBit(int b)
  +writeByte(int b)
  +write(int b)

ImageInputStream extends InputStream
  +ImageInputStream(source:BufferedImage, numChannels:int)
  +hasMoreBits():boolean
  +getNextBit():int
  +getNextByte():int
  +read():int


Implement the ImageOutputStream

Similar to Scanner, but it is an output stream.


class ImageOutputStream extends OutputStream {
    private BufferedImage buffer;
    private int row, col, band, channel, numChannels;

    // The least significant bit of b is written to the buffer
    public void writeBit(int b) throws IOException {
        if (!hasMoreBits()) {
            throw new IOException("Out of capacity");
        }
        int newPixel = buffer.getRaster().getSample(col, row, band);
        newPixel = setBit(newPixel, b, channel);
        buffer.getRaster().setSample(col, row, band, newPixel);
        advanceIndices();
    }
}

The writeByte and write methods are then built on top of writeBit.

Embedding

Now that we can write data to an image, we must use that to embed data. Humor me. Assume that we are only writing an image within an image.
class LSBSteganographer implements Steganographer {
    private int numberOfChannels;

    public Image embed(BufferedImage cover, Object message) {
        Image result = cover.copy();
        int k = getNumberOfChannels();
        OutputStream rOut = new ImageOutputStream(result, k);
        writeHeaderInformation(message, rOut);
        for every pixel P in message    // pseudocode: walk the message image
            rOut.writeByte(P);
        return result;
    }
}


Embedding

We already know how to write an image to an output stream. Use the ImageIO class!
class LSBSteganographer implements Steganographer {
    private int numberOfChannels;

    public Image embed(BufferedImage cover, Object message) {
        Image result = cover.copy();
        int k = getNumberOfChannels();
        OutputStream rOut = new ImageOutputStream(result, k);
        ImageIO.write((BufferedImage) message, "PNG", rOut);
        return result;
    }
}

What if the message is not an image? Can we embed it in the cover?


Embedding

Assume that the message is not an image. Can we embed it?


class LSBSteganographer implements Steganographer {
    private int numberOfChannels;

    public Image embed(BufferedImage cover, Object message) {
        Image result = cover.copy();
        int k = getNumberOfChannels();
        OutputStream rOut = new ImageOutputStream(result, k);
        ObjectOutputStream fout = new ObjectOutputStream(rOut);
        fout.writeObject(message);
        return result;
    }
}

The ObjectOutputStream will encode any Serializable object. Unfortunately, BufferedImage is not Serializable.

Embedding

Assume that the message is not an image. Can we embed it?

class LSBSteganographer implements Steganographer {
    private int numberOfChannels;

    public Image embed(BufferedImage cover, Object message) {
        Image result = cover.copy();
        int k = getNumberOfChannels();
        OutputStream rOut = new ImageOutputStream(result, k);
        if (message instanceof Serializable) {
            ObjectOutputStream fout = new ObjectOutputStream(rOut);
            fout.writeObject(message);
        } else if (message instanceof BufferedImage) {
            ImageIO.write((BufferedImage) message, "PNG", rOut);
        } else {
            throw new IllegalArgumentException();
        }
        return result;
    }
}

It will now encode anything that is either Serializable or an image.


Example of Multiple Embeddings

"Children's children are a crown to the aged, and parents are the pride of their children."
Author: Solomon, King of Israel, as recorded in the book of Proverbs, chapter 17, verse 6.


Other embedding techniques and uses

Embedding data can be done in the frequency domain.

Consider JPEG encoding based on the DCT coefficients. Manipulation of the DCT coefficients would:

Spread the embedded data across the entire image (or block)
Be difficult to detect and difficult to thwart
Allow embedding of patterns that are detectable but hard to identify and hence erase

Could be used for copyright protection


Identify and protect copyrighted work
Techniques could apply to audio, video, and image data

