
Ahmed Hassan Elraggal

Id: 7794

Report 2
1) Find the spectrum of the Manchester code (the sin²-weighted spectrum):
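The derivation itself appears to have been an embedded image lost in conversion. For reference, the standard power spectral density of Manchester (split-phase) coding, assuming bipolar pulses of amplitude A, bit duration T_b, and equiprobable independent bits, is:

```latex
S(f) \;=\; A^{2} T_b \,\operatorname{sinc}^{2}\!\left(\frac{f T_b}{2}\right)\sin^{2}\!\left(\frac{\pi f T_b}{2}\right)
```

The sin² factor produces the characteristic spectral null at DC, which is the feature the question title refers to.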
2) Prove that SQNR (dB) = α + 6n:
The Signal-to-Quantization-Noise Ratio (SQNR) is a measure used in signal processing to quantify the ratio of the power of a signal to the power of the noise introduced by quantization error. In digital systems, when an analog signal is converted to digital form, quantization noise arises from the finite precision of the digital representation.

The SQNR in decibels (dB) is given by:

SQNR(dB) = 10 log10(P_signal / P_noise)

where P_signal is the power of the signal and P_noise is the power of the quantization noise.

In many digital systems, the quantization noise power is:

P_noise = Δ² / 12

where Δ is the quantization step size. For a uniform quantizer with n bits, the step size is:

Δ = (V_max − V_min) / 2^n

where V_max and V_min are the maximum and minimum values of the signal.

Given that the signal is represented using n bits, and assuming a full-scale sinusoid of amplitude A (so that V_max − V_min = 2A), the signal power is approximately P_signal = A²/2.

Substituting Δ = 2A / 2^n into the ratio gives:

SQNR = (A²/2) / (Δ²/12) = (3/2) · 2^(2n)

so, in decibels:

SQNR(dB) = 10 log10(3/2) + 20n log10(2) ≈ 1.76 + 6.02n

which has the required form α + 6n, where α ≈ 1.76 dB is a constant determined by the signal statistics (here, a full-scale sinusoid).
3) Show that the SQNR improves by 6 dB for each increase of one bit per sample:
The signal-to-noise ratio (SNR) for a PCM system is given by SNR = 1.8 + 6n dB, where n is the number of bits per sample.

For (n + 1) bits per sample, the signal-to-noise ratio becomes:

SNR = 1.8 + 6(n + 1) = 1.8 + 6n + 6 dB

So the SNR increases by:

(1.8 + 6n + 6) − (1.8 + 6n) = 6 dB
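The 6 dB-per-bit rule can also be checked numerically. The sketch below is an illustration, not part of the original report: it assumes a full-scale unit-amplitude sine input and a uniform mid-tread quantizer, and measures the SQNR at successive bit depths.

```python
import numpy as np

def sqnr_db(n_bits, num_samples=100_000):
    """Quantize a full-scale sine with a uniform n-bit quantizer
    and return the measured SQNR in dB."""
    t = np.arange(num_samples)
    x = np.sin(2 * np.pi * t / 1000.0)      # amplitude A = 1, so Vmax - Vmin = 2
    delta = 2.0 / (2 ** n_bits)             # step size: (Vmax - Vmin) / 2^n
    xq = np.round(x / delta) * delta        # mid-tread uniform quantization
    noise = x - xq
    return 10 * np.log10(np.mean(x**2) / np.mean(noise**2))

for n in (8, 9, 10):
    print(f"n = {n:2d} bits: SQNR = {sqnr_db(n):6.2f} dB")
```

Each extra bit halves Δ and therefore quarters the noise power Δ²/12, which is exactly a 6.02 dB gain; the printed values track 1.76 + 6.02n closely.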

Report 2: Image Compression


Need of Compression
Uncompressed images occupy a large amount of memory in RAM and on storage media, and they take longer to transfer from one device to another. Table 1 below compares sizes ranging from plain text to large uncompressed images. The examples in Table 1 clearly show that uncompressed images demand substantial storage space and bandwidth, since long transmission times are required. The practical solution is to compress the image.

Table 1: Different uncompressed images and their storage space

Principle of Compression
A digital image is basically an array of pixel values. Neighbouring pixels are correlated, so they carry redundant bits. Compression algorithms remove these redundant bits, reducing the image size. Image compression has two main components: redundancy reduction and irrelevancy reduction. Redundancy reduction removes extra or repeated bits, while irrelevancy reduction omits the least important information, which will not be perceived by the receiver. There are three types of redundancy. Coding redundancy is present when more code words are used than the symbols actually require. Inter-pixel redundancy results from the correlation between neighbouring pixels of an image. Psychovisual redundancy is data that is ignored by the human visual system. Image compression exploits these redundancies to reduce the number of bits that represent the image.
PERFORMANCE PARAMETERS
Two performance parameters are used to measure the performance of image compression algorithms: the peak signal-to-noise ratio (PSNR) and the mean square error (MSE). PSNR measures the peak error between the compressed image and the original image; the higher the PSNR, the better the image quality. To compute the PSNR, the MSE is computed first. MSE is the cumulative squared difference between the compressed image and the original image; a small MSE means less error and better image quality.
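As a concrete sketch (the helper names are hypothetical; 8-bit images are assumed to be held in NumPy arrays), MSE and PSNR can be computed as:

```python
import numpy as np

def mse(original, compressed):
    """Mean square error between two same-shape images."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    err = mse(original, compressed)
    if err == 0:
        return float("inf")               # identical images
    return 10 * np.log10(peak ** 2 / err)
```

A higher PSNR (equivalently, a lower MSE) indicates that the compressed image is closer to the original.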

TYPES OF IMAGES
A. TIFF The TIFF (Tagged Image File Format) is a flexible format that can be used for lossless or lossy compression [4]. In practice, TIFF is mostly used as a lossless image storage format in which no compression is applied. TIFF files are not used for web transmission because of their large size.

B. GIF The Graphics Interchange Format (GIF) is useful for images with fewer than 256 colours, including grayscale images. GIF is limited to 8 bits, i.e. 256 colours, so it is best suited to simple graphics, logos and cartoon-style images. It uses lossless compression.

C. RAW The RAW file format covers images taken directly from digital cameras. These formats normally use lossless or lossy compression and produce smaller images than TIFF. The disadvantage of RAW images is that they are not standardized; the format differs between manufacturers, so the manufacturer's software is required to view them.
D. PNG The PNG (Portable Network Graphics) file format supports 8-bit, 24-bit and 48-bit true colour, with or without an alpha channel. Being lossless, PNG preserves quality where lossy JPEG does not. Typically, an image in a PNG file is 10% to 30% smaller than in GIF format [5]. PNG offers smaller sizes and more colours than comparable lossless formats.
E. JPEG The Joint Photographic Experts Group (JPEG) format is a lossy compression technique for storing 24-bit photographic images. It is widely accepted in the multimedia and imaging industries. JPEG is a 24-bit colour format, so it supports millions of colours and is superior to the formats above for photographs [6]; it is commonly used for VGA (Video Graphics Array) display. JPEG uses lossy compression and supports 8-bit grayscale and 24-bit colour images.
F. JPEG 2000 JPEG 2000 is a compression standard for both lossless and lossy storage. It improves on the JPEG format while remaining broadly similar to it.
G. Exif The Exif (Exchangeable Image File Format) is similar to the JFIF format with TIFF extensions. It is used to record and exchange images, together with image metadata, between digital cameras and editing and viewing software.
H. WEBP WEBP is a newer image format that uses lossy image compression. It was designed by Google to reduce image file size and so speed up web-page loading. It is based on VP8's intra-frame coding.
I. BMP The Bitmap (BMP) file format is the graphics file format associated with the Microsoft Windows OS. These files are normally uncompressed, so they are large; they are used in basic Windows programming [7]. BMP images are binary files; the basic format offers no compression benefit, although it supports up to 24-bit colour.
COMPRESSION TECHNIQUES

There are two different ways to compress images: lossless and lossy compression.

Lossless Image Compression: A lossless technique means that the restored data file is identical to the original. This type of compression is used where loss of information is unacceptable and both subjective and objective quality matter. In a nutshell, the decompressed image is exactly the same as the original image.
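A toy illustration of the lossless idea is run-length encoding (shown here only as a sketch; real codecs such as those used in TIFF or GIF are more elaborate):

```python
def rle_encode(data: bytes):
    """Encode a byte string as (value, run_length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1              # extend the current run
        else:
            runs.append([b, 1])           # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original bytes."""
    return bytes(v for v, c in runs for _ in range(c))
```

Because decoding exactly inverts encoding, the restored data is bit-for-bit identical to the original, which is the defining property of lossless compression.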

Lossy Image Compression: Lossy compression is based on the observation that all real-world measurements inherently contain a certain amount of noise. If the changes made to an image resemble a small amount of additional noise, little harm is done. Compression techniques that allow this type of degradation are called lossy. The distinction is important because lossy techniques compress far more effectively than lossless methods; the higher the compression ratio, the more noise is added to the data. In a nutshell, the decompressed image is as close to the original as we choose to make it.
Lossless compression is reversible in nature, whereas lossy compression is irreversible, because the encoder of a lossy scheme contains a quantization block in its encoding procedure.

JPEG
JPEG (pronounced "jay-peg") is a standardized image compression mechanism. JPEG also stands for Joint Photographic Experts Group, the original name of the committee that wrote the standard. JPEG is designed for compressing full-colour or grayscale images of natural, real-world scenes, and works well on photographs, naturalistic artwork and similar material. Lossless image compression algorithms exist, but JPEG achieves much greater compression than lossless methods. JPEG performs lossy compression through quantization, which reduces the number of bits per sample or discards some samples entirely. As a result, the data file becomes smaller at the expense of image quality. The use of the JPEG compression method is motivated by the following reasons:

a. The compression ratio of lossless methods is not high enough for image and video compression.

b. JPEG uses transform coding, which is largely based on two observations. Observation 1: most useful image content changes relatively slowly across the image; it is unusual for intensity values to swing up and down several times within a small area such as an 8 x 8 block. Observation 2: lower spatial frequency components generally carry more information than the high-frequency components, which often correspond to less useful detail and to noise. JPEG is therefore designed to exploit known limitations of the human eye, notably the fact that small colour changes are perceived less accurately than small changes in brightness. JPEG can vary the degree of lossiness by adjusting its compression parameters, and JPEG decoders can trade decoding speed against image quality by using fast but inaccurate approximations to the required calculations. Useful JPEG compression ratios are typically in the range of about 10:1 to 20:1. Because of these strengths, JPEG has become the practical standard for storing realistic still images with lossy compression.
JPEG encoding works as shown in the figure; the decoder works in the reverse direction. Since the quantization block is irreversible in nature, it is not included in the decoding phase.
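The transform-plus-quantization step at the heart of JPEG can be sketched as follows. This is an illustrative reduction, not the actual standard: it assumes an orthonormal 8 x 8 DCT-II and a single flat quantization step, in place of JPEG's full quantization tables, zig-zag scan and entropy coding.

```python
import numpy as np

N = 8  # JPEG processes the image in 8 x 8 blocks

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix (C @ C.T == identity)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def block_roundtrip(block, q_step=16.0):
    """Forward 2-D DCT -> uniform quantization -> dequantize -> inverse DCT."""
    C = dct_matrix()
    coeffs = C @ block @ C.T               # energy concentrates in low frequencies
    quantized = np.round(coeffs / q_step)  # the lossy, irreversible step
    return C.T @ (quantized * q_step) @ C  # decoder: rescale and invert the DCT
```

For smooth blocks most high-frequency coefficients quantize to zero, which is where the compression comes from; coarser q_step values discard more coefficients at the cost of larger reconstruction error.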
A major drawback of DCT-based JPEG is that blocky artifacts appear in the decompressed images at low bit rates. These artifacts show up as artificial discontinuities between adjacent image blocks; an image illustrating them is shown in figure 6 below.
This degradation results from coarse quantization of the DCT coefficients of each image block without taking the inter-block correlations into account. Quantizing a single coefficient in a single block causes the reconstructed image to differ from the original by an error image proportional to the associated basis function in that block.

Quality measures that require both the original image and the distorted image are called "full-reference" or "non-blind" methods; measures that do not require the original image are called "no-reference" or "blind" methods; and measures that require the distorted image plus partial information about the original are called "reduced-reference" methods.

Image quality can be improved significantly by reducing the blocking artifacts, since increasing the bandwidth or bit rate to obtain better-quality images is often impossible or too costly. Several approaches for improving the quality of degraded images have been proposed in the literature. Techniques that require no changes to existing standards appear to offer the most practical solutions, and with the rapid growth of available computing power, more sophisticated methods can be implemented. The subject of this report is to salvage some of the quality lost to image compression by reducing these blocking artifacts.

CONCLUSION
All the basic image compression techniques have been discussed. Each technique is useful in its own area, and new compression techniques offering better compression ratios are being developed every day. This review gives a clear idea of the basic compression techniques and image types. Based on this review of the different image types and their compression algorithms, we conclude that the choice of compression algorithm depends on three factors: image quality, amount of compression and speed of compression.

REFERENCES
[1] A. Subramanya, "Image Compression Technique," IEEE Potentials, vol. 20, issue 1, pp. 19-23, Feb.-Mar. 2001.
[2] R. C. Woods, Digital Image Processing, 3rd ed., low-price ed. New Delhi: Pearson Prentice Hall, 2008, pp. 1-904.
[3] http://en.wikipedia.org/wiki/Image_file_formats
[4] K. K. Parhi and T. Nishitani, Digital Signal Processing for Multimedia Systems, ISBN 0-8247-1924.
[5] "Understanding Image Types," http://www.contentdm.com/USC/tutorial/image-filetypes.pdf, 1997-2005, DiMeMa, Inc., unpublished.
