IPPR Unit-1: Image Processing Mainly Includes the Following Steps
What is an image?
An image can be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or grey level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, the image is called a digital image.
Types of images
1. BINARY IMAGE – As its name suggests, a binary image contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This type of image is also known as a monochrome image.
2. BLACK AND WHITE IMAGE – An image that consists of only black and white colour is called a black and white image.
3. 8-bit COLOUR FORMAT – This is the most common image format. It has 256 different shades and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for mid grey.
4. 16-bit COLOUR FORMAT – This is a colour image format with 65,536 different colours. It is also known as the high colour format. In this format the distribution of values is not the same as in a grayscale image.
A 16-bit pixel is further divided into three channels: red, green and blue. This is the famous RGB format (see the sketch below).
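A minimal sketch of these pixel ranges (Python and NumPy are assumptions here, not part of the notes; the 5-6-5 bit split shown for the 16-bit pixel is one common convention, RGB565, since the notes do not fix the split):

    import numpy as np

    # Binary image: only 0 (black) and 1 (white)
    binary = np.array([[0, 1], [1, 0]], dtype=np.uint8)

    # 8-bit grayscale: 0 = black, 127 = mid grey, 255 = white
    gray = np.array([[0, 127], [200, 255]], dtype=np.uint8)

    # 16-bit "high colour" pixel, assuming an RGB565 layout:
    # 5 bits red, 6 bits green, 5 bits blue
    r, g, b = 31, 63, 0                 # maximum red and green, no blue
    pixel16 = (r << 11) | (g << 5) | b
    print(bin(pixel16), pixel16)        # 0b1111111111100000 65504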
Image as a Matrix
As we know, images are represented in rows and columns, so a digital image can be written in the following matrix form:

f(x, y) = | f(0, 0)      f(0, 1)      ...   f(0, N-1)   |
          | f(1, 0)      f(1, 1)      ...   f(1, N-1)   |
          | ...                                         |
          | f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) |

The right side of this equation is, by definition, a digital image. Every element of this matrix is called an image element, picture element, or pixel.
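As a rough illustration (Python with NumPy is an assumption of this sketch, not part of the notes), a small digital image can be held directly as such an M x N matrix and indexed by row and column:

    import numpy as np

    # A 3x3 digital image as an M x N matrix; each entry is one pixel.
    f = np.array([[ 12,  50, 200],
                  [ 30, 255,  90],
                  [  0,  60, 128]], dtype=np.uint8)

    M, N = f.shape                  # M rows, N columns
    print(M, N, f[0, 2])            # 3 3 200 (pixel at row 0, column 2)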
According to block 1, if the input is an image and the output is also an image, it is termed Digital Image Processing.
According to block 2, if the input is an image and the output is some kind of information or description, it is termed Computer Vision.
According to block 3, if the input is some description or code and the output is an image, it is termed Computer Graphics.
According to block 4, if the input is a description, some keywords or some code and the output is also a description or keywords, it is termed Artificial Intelligence.
Image acquisition is the first process shown in the figure. Image acquisition converts an image into digital form. However, acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves pre-processing, such as scaling.
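A minimal sketch of this stage, assuming the Pillow library and a placeholder file name photo.jpg; here the image is already in digital form and scaling is the only pre-processing applied:

    from PIL import Image   # Pillow is an assumption; any imaging library would do

    img = Image.open("photo.jpg")                          # image already in digital form
    small = img.resize((img.width // 2, img.height // 2))  # pre-processing: scaling
    small.save("photo_small.jpg")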
Image Enhancement is among the simplest and most appealing areas of digital image processing. The
idea behind enhancement techniques is to bring out details that are obscured or simply to highlight certain
features of interest in an image. A familiar example of enhancement is when we increase the contrast of
an image because it looks better.
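A small contrast-stretching sketch in Python/NumPy, one possible enhancement among many; stretch_contrast is a hypothetical helper written only for this illustration:

    import numpy as np

    def stretch_contrast(img):
        """Linearly stretch pixel values to cover the full 0-255 range."""
        img = img.astype(np.float64)
        lo, hi = img.min(), img.max()
        if hi == lo:                              # flat image: nothing to stretch
            return img.astype(np.uint8)
        return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

    dull = np.array([[100, 120], [140, 160]], dtype=np.uint8)
    print(stretch_contrast(dull))                 # values now span 0..255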
Image restoration is an area that also deals with improving the appearance of an image. However, unlike
enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques
tend to be based on mathematical or probabilistic models of image degradation.
Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a
good enhancement result.
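One way to see the model-based idea, sketched in Python/NumPy under the assumption that the degradation is purely additive zero-mean noise: averaging many independently degraded copies of the same scene suppresses the noise and recovers the original values.

    import numpy as np

    rng = np.random.default_rng(0)
    original = np.full((4, 4), 100.0)             # a flat 4x4 image with value 100

    # Degradation model: additive zero-mean Gaussian noise
    noisy_copies = [original + rng.normal(0, 20, original.shape) for _ in range(200)]

    restored = np.mean(noisy_copies, axis=0)      # averaging suppresses the noise
    print(round(float(restored.mean()), 1))       # close to 100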
Colour image processing is an area that has been gaining in importance because of the significant
increase in the use of digital images over the Internet.
Compression, as the name implies, deals with techniques for reducing the storage required to save an
image, or the bandwidth required to transmit it. Although storage technology has improved significantly
over the past decade, the same cannot be said for transmission capacity. Image compression is familiar to most users in the form of image file extensions: the .jpg extension, for example, uses the JPEG (Joint Photographic Experts Group) image compression standard.
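A rough sketch of the storage saving, assuming the Pillow library; the ramp image and the file name photo.jpg are placeholders for a real photograph:

    import os
    import numpy as np
    from PIL import Image          # Pillow is an assumption of this sketch

    # A synthetic 256 x 256 grayscale ramp standing in for a real photograph
    arr = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
    img = Image.fromarray(arr)

    img.save("photo.jpg", quality=75)         # lossy JPEG compression

    raw_bytes = arr.nbytes                    # 256 * 256 * 1 byte uncompressed
    jpeg_bytes = os.path.getsize("photo.jpg")
    print(raw_bytes, jpeg_bytes)              # the JPEG file is much smaller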
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.
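A minimal sketch of two basic morphological operations, erosion and dilation, assuming SciPy's ndimage module; the 3 x 3 square is a toy binary component:

    import numpy as np
    from scipy import ndimage   # SciPy is an assumption; any morphology library works

    square = np.zeros((7, 7), dtype=bool)
    square[2:5, 2:5] = True                      # a 3x3 white square

    eroded  = ndimage.binary_erosion(square)     # shrinks the component
    dilated = ndimage.binary_dilation(square)    # grows the component
    print(eroded.sum(), square.sum(), dilated.sum())   # 1 9 21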
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous
segmentation is one of the most difficult tasks in digital image processing.
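A minimal sketch of one simple segmentation approach, global thresholding, in Python/NumPy; the threshold value 128 is an arbitrary choice for this toy image:

    import numpy as np

    img = np.array([[ 20,  30, 200],
                    [ 25, 210, 220],
                    [ 15,  35, 205]], dtype=np.uint8)

    threshold = 128
    objects = img > threshold          # True where the bright object is
    print(objects.astype(np.uint8))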
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e. the set of pixels separating one image region from another) or all the points in the region itself.
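A small sketch of extracting such a boundary from a binary segmentation mask, again assuming SciPy's ndimage: the boundary is the region minus its erosion.

    import numpy as np
    from scipy import ndimage   # SciPy assumed, as in the morphology sketch

    region = np.zeros((7, 7), dtype=bool)
    region[1:6, 1:6] = True                               # a filled 5x5 region

    boundary = region & ~ndimage.binary_erosion(region)   # region minus its interior
    print(boundary.sum())                                 # 16 boundary pixels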
Recognition is the process that assigns a label (e.g. "vehicle") to an object based on its descriptors.
The computer in an image processing system is a general-purpose computer and can range from
a PC to a supercomputer.
Software for image processing consists of specialized modules that perform specific tasks. A
well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules.
Mass Storage capability is a must in image processing applications. An image of size 1024 x
1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of
storage space if the image is not compressed. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge.
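The arithmetic behind that figure, as a quick check:

    pixels = 1024 * 1024            # number of pixels
    bytes_per_pixel = 1             # 8-bit intensity
    size_bytes = pixels * bytes_per_pixel
    print(size_bytes, size_bytes / 2**20)   # 1048576 bytes = 1.0 MB (uncompressed)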
Image displays in use today are mainly colour TV monitors. In some cases, it is necessary to
have stereo displays and these are implemented in the form of headgear containing two small
displays embedded in goggles worn by the user.
Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
Networking is almost a default function in any computer system in use today. Because of the
large amount of data inherent in image processing applications, the key consideration in image
transmission is bandwidth.
Converting a continuous image to digital form involves two operations: sampling and quantization.
The sampling rate determines the spatial resolution of the digitized image, while the quantization level determines the number of grey levels in the digitized image. The magnitude of the sampled image is expressed as a digital value in image processing. The transition between continuous values of the image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of fine shading details in the image. The occurrence of false contours is the main problem in an image that has been quantized with an insufficient number of brightness levels.
An image may be continuous with respect to the x and y coordinates and also in amplitude. To convert it to digital form, we have to sample the function in both coordinates and amplitude. Digitizing the coordinate values is called sampling, and digitizing the amplitude values is called quantization.
Sampling:
The sampling rate of the digitizer determines the spatial resolution of the digitized image. The finer the sampling (i.e. the larger M and N), the better the approximation of the continuous image function f(x, y).
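A small Python/NumPy sketch of coarser sampling, keeping every second row and column of an 8 x 8 toy image:

    import numpy as np

    fine = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in for f(x, y)

    coarse = fine[::2, ::2]            # coarser sampling: every 2nd row and column
    print(fine.shape, coarse.shape)    # (8, 8) (4, 4)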
Quantization:
Quantization is digitizing the amplitude values of the image function, i.e. mapping the continuous range of intensities onto a finite set of discrete grey levels.
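A minimal sketch of quantizing a smooth 256-level ramp down to only four grey levels, which is the situation where false contours appear:

    import numpy as np

    gray = np.arange(256, dtype=np.uint8).reshape(16, 16)   # smooth ramp, 256 levels

    levels = 4                             # too few levels -> false contours
    step = 256 // levels
    quantized = (gray // step) * step      # values become 0, 64, 128, 192
    print(np.unique(quantized))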