
IPPR Unit-1

Digital Image Processing means processing a digital image by means of a digital computer. We can also say that it is the use of computer algorithms in order to obtain an enhanced image or to extract some useful information from it.

Image processing mainly includes the following steps:

1. Importing the image via image acquisition tools;
2. Analysing and manipulating the image;
3. Output, in which the result can be an altered image or a report based on the analysis of that image.
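The three steps above can be sketched in plain Python. This is only a minimal illustration: a hard-coded 3x3 grayscale array stands in for a real acquisition device, and the function names are our own.

```python
# Sketch of the three steps: acquire -> analyse -> output a report.
# The image data and function names are hypothetical illustrations.

def acquire_image():
    # Step 1: import the image (here, a tiny hard-coded 8-bit grayscale image)
    return [[0, 64, 128],
            [64, 128, 192],
            [128, 192, 255]]

def analyse(image):
    # Step 2: analyse/manipulate -- compute the mean pixel intensity
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def report(mean_intensity):
    # Step 3: output a report based on the analysis
    return f"mean intensity: {mean_intensity:.1f}"

print(report(analyse(acquire_image())))  # -> mean intensity: 127.9
```

In practice, step 1 would use an acquisition library or camera interface rather than a hard-coded list.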

What is an image?

An image is defined as a two-dimensional function, F(x, y), where x and y are spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that point. When x, y, and the amplitude values of F are all finite, we call it a digital image.
In other words, an image can be defined as a two-dimensional array arranged in rows and columns.
A digital image is composed of a finite number of elements, each of which has a particular value at a particular location. These elements are referred to as picture elements, image elements, or pixels. "Pixel" is the term most widely used to denote the elements of a digital image.

Types of an image

1. BINARY IMAGE – As its name suggests, a binary image contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also known as a monochrome image.
2. BLACK AND WHITE IMAGE – An image which consists of only black and white pixels is called a black and white image.
3. 8-BIT COLOR FORMAT – This is the most common image format. It has 256 different shades and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for gray.
4. 16-BIT COLOR FORMAT – This is a color image format with 65,536 different colors. It is also known as the High Color format. In this format the distribution of color is not the same as in a grayscale image.
A 16-bit format is actually divided into three further channels, Red, Green and Blue: the famous RGB format.

Image as a Matrix
As we know, images are arranged in rows and columns, so a digital image can be written as an M x N matrix:

    f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
                f(1, 0)      f(1, 1)      ...  f(1, N-1)
                ...
                f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Every element of this matrix is called an image element, picture element, or pixel.
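As an illustration, this matrix view can be mirrored with a nested Python list; the pixel values below are made up.

```python
# A digital image as an M x N matrix, sketched with a nested Python list
# (made-up pixel values for illustration).

M, N = 2, 3  # 2 rows, 3 columns
f = [[10, 20, 30],
     [40, 50, 60]]

# f[x][y]: x indexes the row, y the column, both starting at 0
assert f[0][0] == 10      # top-left pixel
assert f[M - 1][N - 2] == 50
assert f[1][2] == 60      # bottom-right pixel (row M-1, column N-1)
print(len(f), len(f[0]))  # matrix dimensions -> 2 3
```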

DIGITAL IMAGE REPRESENTATION IN MATLAB:

In MATLAB, indexing starts from 1 instead of 0. Therefore, MATLAB's f(1, 1) corresponds to f(0, 0) in the notation above; the two representations of the image are identical except for the shift in origin.
In MATLAB, matrices are stored in variables such as X, x, input_image, and so on. A variable name must begin with a letter, as in other programming languages.

PHASES OF IMAGE PROCESSING:

1. ACQUISITION – It could be as simple as being given an image which is already in digital form. The main work involves:
a) Scaling
b) Color conversion (RGB to grayscale or vice versa)
2. IMAGE ENHANCEMENT – It is among the simplest and most appealing areas of image processing. It is used to bring out hidden details of an image, and it is subjective.
3. IMAGE RESTORATION – It also deals with improving the appearance of an image, but it is objective (restoration is based on mathematical or probabilistic models of image degradation).
4. COLOR IMAGE PROCESSING – It deals with pseudocolor and full-color image processing; color models are applicable to digital image processing.
5. WAVELETS AND MULTI-RESOLUTION PROCESSING – Wavelets are the foundation of representing images in various degrees of resolution.
6. IMAGE COMPRESSION – It involves developing functions to reduce the storage required for an image. It mainly deals with image size or resolution.
7. MORPHOLOGICAL PROCESSING – It deals with tools for extracting image components that are useful in the representation and description of shape.
8. SEGMENTATION PROCEDURE – It includes partitioning an image into its constituent parts or objects. Autonomous segmentation is the most difficult task in image processing.
9. REPRESENTATION & DESCRIPTION – It follows the output of the segmentation stage; choosing a representation is only part of the solution for transforming raw data into processed data.
10. OBJECT DETECTION AND RECOGNITION – It is the process that assigns a label to an object based on its descriptors.

OVERLAPPING FIELDS WITH IMAGE PROCESSING

According to block 1, if the input is an image and we get an image as output, it is termed Digital Image Processing.
According to block 2, if the input is an image and we get some kind of information or description as output, it is termed Computer Vision.
According to block 3, if the input is some description or code and we get an image as output, it is termed Computer Graphics.
According to block 4, if the input is a description, keywords or code and we get a description or keywords as output, it is termed Artificial Intelligence.
Image acquisition is the first process shown in the figure. Image acquisition converts an image into digitized form. However, the acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves pre-processing, such as scaling.

Image Enhancement is among the simplest and most appealing areas of digital image processing. The
idea behind enhancement techniques is to bring out details that are obscured or simply to highlight certain
features of interest in an image. A familiar example of enhancement is when we increase the contrast of
an image because it looks better.
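A contrast stretch, a common enhancement, can be sketched in Python. This is a simplified illustration with a flat list of pixel values and our own function name, not a library routine.

```python
# Contrast stretching: linearly map the pixel range [min, max] onto the
# full 8-bit range [0, 255]. Input is a flat list of grayscale values
# (a simplified stand-in for a real image).

def stretch_contrast(pixels):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels[:]  # a constant image cannot be stretched
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# A low-contrast image spread over the full range:
print(stretch_contrast([100, 110, 120, 130]))  # -> [0, 85, 170, 255]
```

Whether the stretched image "looks better" is a subjective judgment, which is exactly what distinguishes enhancement from restoration.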

Image restoration is an area that also deals with improving the appearance of an image. However, unlike
enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques
tend to be based on mathematical or probabilistic models of image degradation.
Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a
good enhancement result.

Colour image processing is an area that has been gaining in importance because of the significant
increase in the use of digital images over the Internet.

Wavelets are the foundations for representing images in various degrees of resolution.

Compression, as the name implies, deals with techniques for reducing the storage required to save an
image, or the bandwidth required to transmit it. Although storage technology has improved significantly
over the past decade, the same cannot be said for transmission capacity. Image compression is familiar to
most users in the form of image file extensions; for example, the .jpg extension uses the JPEG (Joint
Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in
representation and description of the shape.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous
segmentation is one of the most difficult tasks in digital image processing.
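Thresholding is one of the simplest segmentation procedures; a minimal Python sketch follows (made-up pixel values and our own function name, not a standard segmentation algorithm from any library).

```python
# Global thresholding: partition pixels into object (1) and background (0)
# by comparing each value against a fixed threshold t.

def threshold(pixels, t):
    # classify each pixel: 1 = object (>= t), 0 = background
    return [1 if p >= t else 0 for p in pixels]

# Bright pixels become object, dark pixels become background:
print(threshold([12, 200, 34, 180], 128))  # -> [0, 1, 0, 1]
```

Choosing the threshold automatically is where the real difficulty of autonomous segmentation begins.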

Representation and description almost always follow the output of a segmentation stage, which usually is
raw pixel data, constituting either the boundary of a region (i.e. the set of pixels separating one image
region from another) or all the points in the region itself.

Recognition is the process that assigns a label (e.g. "vehicle") to an object based on its descriptors.

Concepts of an Image Processing System


Numerous models of image processing systems once sold throughout the world consisted of rather
substantial peripheral devices that attached to an equally substantial host computer. Late in the
1980s and early in the 1990s, the market shifted to image processing hardware in the form of
single boards designed to be compatible with industry standard buses and to fit into engineering
workstation cabinets and personal computers. The figure below shows the basic components
comprising a typical general purpose system used for digital image processing. The function of
each component is discussed below starting with Image Sensing. With reference to Image
Sensing, two elements are required to acquire digital images. The first is a physical device that is
sensitive to the energy radiated by the object we wish to capture as an image. The second, called
a digitizer, is a device for converting the output of the physical sensing device into digital form.
For instance, in a digital video camera, the sensors produce an electrical output proportional to
light intensity. The digitizer converts these outputs to digital data.

The computer in an image processing system is a general-purpose computer and can range from
a PC to a supercomputer.
Software for image processing consists of specialized modules that perform specific tasks. A
well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules.
Mass Storage capability is a must in image processing applications. An image of size 1024 x
1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of
storage space if the image is not compressed. When dealing with thousands or even millions, of
images, providing adequate storage in an image processing system can be a challenge.
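The one-megabyte figure follows directly from rows x columns x bytes per pixel; a quick Python check (the helper name is our own):

```python
# Uncompressed image storage: rows * cols * (bits per pixel / 8) bytes.

def storage_bytes(rows, cols, bits_per_pixel):
    return rows * cols * bits_per_pixel // 8

# 1024 x 1024 pixels at 8 bits/pixel:
print(storage_bytes(1024, 1024, 8))  # -> 1048576 bytes (1 MiB)
```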
Image displays in use today are mainly colour TV monitors. In some cases, it is necessary to
have stereo displays and these are implemented in the form of headgear containing two small
displays embedded in goggles worn by the user.
Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive
devices, inkjet units, and digital units such as optical and CD-ROM disks.
Networking is almost a default function in any computer system in use today. Because of the
large amount of data inherent in image processing applications, the key consideration in image
transmission is bandwidth.

2. Sampling and quantization


In order to become suitable for digital processing, an image function f(x,y) must be digitized both
spatially and in amplitude. Typically, a frame grabber or digitizer is used to sample and quantize the
analogue video signal. Hence, in order to create a digital image, we need to convert
continuous data into digital form. This is done in two steps:

 Sampling
 Quantization

The sampling rate determines the spatial resolution of the digitized image, while the quantization
level determines the number of grey levels in the digitized image. The magnitude of the sampled image
is expressed as a digital value in image processing. The transition between continuous values of the
image function and its digital equivalent is called quantization.
The number of quantization levels should be high enough for human perception of fine shading
details in the image. The occurrence of false contours is the main problem in an image that has been
quantized with insufficient brightness levels.

An image may be continuous with respect to the x and y coordinates and also in amplitude. To convert it to
digital form, we have to sample the function in both coordinates and in amplitude.
Sampling :

 The process of digitizing the co-ordinate values is called Sampling.


 A continuous image f(x, y) is normally approximated by equally spaced samples arranged
in the form of an N x M array, where each element of the array is a discrete quantity.

 The sampling rate of the digitizer determines the spatial resolution of the digitized image.
 The finer the sampling (i.e. the larger M and N), the better the approximation of the continuous
image function f(x, y).
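The sampling idea can be illustrated in Python by keeping every s-th row and column of a finely sampled array (made-up values and our own function name; real digitizers sample an analogue signal, not a list):

```python
# Spatial sampling sketch: retain every s-th row and every s-th column
# of a finely sampled 2-D array to obtain a coarser sample grid.

def downsample(image, s):
    return [row[::s] for row in image[::s]]

# A 4x4 "finely scanned" image (value = 10*row + column):
fine = [[r * 10 + c for c in range(4)] for r in range(4)]
print(downsample(fine, 2))  # -> [[0, 2], [20, 22]]
```

Larger s means coarser sampling and lower spatial resolution, matching the bullet points above.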

Quantization :

 The process of digitizing the amplitude values is called Quantization.


 The magnitude of the sampled image is expressed as digital values in image processing.
 The number of quantization levels should be high enough for human perception of the fine
details in the image.
 Most digital image processing devices use quantization into k equal intervals.
 If b bits are used:

No. of quantization levels = k = 2^b

 8 bits/pixel is commonly used.
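Uniform quantization into k = 2^b equal intervals can be sketched in Python (our own function name; assumes 8-bit input values in [0, 255]):

```python
# Quantize 8-bit pixel values into k = 2**b uniform levels by mapping
# each value to the lower edge of its interval.

def quantize(pixels, b):
    k = 2 ** b          # number of quantization levels
    step = 256 // k     # width of each uniform interval
    return [(p // step) * step for p in pixels]

# With b = 2 there are only k = 4 levels (step = 64):
print(quantize([0, 100, 200, 255], 2))  # -> [0, 64, 192, 192]
```

Small b produces the false contours mentioned above; with b = 8 the image is returned unchanged.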

Reference: TutorialsPoint (Sampling and Quantization)

http://www.cse.iitm.ac.in/~vplab/courses/CV_DIP/PDF/NEIGH_CONN.pdf
