Fundamental Steps of Digital Image Processing
[Figure: block diagram of the fundamental steps, linking the problem domain and image acquisition through enhancement, representation & description, and object recognition, with all stages connected to the knowledge base.]
a. Image Acquisition: This is the first step of image processing, in which an image is acquired in digital form. Pre-processing such as scaling may also be done at this stage.
b. Image Enhancement: In this stage, details or features of an image, such as brightness and contrast, are highlighted.
c. Image Restoration: In this stage, the appearance of a degraded image is improved, typically using mathematical models of the degradation.
d. Colour Image Processing: This includes colour modelling and processing in a digital domain.
e. Wavelets and Multi-Resolution Processing: In this stage, an image is represented at various degrees of resolution. The image is divided into smaller regions for data compression and for pyramidal representation.
f. Compression: This is a technique used to reduce the storage required for an image.
g. Morphological Processing: This deals with tools used for extracting the components of an image.
h. Segmentation: In this stage, an image is partitioned into its constituent objects.
i. Representation and Description: This follows the output of the segmentation stage, which is raw pixel data covering all points of each region. Representation transforms this raw data into a form suitable for further processing, while description extracts information that differentiates one class of objects from another.
j. Object Recognition: A label is assigned to an object based on its descriptors.
k. Knowledge Base: This holds prior knowledge about the problem domain that guides the other processing stages.
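The flow through these stages can be sketched as a tiny pipeline. The stage functions below (acquire, enhance, segment) are hypothetical placeholders for illustration, not a real library API:

```python
# A minimal sketch of the processing pipeline described above.
# Each stage is a hypothetical placeholder operating on a 2-D
# list of pixel intensities; real systems would use NumPy/OpenCV.

def acquire():
    # Image acquisition: here, a tiny hard-coded 2x2 "image".
    return [[10, 200], [50, 120]]

def enhance(img):
    # Enhancement: a simple contrast stretch to the full 0-255 range.
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    return [[(p - lo) * 255 // (hi - lo) for p in row] for row in img]

def segment(img, threshold=128):
    # Segmentation: global thresholding into object (1) / background (0).
    return [[1 if p >= threshold else 0 for p in row] for row in img]

image = acquire()
image = enhance(image)
mask = segment(image)
print(mask)
```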
3. What is an image?
An image is defined as a two-dimensional function F(x, y), where x and y are spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that point. When x, y, and the amplitude values of F are all finite, discrete quantities, we call it a digital image.
For the top pair of points (x1, y2) and (x2, y2), calculate the interpolated value R2:
R2 = ((x2 - x) / (x2 - x1)) * f(x1, y2) + ((x - x1) / (x2 - x1)) * f(x2, y2)
Now perform linear interpolation in the y-direction using the interpolated values R1 and R2:
f(x, y) = ((y2 - y) / (y2 - y1)) * R1 + ((y - y1) / (y2 - y1)) * R2
The result, f(x, y), is the bilinearly interpolated value of the function at the point (x, y).
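The R1/R2 construction above translates directly into code; this is a sketch for a single query point inside one cell of known corner values:

```python
# Bilinear interpolation at (x, y) from four known corner values,
# following the two-step R1/R2 construction described above.

def bilinear(x, y, x1, y1, x2, y2, q11, q21, q12, q22):
    """q11 = f(x1, y1), q21 = f(x2, y1), q12 = f(x1, y2), q22 = f(x2, y2)."""
    # Interpolate along x for the bottom pair (y = y1) -> R1
    r1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)
    # Interpolate along x for the top pair (y = y2) -> R2
    r2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)
    # Interpolate along y between R1 and R2
    return ((y2 - y) * r1 + (y - y1) * r2) / (y2 - y1)

# The midpoint of a unit square averages the four corner values:
print(bilinear(0.5, 0.5, 0, 0, 1, 1, 10, 20, 30, 40))  # -> 25.0
```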
Sampling: An analogue image is continuous not only in its co-ordinates (x-axis) but also in its amplitude (y-axis). The part of digitization that deals with the co-ordinates is known as sampling; it is performed on the independent variable. In the case of the equation y = sin(x), sampling is done on the x variable.
Quantization: Quantization is complementary to sampling: it is performed on the y-axis, while sampling is performed on the x-axis. Quantization is the process of transforming a real-valued sampled image into one taking only a finite number of distinct values. Under the quantization process, the amplitude values of the image are digitized.
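The sin(x) example above can be made concrete: sampling picks discrete x values, and quantization snaps each resulting amplitude to a small set of levels (the sample count and level count below are arbitrary choices for illustration):

```python
import math

# Sampling: take the continuous signal y = sin(x) at discrete x values
# (digitizing the independent variable / x-axis).
N = 8                      # number of samples over one period
samples = [math.sin(2 * math.pi * n / N) for n in range(N)]

# Quantization: map each real-valued sample to one of a finite set of
# levels (digitizing the amplitude / y-axis). Here, 4 uniform levels
# spanning [-1, 1].
levels = 4
step = 2 / (levels - 1)    # spacing between quantization levels
quantized = [round((s + 1) / step) * step - 1 for s in samples]

print(quantized)
```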
9. What is an 8-bit colour image? For what purpose could it be used?
An 8-bit colour image is a digital image in which each pixel is represented by an 8-bit value,
allowing for 256 possible colours. The pixel value acts as an index into a colour palette or
lookup table, where each index maps to a specific colour defined by an RGB (red, green, blue)
triplet.
Purpose: An 8-bit colour image is used for purposes where memory and bandwidth efficiency
are important, such as in older computer systems, web graphics (e.g., GIF format), icons, and
sprites. It is suitable for images that do not require a wide range of colours, making it ideal for
simple graphics, diagrams, and certain types of animations.
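The palette/lookup-table mechanism can be sketched as follows; the 4-entry palette below is a made-up miniature (a real 8-bit palette holds up to 256 entries):

```python
# Sketch of indexed (palette-based) colour: each pixel stores a small
# integer index into a lookup table of RGB triplets, instead of a
# full RGB value.

palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]  # a real 8-bit palette would hold up to 256 entries

indexed_image = [
    [0, 1],
    [2, 3],
]  # each entry fits in a single byte

# Decode the indexed image back to full RGB for display.
rgb_image = [[palette[i] for i in row] for row in indexed_image]
print(rgb_image)
```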
10. What is the Weber ratio? Show the variation of the Weber ratio.
In image processing, the Weber ratio is used to model and understand human visual perception
of contrast and brightness differences. It is a measure of the smallest detectable change in
luminance (brightness) relative to the background luminance:
Weber Ratio = ΔL / L
Weber ratio varies depending on background luminance:
- At low luminance levels (scotopic vision), it can be higher (around 0.03 - 0.05).
- In moderate light (mesopic vision), it's intermediate.
- In bright light (photopic vision), it tends to be lower (around 0.01 - 0.02).
It reflects how sensitive humans are to changes in brightness under different lighting conditions.
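A worked example of the formula ΔL = ratio × L (the numbers below simply illustrate the ranges quoted above; they are not measured data):

```python
# Just-noticeable luminance difference from the Weber ratio:
# delta_L = weber_ratio * background_luminance.

def just_noticeable_difference(background_luminance, weber_ratio):
    return weber_ratio * background_luminance

# In bright (photopic) conditions with a ratio of about 0.02, a
# background of 100 units needs a change of about 2 units to be noticed.
print(just_noticeable_difference(100, 0.02))
```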
20. Write the conversion rules for converting RGB colour model to
HSI colour model and vice-versa.
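As a sketch of the forward (RGB-to-HSI) direction, the commonly cited formulas can be written in code; the exact convention (degrees vs. radians, normalization) varies by text, so treat this as illustrative rather than as the definitive rules:

```python
import math

# Sketch of RGB -> HSI conversion using the commonly cited formulas,
# with R, G, B normalized to [0, 1] and hue returned in radians.

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3                            # intensity
    s = 0.0 if i == 0 else 1 - min(r, g, b) / i    # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.acos(num / den) if den != 0 else 0.0
    h = theta if b <= g else 2 * math.pi - theta   # hue
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, fully saturated
```

The reverse (HSI-to-RGB) direction uses analogous formulas applied per 120-degree hue sector.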
Each pixel in the image space is then mapped to a curve in the parameter space that represents
all the possible lines that could pass through that pixel. The curves in the parameter space are
then analysed to detect the presence of lines in the image.
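The pixel-to-curve mapping and accumulator voting described above can be sketched for a handful of edge pixels lying on the line y = x (the pixel set and bin sizes are made up for illustration):

```python
import math

# Minimal Hough transform sketch: each edge pixel votes for every
# (rho, theta) cell whose line rho = x*cos(theta) + y*sin(theta)
# passes through it; peaks in the accumulator reveal lines.

edge_pixels = [(0, 0), (1, 1), (2, 2), (3, 3)]       # all on y = x

thetas = [math.radians(t) for t in range(0, 180)]    # 1-degree steps
accumulator = {}                                      # (rho, theta_idx) -> votes

for x, y in edge_pixels:
    for t_idx, theta in enumerate(thetas):
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        accumulator[(rho, t_idx)] = accumulator.get((rho, t_idx), 0) + 1

# The strongest peak corresponds to the line through all four pixels.
(rho, t_idx), votes = max(accumulator.items(), key=lambda kv: kv[1])
print(rho, t_idx, votes)
```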
The 1-D DCT is defined as X(k) = α(k) · Σ_{n=0}^{N−1} x(n) cos[(2n + 1)kπ / 2N]. Here, the sequence of data is x(n), with n = 0, 1, 2, …, N − 1, and X(k) represents the kth transform coefficient.
Applications:
Some of the applications of DCT are as follows:
• Image compression algorithms like JPEG and HEIF.
• Audio file formats like MP3, AAC, and more.
• Video file formats like MPEG.
• Scientists and Engineers also use them for digital signal processing, telecommunications,
and more.
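A direct (unnormalized) implementation of the DCT-II sum makes the compression connection visible: a smooth or constant signal concentrates its energy in the low-order coefficients.

```python
import math

# Direct implementation of the unnormalized 1-D DCT-II:
# X(k) = sum_{n=0}^{N-1} x(n) * cos((2n + 1) * k * pi / (2N)).

def dct(x):
    N = len(x)
    return [
        sum(x[n] * math.cos((2 * n + 1) * k * math.pi / (2 * N))
            for n in range(N))
        for k in range(N)
    ]

# A constant signal compacts all its energy into the k = 0 coefficient,
# which is why the DCT suits compression of smooth image blocks.
print(dct([1.0, 1.0, 1.0, 1.0]))
```

Production codecs such as JPEG use fast, normalized variants of this transform rather than the direct sum shown here.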
24. What are image negatives? What is the equation for getting a
negative image?
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative
transformation, which is given by the expression S = L – 1 – r, where S is the output pixel and r
is the input pixel.
The equation for getting a negative image is S = L – 1 – r.
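Applying S = L − 1 − r pixel-by-pixel to a small grayscale image (sample values chosen for illustration):

```python
# Negative transformation s = L - 1 - r applied to each pixel of a
# small grayscale image with L = 256 gray levels.

L = 256
image = [
    [0, 64, 128],
    [192, 255, 10],
]

negative = [[L - 1 - r for r in row] for row in image]
print(negative)  # -> [[255, 191, 127], [63, 0, 245]]
```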
25. What is spatial domain representation?
An image can be represented in the form of a 2D matrix where each element of the matrix represents a pixel intensity. This representation of an image as a 2D matrix depicting its intensity distribution is called the spatial domain.
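In code, the spatial-domain view is simply a matrix whose elements are pixel intensities (the values below are arbitrary samples):

```python
# Spatial-domain view: the image is a 2-D matrix of intensities,
# and each matrix element is one pixel.

image = [
    [12, 50, 200],
    [90, 255, 30],
]

rows, cols = len(image), len(image[0])
print(rows, cols)    # image dimensions
print(image[1][2])   # intensity of the pixel at row 1, column 2
```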
40. State the JPEG compression algorithm and draw the schematic
diagram of JPEG compressor.
JPEG stands for Joint Photographic Experts Group. This type of compression is performed to reduce the size of a file without significantly damaging its visual quality.