Assignment-1 Digital Image Processing
Ans 1. Digital Image Processing (DIP) is the use of computer software to manipulate digital
images. It is used to enhance images and to extract useful information from them.
It also covers the conversion of signals from an image sensor into digital images.
o Digital Image Processing underlies many areas of computing, for example computer
graphics, photography, camera mechanisms, and pixel-level operations on signals.
o It provides a platform for operations such as image enhancement and the processing of
analog and digital signals, including image signals and voice signals.
o It can produce images in different formats.
Digital image processing affects almost every field and keeps growing with new technologies.
1) Image sharpening and restoration
This refers to the process of modifying the look and feel of an image: manipulating it to
achieve a desired output. It includes conversion, sharpening, blurring, edge detection,
retrieval, and recognition of images.
2) Medical field
Several applications in the medical field depend on digital image processing:
o Gamma-ray imaging
o PET scan
o X-Ray Imaging
o Medical CT scan
o UV imaging
3) Robot vision
Many robotic machines rely on digital image processing. Through image processing
techniques a robot finds its way, for example the hurdle-detection robot and the
line-follower robot.
4) Pattern recognition
This involves the study of image processing combined with artificial intelligence, so that
computer-aided diagnosis, handwriting recognition, and image recognition can be
implemented. Nowadays, image processing is widely used for pattern recognition.
5) Video processing
This is also an application of digital image processing. A video is a collection of frames
(pictures) arranged so that rapid movement is perceived. Video processing involves
frame-rate conversion, motion detection, noise reduction, colour-space conversion, etc.
Ans 2. The following are the fundamental steps of Digital Image Processing:
1. Image Acquisition
Image acquisition is the first of the fundamental steps of DIP. In this stage, the image is
obtained in digital form. Generally, pre-processing such as scaling is also done here.
2. Image Enhancement
Image enhancement is the simplest and most appealing area of DIP. In this stage, details
that are obscured, or simply interesting features of an image, are highlighted, such as
brightness, contrast, etc.
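As a rough sketch, a simple linear brightness/contrast adjustment illustrates the idea (the gain and offset values below are arbitrary, chosen only for illustration):

```python
def adjust(pixel, contrast=1.5, brightness=20):
    """Linear point enhancement s = contrast * r + brightness, clipped to [0, 255]."""
    return max(0, min(255, round(contrast * pixel + brightness)))

row = [40, 100, 180]
enhanced = [adjust(p) for p in row]
# enhanced == [80, 170, 255]  (the last value is clipped from 290)
```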
3. Image Restoration
Image restoration also improves the appearance of an image, but unlike enhancement, which
is subjective, restoration is objective: it is based on mathematical or probabilistic
models of image degradation.
4. Color Image Processing
Color image processing has become a prominent area because of the increased use of digital
images on the internet. It includes color modeling, processing in a digital domain, etc.
5. Wavelets and Multi-Resolution Processing
In this stage, an image is represented in various degrees of resolution. The image is
subdivided into smaller regions for data compression and for pyramidal representation.
6. Compression
Compression is a technique used to reduce the storage required for an image. It is a very
important stage, since compressing data is essential for internet use.
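As a minimal illustration of the idea, assuming run-length encoding as a (very simple) compression scheme, an image row with long runs of identical pixels can be stored as far fewer values:

```python
def rle_encode(pixels):
    """Run-length encoding: store [value, run length] pairs instead of every pixel."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

row = [255, 255, 255, 0, 0, 255]
rle_encode(row)  # -> [[255, 3], [0, 2], [255, 1]]
```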
7. Morphological Processing
This stage deals with tools for extracting image components that are useful in the
representation and description of shape.
8. Segmentation
In this stage, an image is partitioned into its constituent objects. Segmentation is one of
the most difficult tasks in DIP; the successful solution of imaging problems that require
objects to be identified individually depends on it.
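The simplest segmentation method is global thresholding. A minimal sketch (the threshold value and pixel values here are arbitrary):

```python
def threshold(image, t):
    """Global thresholding: label each pixel as object (1) or background (0)."""
    return [[1 if p > t else 0 for p in row] for row in image]

image = [[12, 200], [180, 30]]
segmented = threshold(image, 128)
# segmented == [[0, 1], [1, 0]]
```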
9. Representation and Description
Representation and description follow the output of the segmentation stage. That output is
raw pixel data comprising all the points of a region. Representation transforms the raw
data into a form suitable for further processing, while description extracts the attributes
that differentiate one class of objects from another.
10. Object Recognition
In this stage, a label is assigned to each object based on its descriptors.
11. Knowledge Base
Knowledge is the last stage in DIP. In this stage, important information about the image is
located, which limits the search processes. The knowledge base can be very complex, for
example when the image database contains high-resolution satellite images.
Ans 3. A. Image Digitization
Text and images can be digitized similarly: a scanner captures an image (which may be an
image of text) and converts it to an image file, such as a bitmap. An optical character
recognition (OCR) program then analyzes a text image for light and dark areas in order to
identify each alphabetic letter or numeric digit, and converts each character into an
ASCII code.
There are three types of connectivity, defined on the basis of adjacency:
a) 4-connectivity: Two pixels are 4-connected if they are 4-adjacent, i.e. one lies in the
4-neighbourhood N4 of the other (its horizontal and vertical neighbours).
b) 8-connectivity: Two pixels are 8-connected if they are 8-adjacent, i.e. one lies in the
8-neighbourhood N8 of the other (the 4-neighbours plus the four diagonal neighbours).
c) m-connectivity (mixed connectivity): Two pixels p and q are m-connected if they are
m-adjacent: q is in N4(p), or q is a diagonal neighbour of p and N4(p) ∩ N4(q) contains no
pixels whose values are in the allowed set V. m-adjacency removes the ambiguous multiple
paths that can arise with 8-adjacency.
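The neighbourhoods behind these definitions can be sketched as follows (pixels are represented as (row, column) tuples, and `foreground` stands for the set of pixels whose values are in V; this is illustrative, not a library API):

```python
def n4(p):
    """4-neighbourhood N4(p): the horizontal and vertical neighbours."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    """Diagonal neighbours ND(p)."""
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def n8(p):
    """8-neighbourhood N8(p) = N4(p) union ND(p)."""
    return n4(p) | nd(p)

def m_adjacent(p, q, foreground):
    """m-adjacency: q in N4(p), or q diagonal to p with no shared
    4-neighbour that is itself a foreground pixel."""
    if q in n4(p):
        return True
    return q in nd(p) and not (n4(p) & n4(q) & foreground)
```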
Many image processing techniques focus on gray-level transformations, since these operate
directly on pixels. A gray-level image has 256 levels of gray; in its histogram, the
horizontal axis spans 0 to 255 and the vertical axis gives the number of pixels at each level.
A gray-level transformation maps each input level r to an output level s via a
transformation function T:
s = T(r)
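For example, the image negative is the transformation s = (L - 1) - r. A small sketch, assuming an 8-bit image (L = 256) stored as nested lists:

```python
L = 256  # number of gray levels in an 8-bit image

def negative(r):
    """Image negative: s = T(r) = (L - 1) - r."""
    return (L - 1) - r

image = [[0, 64], [128, 255]]
result = [[negative(r) for r in row] for row in image]
# result == [[255, 191], [127, 0]]
```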
Histogram Processing
In digital image processing, the histogram is a graphical representation of a digital
image: a plot of the number of pixels at each tonal value. Nowadays image histograms are
built into digital cameras, and photographers use them to see the distribution of
captured tones.
In the graph, the horizontal axis represents the tonal values and the vertical axis
represents the number of pixels with each value. Black and dark areas appear on the left
side of the horizontal axis, medium grey in the middle, and bright areas on the right; the
height of the plot gives the size of the corresponding area of the image.
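A minimal sketch of computing such a histogram, assuming the image is stored as nested lists of 8-bit gray levels:

```python
from collections import Counter

def histogram(image, levels=256):
    """Number of pixels at each gray value 0 .. levels-1."""
    counts = Counter(pixel for row in image for pixel in row)
    return [counts.get(v, 0) for v in range(levels)]

image = [[0, 0, 128], [128, 128, 255]]
h = histogram(image)
# h[0] == 2, h[128] == 3, h[255] == 1
```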
Image Averaging
This is based on the assumption that the noise present in the images is purely random
(uncorrelated) and thus has zero average value. So, if we average n noisy images of the
same scene, the noise tends to cancel out and the result approximates the original image.
Applicability conditions: the images should be taken under identical conditions with the
same camera settings, as in the field of astronomy.
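The idea can be demonstrated on a simulated constant scene corrupted by zero-mean Gaussian noise (the scene value, noise level, and number of frames below are arbitrary):

```python
import random

def average_images(images):
    """Pixel-wise mean of n equally sized grayscale images."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]

# Simulate 500 noisy 1x1 captures of a scene whose true value is 100,
# corrupted by zero-mean Gaussian noise.
random.seed(0)
noisy = [[[100 + random.gauss(0, 10)]] for _ in range(500)]
avg = average_images(noisy)
# avg[0][0] lies close to the true value 100
```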
Image Subtraction
This is mainly used to enhance the difference between images. It is used for background
subtraction when detecting moving objects, and in medicine for detecting blockage in the
veins, a field known as mask mode radiography. Here we take two images, one before
injecting a contrast medium and one after, and subtract them to see how the medium
propagated and whether there is any blockage.
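A toy sketch of the mask-mode idea, with made-up pixel values standing in for the pre- and post-contrast frames:

```python
def subtract(after, before):
    """Pixel-wise difference image: nonzero only where something changed."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(after, before)]

before = [[10, 10], [10, 10]]   # frame before injecting the contrast medium
after  = [[10, 90], [10, 10]]   # the medium has filled one region
diff = subtract(after, before)
# diff == [[0, 80], [0, 0]]
```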
Image Multiplication
This can be used to extract a region of interest (ROI) from an image: we simply create a
binary mask and multiply the image by the mask to keep only the area of interest. Another
application is shading correction.
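A minimal sketch of ROI extraction by mask multiplication (the pixel values and mask are illustrative):

```python
def apply_mask(image, mask):
    """Multiply the image by a binary mask: 1 keeps a pixel, 0 zeroes it out."""
    return [[p * m for p, m in zip(ri, rm)] for ri, rm in zip(image, mask)]

image = [[12, 34], [56, 78]]
mask  = [[1, 0], [0, 1]]        # keep only the diagonal pixels
roi = apply_mask(image, mask)
# roi == [[12, 0], [0, 78]]
```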
Spectral enhancement relies on changing the gray-scale representation of pixels to give an
image with more contrast for interpretation. It applies the same spectral transformation to
all pixels with a given gray-scale value in an image. However, it does not take full
advantage of human recognition capabilities, even though it may allow better interpretation
of an image by a user.
• Interpretation of an image includes the use of brightness information and the
identification of features in the image.
• Several examples will demonstrate the value of spatial characteristics in image
interpretation.
• Spatial enhancement is the mathematical processing of image pixel data to emphasize
spatial relationships. This process defines homogeneous regions based on linear edges.
• Spatial enhancement techniques use the concept of spatial frequency within an image.
Spatial frequency is the manner in which gray-scale values change relative to their
neighbours within an image. If the gray scale varies slowly from one side of the image to
the other, the image is said to have a low spatial frequency; if pixel values vary
radically between adjacent pixels, the image is said to have a high spatial frequency.
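The notion of spatial frequency can be illustrated with first differences between adjacent pixels along a row (the sample rows are invented for illustration):

```python
def horizontal_differences(row):
    """Absolute gray-level change between adjacent pixels along a row."""
    return [abs(b - a) for a, b in zip(row, row[1:])]

smooth = [10, 12, 14, 16, 18]     # slowly varying -> low spatial frequency
busy   = [10, 200, 15, 240, 20]   # radical swings -> high spatial frequency

# max(horizontal_differences(smooth)) is small; max for busy is large
```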