Unit 1 DIP


Jai Shree Raam Jai Bajrang Bali

Unit 1 : DIP : Digital Image Fundamentals

What is Digital Image Processing?


Digital Image Processing Basics
Digital Image Processing means processing a digital image by means of
a digital computer. We can also say that it is the use of computer
algorithms to obtain an enhanced image or to extract useful
information from an image.
Digital image processing is the use of algorithms and mathematical
models to process and analyze digital images. The goal of digital
image processing is to enhance the quality of images, extract
meaningful information from images, and automate image-based
tasks.
The basic steps involved in digital image processing are:
1. Image acquisition: This involves capturing an image using a
digital camera or scanner, or importing an existing image into a
computer.
2. Image enhancement: This involves improving the visual quality
of an image, such as increasing contrast, reducing noise, and
removing artifacts.
3. Image restoration: This involves removing degradation from an
image, such as blurring, noise, and distortion.
4. Image segmentation: This involves dividing an image into
regions or segments, each of which corresponds to a specific
object or feature in the image.
5. Image representation and description: This involves
representing an image in a way that can be analyzed and
manipulated by a computer, and describing the features of an
image in a compact and meaningful way.
6. Image analysis: This involves using algorithms and
mathematical models to extract information from an image,
such as recognizing objects, detecting patterns, and quantifying
features.
7. Image synthesis and compression: This involves generating new
images or compressing existing images to reduce storage and
transmission requirements.
Digital image processing is widely used in a variety of
applications, including medical imaging, remote sensing,
computer vision, and multimedia.

Image processing mainly includes the following steps:

1. Importing the image via image acquisition tools;
2. Analysing and manipulating the image;
3. Producing output, which can be an altered image or a report
based on the analysis of that image.
What is an image?
An image is defined as a two-dimensional function, F(x, y), where x
and y are spatial coordinates, and the amplitude of F at any pair of
coordinates (x, y) is called the intensity of the image at that point.
When x, y, and the amplitude values of F are all finite, discrete
quantities, we call the image a digital image.
In other words, an image can be represented by a two-dimensional
array arranged in rows and columns.
A digital image is composed of a finite number of elements, each of
which has a particular value at a particular location. These elements
are referred to as picture elements, image elements, or pixels; pixel
is the term most widely used to denote the elements of a digital
image.
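To make this concrete, here is a minimal sketch (in Python with NumPy, an illustrative choice rather than part of these notes) of a digital image as a two-dimensional array of intensity values:

import numpy as np

# A tiny 3x3 grayscale "image": rows x columns of intensity values.
# Each entry is the amplitude of F(x, y) at one spatial coordinate.
img = np.array([[  0, 128, 255],
                [ 64, 192,  32],
                [255,   0, 127]], dtype=np.uint8)

print(img.shape)   # (3, 3): 3 rows and 3 columns
print(img[0, 2])   # 255: the pixel in row 0, column 2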
Types of an image
1. BINARY IMAGE – As its name suggests, a binary image contains
only two pixel values, 0 and 1, where 0 refers to black and 1
refers to white. Such an image is also known as a monochrome
image.
2. BLACK AND WHITE IMAGE – An image that consists of only
black and white pixels is called a black and white image.
3. 8-bit COLOR FORMAT – This is the most common image format.
It has 256 different shades and is commonly known as a
grayscale image. In this format, 0 stands for black, 255 stands
for white, and 127 stands for gray.
4. 16-bit COLOR FORMAT – This is a color image format with
65,536 different colors, also known as the high color format. In
this format the distribution of levels is not the same as in a
grayscale image: a 16-bit value is divided into three components,
red, green and blue. This is the familiar RGB format.
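The formats above can be sketched as array shapes and bit packings (Python/NumPy assumed for illustration; the 5-6-5 bit split shown is the common high-color layout, an assumption not stated in these notes):

import numpy as np

# 8-bit grayscale: one value per pixel, 0 (black) .. 255 (white).
gray = np.zeros((2, 2), dtype=np.uint8)
gray[0, 0] = 127                      # mid gray

# Full RGB: three values (red, green, blue) per pixel.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)               # a pure red pixel

# 16-bit high color commonly packs R, G, B into 5 + 6 + 5 bits.
r, g, b = 255, 0, 0
high = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
print(gray.shape, rgb.shape, hex(high))   # (2, 2) (2, 2, 3) 0xf800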

The Origins of Digital Image Processing:


One of the first applications of digital images was in the newspaper
industry, when pictures were first sent by submarine cable between
London and New York. Introduction of the Bartlane cable picture
transmission system in the early 1920s reduced the time required to
transport a picture across the Atlantic from more than a week to less
than three hours. Specialized printing equipment coded pictures for
cable transmission and then reconstructed them at the receiving end.
Some of the initial problems in improving the visual quality of these
early digital pictures were related to the selection of printing
procedures and the distribution of intensity levels. In fact, digital
images require so much storage and computational power that
progress in the field of digital image processing has been dependent
on the development of digital computers and of supporting
technologies, including data storage, display and transmission.
A digital image is composed of a finite number of elements, each of
which has a location and a value. These elements are referred to as
picture elements, image elements and pixels; pixel is the term used
to denote the elements of a digital image. The process of acquiring
an image of an area containing text, preprocessing that image,
extracting the individual characters, describing the characters in a
form suitable for computer processing, and recognizing those
individual characters is an example of digital image processing.
Digital image processing techniques began to be used in the late
1960s and early 1970s in medical imaging, remote Earth-resource
observations and astronomy. The invention in the early 1970s of
computerized axial tomography (CAT), also called computerized
tomography (CT), is one of the most important events in the
application of image processing in medical diagnosis. In CT, a ring of
detectors encircles the object while an X-ray source rotates about it;
each ray passes through the object and is collected at the opposite
end by the corresponding detectors in the ring. As the source rotates,
this procedure is repeated. Tomography consists of algorithms that
use the sensed data to construct an image that represents a slice
through the object. Computer procedures are used to enhance the
contrast or to code the intensity levels into color for easier
interpretation of X-rays and other images used in industry, medicine
and the biological sciences. Image enhancement and restoration
procedures are used to process degraded images of unrecoverable
objects or of experimental results too expensive to duplicate. Image
processing methods have successfully restored blurred pictures that
were the only available records of rare artifacts lost or damaged
after being photographed.

Applications of Digital Image Processing


Digital image processing has a direct effect on almost every field
and continues to grow over time with new technologies.
1) Image sharpening and restoration
It refers to the process of modifying the look and feel of an image,
manipulating the image to achieve the desired output. It includes
conversion, sharpening, blurring, edge detection, retrieval, and
recognition of images.
2) Medical Field
There are several applications in the medical field that depend on
digital image processing:
o Gamma-ray imaging
o PET scan
o X-Ray Imaging
o Medical CT scan
o UV imaging
3) Robot vision
Several robotic machines work on digital image processing. Through
image processing techniques, robots find their way; examples are the
hurdle-detection robot and the line-follower robot.
4) Pattern recognition
It involves the study of image processing combined with artificial
intelligence, so that computer-aided diagnosis, handwriting
recognition and image recognition can be implemented. Nowadays,
image processing is widely used for pattern recognition.
5) Video processing
It is also one of the applications of digital image processing. A video
is a collection of frames or pictures arranged so as to produce the
appearance of fast-moving pictures. Video processing involves
frame-rate conversion, motion detection, noise reduction, colour-space
conversion, etc.

Fundamental Steps in Digital Image Processing:


Image Acquisition
Image acquisition is the first step in digital image processing. In this
step we obtain the image in digital form. This is done using sensing
elements, such as sensor strips and sensor arrays, together with an
electromagnetic light source. Light from the source falls on an object
and is reflected or transmitted, and this light is captured by the
sensing element. The sensor produces an output voltage waveform in
response to the light falling on it. Reflected light is captured when
the source is visible light, whereas with X-ray sources the
transmitted rays are captured.
Figure: Image Acquisition
The captured image is an analog image, since the sensor output is
continuous. To digitise the image, we use sampling and quantization,
which discretize it: sampling discretizes the spatial coordinates,
whereas quantization discretizes the amplitude (intensity) values.

Figure: Sampling and Quantization
Image Enhancement
Image enhancement is the manipulation of an image for a specific
purpose or objective, and is widely used, for example, in
photo-beautification applications. Enhancement is performed using
filters; each filter suits a specific situation, for instance minimising
noise in an image. A correlation operation between the filter and the
input image matrix yields the enhanced output image. To simplify
the process, we can instead perform multiplication in the frequency
domain, which gives the same result: we transform the image from
the spatial domain to the frequency domain using the discrete
Fourier transform (DFT), multiply by the filter, and return to the
spatial domain using the inverse discrete Fourier transform (IDFT).
Filters used in the frequency domain include the Butterworth filter
and the Gaussian filter.
The most commonly used filters are the low-pass filter and the
high-pass filter. A low-pass filter smoothens an image by averaging
neighbouring pixel values, thus minimising random noise; it gives a
blurring effect and suppresses sharp edges. A high-pass filter
sharpens an image using spatial differentiation; examples are the
Laplacian filter and the high-boost filter. There are also nonlinear
filters for other purposes; for example, the median filter is used to
eliminate salt-and-pepper noise.
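The frequency-domain procedure described above can be sketched as follows (Python/NumPy assumed; the random test image and the cutoff value d0 are illustrative choices):

import numpy as np

def gaussian_lowpass(img, d0=30.0):
    """Smooth img by multiplying its centred DFT with a Gaussian low-pass."""
    F = np.fft.fftshift(np.fft.fft2(img))            # spatial -> frequency domain
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    H = np.exp(-(U**2 + V**2) / (2 * d0**2))         # Gaussian low-pass transfer function
    G = F * H                                        # filtering = multiplication here
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))  # back to the spatial domain

# Smoothing a noisy test image: high frequencies (random noise) are attenuated.
noisy = np.random.rand(64, 64)
smooth = gaussian_lowpass(noisy, d0=10.0)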
Image Restoration
Like image enhancement, image restoration is concerned with
improving an image, but whereas enhancement is largely a subjective
step, restoration is an objective one. Restoration is applied to a
degraded image in an attempt to recover the original. First we
estimate the degradation model, and then we find the restored
image.
We can estimate the degradation by observation, experimentation or
mathematical modelling. Observation is used when nothing is known
about the setup or environment in which the image was taken. In
experimentation, we find the point spread function of an impulse
using a similar setup. In mathematical modelling, we also take the
environment in which the image was taken into account, which
makes it the best of the three methods.

Figure: Image Restoration Block Diagram


To find the restored image, we generally use one of three filters: the
inverse filter, the minimum mean-square-error (Wiener) filter, or the
constrained least-squares filter. Inverse filtering is the simplest
method but cannot be used in the presence of noise. The Wiener
filter minimises the mean square error. Constrained least-squares
filtering imposes an explicit constraint and is the best of the three.
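A minimal sketch of the Wiener filter in the frequency domain (Python/NumPy assumed; H, the frequency response of the degradation, and the noise-to-signal constant K are assumed known, which in practice comes from the estimation step described earlier):

import numpy as np

def wiener_restore(g, H, K=0.01):
    """Restore degraded image g given the blur's frequency response H.

    K approximates the noise-to-signal power ratio. Setting K = 0 reduces
    this to plain inverse filtering, which fails in the presence of noise.
    """
    G = np.fft.fft2(g)
    W = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener transfer function
    return np.real(np.fft.ifft2(W * G))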
Colour Image Processing
Colour image processing is motivated by the fact that colour makes
classification easier and that the human eye can distinguish
thousands of colours but far fewer shades of grey. Colour image
processing is divided into two types: pseudo-colour (reduced-colour)
processing and full-colour processing. In pseudo-colour processing,
grey levels are mapped to colours; this approach was used earlier.
Nowadays full-colour processing is used with full-colour sensors,
such as digital cameras and colour scanners, as the price of
full-colour sensor hardware has fallen significantly.
There are various colour models, such as RGB (Red, Green, Blue),
CMY (Cyan, Magenta, Yellow) and HSI (Hue, Saturation, Intensity),
and different models suit different purposes. RGB is the natural
model for computer monitors, whereas CMY is the natural model for
printers, so internal hardware converts RGB to CMY and vice versa.
Humans, however, describe colour most naturally in terms of HSI
rather than RGB or CMY.
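The RGB-to-CMY conversion just mentioned is a simple complement on normalised values, as this sketch shows (Python/NumPy assumed):

import numpy as np

def rgb_to_cmy(rgb):
    """Convert an 8-bit RGB image to CMY values in [0, 1]."""
    return 1.0 - rgb.astype(np.float64) / 255.0

def cmy_to_rgb(cmy):
    """Invert the conversion back to 8-bit RGB."""
    return np.round((1.0 - cmy) * 255.0).astype(np.uint8)

red = np.array([[[255, 0, 0]]], dtype=np.uint8)   # one pure red pixel
print(rgb_to_cmy(red))   # [[[0. 1. 1.]]]: no cyan, full magenta and yellow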

Figure: Colour Models
Wavelets
Wavelets represent an image at various degrees of resolution. The
wavelet transform is one member of a class of linear transforms that
also includes the Fourier, cosine, sine, Hartley, Slant, Haar and
Walsh-Hadamard transforms. The coefficients of such a linear
expansion decompose a function into a weighted sum of orthogonal
or biorthogonal basis functions. All these transforms are reversible
and interconvertible; they express the same information and energy
and are therefore equivalent, varying only in how the information is
represented.
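As a small concrete example of a reversible transform from this class, here is one level of the 1-D Haar wavelet transform (a sketch in Python/NumPy; the four-sample signal is illustrative):

import numpy as np

def haar_level(x):
    """One level of the 1-D Haar transform of an even-length signal.

    Returns (approximation, detail): scaled pairwise sums and differences.
    The scaling by sqrt(2) keeps the transform orthogonal and reversible.
    """
    x = np.asarray(x, dtype=np.float64)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: coarse approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: fine detail
    return a, d

a, d = haar_level([4, 6, 10, 12])
# Perfect reconstruction: x_even = (a + d)/sqrt(2), x_odd = (a - d)/sqrt(2)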
Compression
Compression deals with decreasing the storage required for image
information or the bandwidth required to transmit it. Compression
technology has grown widely in this era; many people know it
through the common image extension JPEG (Joint Photographic
Experts Group), which is a compression standard. Compression is
achieved by removing redundant and irrelevant data. In the encoding
process, the image passes through a series of stages: mapper,
quantizer and symbol encoder. The mapper may be reversible or
irreversible; run-length encoding is an example of a mapper. The
quantizer reduces accuracy and is an irreversible process. The symbol
encoder assigns short codes to more frequent values and is a
reversible process.
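Run-length encoding, named above as an example of a mapper, is easy to sketch; note that it is exactly reversible, i.e. a lossless mapping (Python assumed):

def rle_encode(pixels):
    """Run-length encode a 1-D pixel sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert the mapping exactly: RLE loses no information."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 255]
print(rle_encode(row))                     # [(255, 3), (0, 2), (255, 1)]
assert rle_decode(rle_encode(row)) == row  # reversible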

Figure: Image Compression Block Diagram


To get back the original image, we perform decompression through
the stages of symbol decoder and inverse mapper. Compression may
be lossy or lossless: if after decompression we get exactly the same
image, the compression is lossless; otherwise it is lossy. Examples of
lossless compression are Huffman coding, bit-plane coding, LZW
(Lempel-Ziv-Welch) coding and pulse code modulation (PCM); PNG is
likewise a lossless format. JPEG is the classic example of lossy
compression. Lossy compression is generally preferred in practice,
since the change is not visible to the naked eye and it saves far
more storage and bandwidth than lossless compression.
Morphological Image Processing
In morphological image processing, we try to understand the
structure of the image by finding the components present in it.
Morphology is useful for representing and describing the shape and
structure of an image: we find boundaries, holes, connected
components, the convex hull, thinnings, thickenings, skeletons, etc.
It is a fundamental step for the stages that follow.
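Erosion and dilation, the two operations underlying most of the structures listed above, can be sketched for binary images with plain NumPy (the 3x3 square structuring element and the test image are illustrative):

import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element.

    A pixel stays 1 only if every pixel under the element is 1."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=0)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img, k=3):
    """Binary dilation: a pixel becomes 1 if any pixel under the element is 1."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Boundary extraction: subtract the erosion from the image itself.
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1
boundary = img - erode(img)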
Segmentation
Segmentation extracts information from images on the basis of two
properties: similarity and discontinuity. For example, a sudden
change in intensity value represents an edge. Detection of isolated
points, line detection and edge detection are some of the tasks
associated with segmentation. Segmentation can be performed by
various methods, such as thresholding, clustering, superpixels, graph
cuts, region growing, region splitting and merging, and
morphological watersheds.
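Of the methods just listed, global thresholding is the simplest; this sketch implements the basic iterative version, where the threshold settles at the midpoint between the two class means (Python/NumPy assumed, with a synthetic bimodal image):

import numpy as np

def global_threshold(img, eps=0.5):
    """Iteratively choose a global threshold T and return the mask img > T."""
    t = img.mean()                         # initial estimate
    while True:
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:   # degenerate split: stop here
            return img > t
        new_t = (lo.mean() + hi.mean()) / 2
        if abs(new_t - t) < eps:           # converged
            return img > new_t
        t = new_t

# Synthetic image with a dark background and a bright object.
img = np.concatenate([np.random.normal(60, 10, 500),
                      np.random.normal(180, 10, 500)]).reshape(20, 50)
mask = global_threshold(img)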
Feature Extraction
Feature extraction is the next step after segmentation. We extract
features from images, regions and boundaries; corner detection is
one example. These features should be independent of, and
insensitive to, variations in parameters such as scale, rotation,
translation and illumination. Boundary features can be described by
boundary descriptors such as shape numbers, chain codes, Fourier
descriptors and statistical moments.
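Of the descriptors just mentioned, the chain code is the simplest to sketch: each move along an ordered, 8-connected boundary is encoded as a direction number (Python assumed; the unit square is illustrative):

# 8-direction Freeman chain code: (dx, dy) -> code 0..7,
# with 0 = east, numbered anticlockwise.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode an ordered 8-connected boundary as its Freeman chain code."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]  # unit square, anticlockwise
print(chain_code(square))   # [0, 2, 4, 6]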

Figure: Region and Boundary Extraction


Image Pattern Classification
In image pattern classification, we assign labels to images on the
basis of the extracted features, for example classifying an image as
a cat image. Classical methods for image pattern classification are
the minimum-distance classifier, correlation and the Bayes classifier.
Modern methods use neural networks and deep learning models,
such as deep convolutional neural networks, which are now the
dominant approach to this task.
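A minimal sketch of the minimum-distance classifier named above (Python/NumPy assumed; the two-dimensional features and class prototypes are hypothetical):

import numpy as np

def min_distance_classify(x, prototypes):
    """Assign feature vector x to the class whose prototype (mean) is nearest.

    `prototypes` maps each class label to the mean feature vector of its
    training samples; Euclidean distance decides the label."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# Hypothetical 2-D features, e.g. (mean intensity, edge density).
prototypes = {"cat": np.array([0.2, 0.8]),
              "dog": np.array([0.7, 0.3])}
print(min_distance_classify(np.array([0.25, 0.75]), prototypes))   # cat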
Applications
1. In medical diagnosis, gamma-ray imaging, X-ray imaging,
ultrasound imaging and MRI are used to examine the internal
organs and bones of the body.
2. In satellite imaging and astronomy, infrared imaging is used.
3. In forensics, for biometrics such as thumbprints and retina scan,
digital image processing is used.
4. We can find defects in manufactured packaged goods using
microwave imaging.
5. We can obtain information about circuit boards and
microprocessors, for example when inspecting them for defects.
6. Using image restoration, we can identify the car number plates
of moving cars from CCTV for police investigations.
7. Beautify filters are used in social media platforms which use
image enhancement.
8. We can classify and identify images using deep learning models.
Conclusion
A picture is worth a thousand words, and the world is filled with
beautiful pictures. Digital image processing is, at its core, the
manipulation of such images according to our needs, and we live in a
world that applies advanced digital image processing in diverse fields.

Components of Image Processing System


An image processing system is the combination of the different
elements involved in digital image processing. Digital image
processing is the processing of an image by means of a digital
computer, using computer algorithms to perform image processing
on digital images.
It consists of the following components:
o Image Sensors:
Image sensors sense the intensity, amplitude, coordinates and
other features of the image and pass the result to the image
processing hardware. This stage includes the problem domain.
o Image Processing Hardware:
Image processing hardware is the dedicated hardware used to
process the data obtained from the image sensors. It passes the
result to a general purpose computer.
o Computer:
The computer used in an image processing system is a general
purpose computer of the kind we use in daily life.
o Image Processing Software:
Image processing software is the software that includes all the
mechanisms and algorithms used in the image processing
system.
o Mass Storage:
Mass storage stores the pixels of the images during processing.
o Hard Copy Device:
Once the image is processed, it can be recorded on a hard copy
device such as a printer or film; storage media such as pen
drives or external ROM devices can also hold the output.
o Image Display:
This includes the monitor or display screen that displays the
processed images.
o Network:
The network connects all of the above elements of the image
processing system.

Elements of Visual Perception in Digital Image Processing


In digital image processing, understanding the elements of visual
perception is essential, as it influences how we interpret and process
visual data. Here are the key elements of visual perception in digital
image processing:
1. Brightness
 Brightness refers to the overall intensity or luminance of an
image, influencing how "light" or "dark" it appears. This
perception is relative; an object may look brighter or darker
depending on its surroundings.
2. Contrast
 Contrast is the difference in intensity between adjacent regions
in an image. High contrast makes edges and boundaries more
noticeable, whereas low contrast makes images look flat and
details less visible. Effective contrast manipulation helps in
emphasizing important features.
3. Color Perception
 Color perception involves understanding how humans interpret
colors based on their wavelength. The three primary aspects of
color perception are hue (the type of color), saturation (color
purity), and intensity (brightness of color). Color models like
RGB and HSV are often used in digital image processing.
4. Spatial Frequency
 Spatial frequency refers to the level of detail in an image,
represented as patterns of light and dark. High spatial
frequency corresponds to fine details (like edges and textures),
while low spatial frequency relates to broader regions (like large
blocks of color or shading). Frequency-based techniques like
Fourier Transform exploit this for filtering and feature
extraction.
5. Edges and Contours
 Edges are boundaries where there is a sharp intensity change.
They play a critical role in identifying shapes and boundaries of
objects in an image. Edge detection techniques, such as the
Sobel or Canny algorithms, help in highlighting these regions,
which is important for object recognition and segmentation.
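The Sobel operator mentioned above approximates the intensity gradient with two 3x3 kernels; this sketch computes the gradient magnitude with plain NumPy (the step-edge test image is illustrative):

import numpy as np

def sobel_magnitude(img):
    """Approximate the gradient magnitude using the two Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T                                   # vertical-change kernel
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)   # large values mark sharp intensity changes

img = np.zeros((8, 8)); img[:, 4:] = 255       # vertical step edge
edges = sobel_magnitude(img)                   # strongest response along the edge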
6. Texture
 Texture describes the surface characteristics and visual patterns
within an image, such as smoothness, roughness, or regularity.
Texture analysis helps in identifying materials, surfaces, or
repeating structures, often using techniques like Gabor filters or
wavelet transforms.
7. Depth Perception
 Depth perception provides a sense of three-dimensionality and
distance. In digital imaging, depth cues like shading, lighting,
perspective, and binocular disparity can be used to infer the
spatial relationship of objects.
8. Motion Perception
 Motion perception detects changes in position over time,
essential for video and dynamic image processing. Techniques
like optical flow and frame differencing can capture and analyze
motion in a sequence of images.
9. Gestalt Principles
 Gestalt principles describe how humans tend to organize visual
elements into groups or unified wholes, including proximity,
similarity, closure, continuity, and symmetry. These principles
guide segmentation and pattern recognition algorithms, helping
to group related objects in an image.
10. Context and Familiarity
 Context and familiarity refer to the influence of prior
knowledge and experience on perception. In digital image
processing, this is simulated by using machine learning and
deep learning techniques to recognize objects or scenes based
on learned patterns and contexts.
Summary
Understanding these elements allows for more effective processing,
analysis, and interpretation of visual data in digital images, enabling
applications in fields like computer vision, object detection, medical
imaging, and multimedia processing.
Image Sensing and Acquisition:
Pending:

Image Sampling and Quantization:


In digital image processing, two fundamental concepts are image
sampling and quantization. These processes are crucial for converting
an analog image into a digital form that can be stored, manipulated,
and displayed by computers. Despite being closely related, sampling
and quantization serve distinct purposes and involve different
techniques. This article delves into the definitions, processes, and
differences between image sampling and quantization.
What is Image Sampling?
Image sampling is the process of converting a continuous image
(analog) into a discrete image (digital) by selecting specific points
from the continuous image. This involves measuring the image at
regular intervals and recording the intensity (brightness) values at
those points.
How Image Sampling Works
 Grid Overlay: A grid is placed over the continuous image,
dividing it into small, regular sections.
 Pixel Selection: At each intersection of the grid lines, a sample
point (pixel) is chosen.
Examples of Sampling
 High Sampling Rate: A digital camera with a high megapixel
count captures more details because it samples the image at
more points.
 Low Sampling Rate: An old VGA camera with a lower resolution
captures less detail because it samples the image at fewer
points.
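A crude sketch of the effect of the sampling rate (Python/NumPy assumed; here we subsample an existing digital array, whereas true acquisition samples a continuous scene):

import numpy as np

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in for a scene

# Lower sampling rate: keep every 2nd row and column.
coarse = img[::2, ::2]
print(img.shape, "->", coarse.shape)   # (8, 8) -> (4, 4): less spatial detail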
What is Image Quantization?
Image quantization is the process of converting the continuous
range of pixel values (intensities) into a limited set of discrete
values. This step follows sampling and reduces the precision of the
sampled values to a manageable level for digital representation.
How Image Quantization Works
 Value Range Definition: The continuous range of pixel values is
divided into a finite number of intervals or levels.
 Mapping Intensities: Each sampled pixel intensity is mapped to
the nearest interval value.
 Assigning Discrete Values: The original continuous intensity
values are replaced by the discrete values corresponding to the
intervals.
Examples of Quantization
 High Quantization Levels: An image with 256 levels (8 bits per
pixel) can represent shades of gray more accurately.
 Low Quantization Levels: An image with only 4 levels (2 bits per
pixel) has much less detail and appears more posterized.
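The same idea in code: a sketch (Python/NumPy assumed) that requantizes an 8-bit image to a smaller number of levels, mapping each intensity to the representative value of its interval:

import numpy as np

def quantize(img, levels):
    """Requantize an 8-bit image to `levels` grey levels.

    Each intensity is replaced by the midpoint of its interval, so fewer
    levels means coarser, more posterized shading."""
    step = 256 // levels
    return ((img // step) * step + step // 2).astype(np.uint8)

ramp = np.linspace(0, 255, 256).astype(np.uint8).reshape(16, 16)
print(np.unique(quantize(ramp, 4)))   # [ 32  96 160 224]: 4 levels = 2 bpp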
Key Differences Between Image Sampling and Quantization

Feature | Image Sampling | Image Quantization
Definition | Conversion of a continuous image into a discrete set of points by selecting specific pixel positions | Conversion of continuous pixel intensity values into discrete levels
Process Focus | Spatial information (locations of pixels) | Intensity values (brightness or colour levels)
Outcome | A grid of pixel values representing spatial resolution | A set of discrete intensity values for each pixel
Resolution Aspect | Affects spatial resolution (detail and clarity of the image) | Affects colour/grey-level resolution (number of shades or colours)
Determined By | Sampling rate (number of pixels sampled) | Quantization levels (number of intensity levels)
Higher Value Effects | Higher sampling rate captures more spatial detail | More quantization levels represent finer intensity variations
Example | High megapixel count in cameras for detailed images | 8-bit colour depth for more colour variations
Application Impact | Crucial for applications needing high spatial detail, like medical imaging | Crucial for applications needing high colour fidelity, like graphic design
Storage Requirement | Increases with higher sampling rates | Increases with more quantization levels
Typical Values | Measured in pixels per inch (PPI) or dots per inch (DPI) | Measured in bits per pixel (bpp)

Advantages and Disadvantages of Image Sampling and Quantization


Image Sampling:
Advantages:
1. Data Reduction: Converts a continuous signal into a finite set of
points, making storage and processing more manageable.
2. Compatibility: Sampled images are easily processed by digital
systems and algorithms.
3. Resolution Control: Allows for control over image resolution by
adjusting the sampling rate.
Disadvantages:
1. Information Loss: Inevitably loses some information by
approximating a continuous signal.
2. Aliasing: Can cause distortions and artifacts if the sampling rate
is too low.
3. Computationally Intensive: High-resolution sampling demands
significant computational resources and storage space.
Image Quantization:
Advantages:
1. Data Compression: Reduces the amount of data by limiting the
number of possible values for each pixel.
2. Simplified Processing: Makes image processing operations
simpler and faster with fewer distinct values.
3. Noise Reduction: Helps reduce the impact of noise by mapping
small variations in intensity to the same value.
Disadvantages:
1. Loss of Detail: Reduces the range of colors or intensity levels,
leading to a loss of fine detail and potential color banding.
2. Quantization Error: Introduces differences between the original
and quantized values, which can become noticeable.
3. Reduced Image Quality: Overly aggressive quantization can
significantly degrade image quality, making the image appear
blocky or posterized.
Conclusion
Understanding the differences and interplay between image
sampling and quantization is crucial for anyone working with digital
images. While sampling determines how finely the image is divided
spatially, quantization determines how precisely the intensity values
are represented. Together, these processes enable the creation of
digital images that can be stored, manipulated, and displayed
effectively.

Last two topics are pending….
