Chapter 5 DIP
Color models: RGB, CMY, YIQ, etc.
1 The RGB Model
In the RGB model, an image consists of three independent
image planes, one in each of the primary colors: red, green and
blue.
A particular color is specified by the amount of each
primary component present.
The below figure shows the geometry of the RGB color model
for specifying colors using a Cartesian coordinate system. The
greyscale spectrum, i.e. those colors made from equal amounts
of each primary, lies on the line joining the black and white
vertices.
Figure 2: The RGB color cube. The greyscale spectrum lies on the line joining the
black and white vertices.
This is an additive model, i.e. the colors present in the light add to
form new colors, and is appropriate for the mixing of colored light
for example.
The image on the left of figure 3 shows the additive mixing of red,
green and blue primaries to form the three secondary colors yellow
(red + green), cyan (blue + green) and magenta (red + blue), and
white (red + green + blue).
The RGB model is used for color monitors and most video cameras.
2 The CMY Model
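The CMY (cyan, magenta, yellow) model is the subtractive complement of RGB: with components normalized to [0, 1], each CMY value is one minus the corresponding RGB value. A minimal NumPy sketch of the conversion:

```python
import numpy as np

def rgb_to_cmy(rgb):
    """Convert RGB values in [0, 1] to CMY.

    CMY is the subtractive complement of RGB: each CMY
    component is one minus the corresponding RGB component.
    """
    return 1.0 - np.asarray(rgb, dtype=float)

# Pure red in RGB becomes zero cyan, full magenta, full yellow.
red = np.array([1.0, 0.0, 0.0])
print(rgb_to_cmy(red))  # [0. 1. 1.]
```

The same function works elementwise on a whole H x W x 3 image array, since the operation is per-channel.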
3. The YIQ Model
The YIQ (luminance, in-phase, quadrature) model is a recoding
of RGB for color television, and is a very important model for
color image processing.
The luminance (Y) component contains all the information
required for black and white television, and captures our
perception of the relative brightness of particular colors.
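The RGB-to-YIQ conversion is a linear transform; the sketch below uses the approximate NTSC coefficient matrix, in which the Y row is the familiar luminance weighting (green contributes most to perceived brightness):

```python
import numpy as np

# Approximate NTSC RGB -> YIQ matrix; exact published values vary
# slightly by source, so treat these coefficients as illustrative.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y (luminance)
    [0.596, -0.274, -0.322],   # I (in-phase chrominance)
    [0.211, -0.523,  0.312],   # Q (quadrature chrominance)
])

def rgb_to_yiq(rgb):
    """Convert RGB values in [0, 1] to YIQ via a linear transform."""
    return np.asarray(rgb, dtype=float) @ RGB_TO_YIQ.T

# White has full luminance and (near) zero chrominance: ~ [1, 0, 0].
print(rgb_to_yiq([1.0, 1.0, 1.0]))
```

Keeping Y and discarding I and Q is exactly what black-and-white television does with this signal.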
Color Transformations
Color transforms are fundamental operations in digital image
processing used to manipulate the color information in an
image.
These transforms allow us to change an image's color space,
alter color balance, adjust brightness and contrast, and perform
a variety of other color-based operations.
There are several color models used in digital image
processing, each with its unique characteristics and
applications.
Understanding color transforms in digital image processing is
crucial for many image processing applications, including
color correction, color enhancement, and image compression.
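One of the simplest color transforms mentioned above, brightness/contrast adjustment, can be sketched as a linear point operation (a hypothetical helper, not a fixed library API):

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.0, beta=0.0):
    """Linear point transform: alpha scales contrast, beta shifts brightness.

    img is assumed to be a uint8 image; the result is clipped
    back to the valid [0, 255] range before converting back.
    """
    out = alpha * img.astype(float) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

gray = np.full((2, 2), 100, dtype=np.uint8)
print(adjust_brightness_contrast(gray, alpha=1.5, beta=20))  # all 170
```

Applied per channel to a color image, the same transform alters color balance when different alpha/beta values are used for each channel.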
Morphological image processing and Image
segmentation
Morphological Image Processing:
Morphological operations are simple yet powerful techniques
based on the shape of an image. Two fundamental
morphological operators are:
Erosion: shrinks the white (foreground) regions in an image
by eroding away their boundaries.
Dilation: expands the white regions by growing their
boundaries outward.
These operations are used to modify the shapes and sizes of
objects in an image and are particularly useful in applications
such as image segmentation, feature extraction, and noise
reduction.
Erosion and dilation are binary operations that are performed
on a binary image, which is an image consisting of only black
and white pixels.
In these operations, a structuring element, which is a small
binary image, is used to modify the shape of the input image.
The following figure shows the original, eroded, and dilated
images, respectively.
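Binary erosion and dilation can be sketched directly in NumPy by shifting the image under the structuring element and combining the shifts with AND (erosion) or OR (dilation); this is a minimal illustration, not an optimized implementation:

```python
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring
    element, centered there, fits entirely inside the foreground."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.ones_like(img)
    for i in range(sh):
        for j in range(sw):
            if se[i, j]:
                out &= padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def dilate(img, se):
    """Binary dilation: a pixel becomes foreground if the structuring
    element, centered there, hits any foreground pixel."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.zeros_like(img)
    for i in range(sh):
        for j in range(sw):
            if se[i, j]:
                out |= padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True              # a 3x3 white square
se = np.ones((3, 3), dtype=bool)  # 3x3 structuring element
print(erode(img, se).sum(), dilate(img, se).sum())  # 1 25
```

Eroding the 3x3 square with a 3x3 element leaves only its center pixel, while dilating it fills the whole 5x5 image, matching the shrink/grow behavior described above.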
Image segmentation
Image segmentation is a method in which a digital image is
broken down into various subgroups called Image segments
which helps in reducing the complexity of the image to make
further processing or analysis of the image simpler.
In simple terms, segmentation means assigning labels to pixels.
All picture elements or pixels belonging to the same category
have a common label assigned to them.
Image segmentation involves dividing an image into
meaningful regions or segments.
For example, consider a problem where a picture must be
provided as input for object detection. Rather than processing
the whole image, the detector can be given a region selected
by a segmentation algorithm.
This prevents the detector from processing the whole image,
thereby reducing inference time.
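The simplest way to assign labels to pixels is global thresholding, which splits the image into foreground and background segments; a minimal sketch:

```python
import numpy as np

def threshold_segment(img, t):
    """Simplest segmentation: label each pixel by comparing it to a
    global threshold t (1 = foreground, 0 = background)."""
    return (np.asarray(img) > t).astype(np.uint8)

img = np.array([[ 10,  50, 200],
                [220,  30, 180]])
print(threshold_segment(img, 128))
# [[0 0 1]
#  [1 0 1]]
```

More sophisticated segmentation methods (region growing, clustering, learned models) refine this idea, but all of them ultimately produce a label per pixel.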
Types of machine learning
There are two main types of machine learning:
supervised and unsupervised learning.
In supervised learning, the machine is trained on a set of labeled
data, which means that the input data is paired with the desired
output.
The machine then learns to predict the output for new input data.
In unsupervised learning, the machine is trained on a set of
unlabeled data, which means that the input data is not paired with
the desired output.
The machine then learns to find patterns and relationships in the
data.
Clustering algorithms group similar data points together based on
their inherent characteristics.
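Clustering as described above can be illustrated with a minimal k-means sketch, which alternates between assigning points to their nearest centroid and recomputing each centroid as its cluster mean (an illustrative toy, not a production implementation):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternate nearest-centroid assignment and
    centroid update until the fixed iteration budget runs out."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # distance from every point to every centroid
        d = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs end up in different clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, _ = kmeans(pts, 2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
# True True True
```

No labels were supplied: the grouping emerges purely from the geometry of the data, which is what makes clustering an unsupervised method.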
Deep Learning
Deep learning is a subset of machine learning, which is
essentially a neural network with layers of interconnected nodes
that process and transform data.
These neural networks are inspired by the structure and function
of the human brain’s biological neurons, and they are designed to
learn from large amounts of data.
It is capable of learning complex patterns and relationships
within data.
Convolutional neural networks (CNNs) are examples of deep
learning algorithms, specifically designed for image
processing and object detection.
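The core operation a CNN layer applies is a sliding-window weighted sum; a bare-bones sketch of this "valid"-mode 2D convolution (implemented, as deep learning libraries do, as cross-correlation):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    and take the weighted sum at every position."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference kernel responds at the vertical edge
# in this image: each output row is [0. 1. 0.].
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1.0, 1.0]])
print(conv2d(img, edge))
```

In a CNN the kernel weights are not hand-chosen like this edge detector; they are learned from data, which is what lets the network discover useful image features on its own.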
Machine learning and deep learning are both subsets of artificial
intelligence, but there are many similarities and differences
between them:
Machine Learning:
- Applies statistical algorithms to learn the hidden patterns and relationships in the dataset.
- Takes less time to train the model.

Deep Learning:
- Uses artificial neural network architecture to learn the hidden patterns and relationships in the dataset.
- Takes more time to train the model.