CV Unit 3

The document discusses image enhancement and filtering techniques, including various types of filters such as blur, sharpen, and noise reduction, and their applications in image processing, video processing, and medical imaging. It also covers color spaces, particularly the RGB color model, and techniques for adjusting brightness, contrast, and color in images. Additionally, it highlights the importance of filtering in improving image quality, reducing file size, and enhancing visual appeal.

UNIT III

Image Enhancement and Filtering, Color Spaces, Color Transforms, Histogram Equalization, Advanced
Histogram Equalization (CLAHE), Color Adjustment using Curves, Image Filtering: Introduction to
Image Filtering, what is Convolution, Image Smoothing: - Box Blur, Gaussian Blur, Median Blur
IMAGE ENHANCEMENT AND FILTERING
Filtering

Filtering is the process of manipulating an image to alter its appearance. This can be done by
applying special effects, such as blurring, sharpening, or color correction. Filters are used to
modify the appearance of an image, making it look more interesting or realistic. They can also
be used to create a certain effect, such as making an image look brighter, more vivid, or more
detailed.
Types of Filters

There are several types of filters that can be used to modify an image. Some of the most
common are blur, sharpen, edge detection, color correction, and noise reduction filters.
 Blur: Blur filters are used to soften the edges of an image, creating a more subtle look.
This can be used to make an image look more natural or to reduce the visibility of
distracting details.
 Sharpen: Sharpen filters are used to make an image appear sharper and clearer. This
can be used to make an image look more detailed or to make the colors more vibrant.
 Edge Detection: Edge detection filters are used to accentuate the outlines of an image.
This can be used to make objects stand out more or to create a more dramatic effect.
 Color Correction: Color correction filters are used to adjust the hue, saturation, and
brightness of an image. This can be used to make an image look more realistic, or to
create a specific color palette.
 Noise Reduction: Noise reduction filters are used to remove unwanted noise from an
image. This can be used to make an image look more natural or to reduce the visibility of
digital artifacts.
How to Use Filters?
Filters can be applied to an image using a variety of software programs. Many programs, such
as Adobe Photoshop and GIMP, include a range of filters that can be used to modify an image.
Some programs also have specialized filters for specific tasks, such as noise reduction or color
correction. When using filters, it is important to be mindful of the effects they can have on an
image. For example, blur filters can make an image look softer, but can also reduce the visibility
of important details. Similarly, sharpen filters can make an image look sharper, but can also
make the colors look overly saturated. It is important to experiment with different filters to find
the right balance of effects.
Applications of Filtering
 Image processing: Filtering techniques can be used to enhance the clarity and detail of
an image. For instance, a low-pass filter can be used to reduce noise, while a high-pass
filter can be used to sharpen the image.
 Video processing: Filtering techniques can be used to reduce noise and enhance the
quality of a video. For instance, a low-pass filter can be used to reduce background
noise, while a high-pass filter can be used to sharpen the video.
 Medical imaging: Filtering techniques can be used to enhance the clarity and detail of
medical images. For instance, a low-pass filter can be used to reduce noise and a high-
pass filter can be used to sharpen the image.
 3D rendering: Filtering techniques can be used to enhance the clarity and detail of 3D
renderings. For instance, a low-pass filter can be used to reduce noise, while a high-
pass filter can be used to sharpen the image.
 Image Enhancement: Filtering is used to enhance the visual appearance of digital
images. It can be used to improve contrast, sharpness, brightness, and color balance. It
can also be used to reduce noise and artifacts.
 Image Restoration: Filtering is also used for image restoration, which is the process of
restoring a degraded image to its original state. This is often done by removing noise,
sharpening the image, and correcting the color balance.
 Image Compression: Filtering can be used to reduce the size of an image without
compromising on its quality. This is achieved by eliminating redundant information in the
image.
 Image Segmentation: Filtering can also be used for image segmentation, which is the
process of separating an image into regions of interest. This is often done by applying a
filter to the image to identify regions with specific characteristics.
Advantages of Filtering
 Improved clarity and detail: Filtering techniques can be used to improve the clarity and
detail of an image. Filtering can reduce noise and sharpen the image, resulting in a more
detailed and accurate picture.
 Enhanced contrast and definition: Filtering techniques can be used to add contrast
and definition to an image. Edge detectors can be used to detect edges and enhance
them, resulting in a more visually appealing image.
 Reduced file size: Filtering techniques can be used to reduce the size of an image. By
reducing the amount of noise in an image, the file size can be significantly reduced. This
can be beneficial for applications where file size is a concern.
 Improved Image Quality: Filtering can improve the visual quality of digital images by
enhancing certain features, reducing noise, and eliminating undesired artifacts.
 Increased Efficiency: Filtering can increase the efficiency of computer graphics
operations, as it can reduce the amount of time and resources needed to process an
image.
 Increased Accuracy: Filtering can also increase the accuracy of computer graphics
operations by eliminating noise and artifacts that can lead to errors.
Conclusion
Filtering is an important part of computer graphics. It is used to modify images, adding effects to
enhance their aesthetic appeal. There are several types of filters that can be used to modify an
image, such as blur, sharpening, edge detection, color correction, and noise reduction. Filters
can be applied to an image using a variety of software programs, but it is important to be
mindful of the effects they can have on an image. With the right filters, an image can be
transformed into something visually stunning.
IMAGE ENHANCEMENT
Image enhancement is the process of improving the quality and appearance of an image. It can
be used to correct flaws or defects in an image, or to simply make an image more visually
appealing. Image enhancement techniques can be applied to a wide range of images, including
photographs, scans, and digital images. Some common goals of image enhancement include
increasing contrast, sharpness, and colorfulness; reducing noise and blur; and correcting
distortion and other defects. Image enhancement techniques can be applied manually using
image editing software, or automatically using algorithms and computer programs such as
OpenCV.
Adjusting brightness and contrast
Adjusting the brightness and contrast of an image can significantly affect its visual appeal and
effectiveness. It can also help to correct defects or flaws in the image and make it easier to see
details. Finding the right balance of brightness and contrast is important for creating an
attractive and effective image.
There are several ways to adjust the brightness and contrast of an image using OpenCV and
Python. One common method is to use the cv2.convertScaleAbs() function, which adjusts the
contrast by scaling each pixel value by a factor alpha and the brightness by adding a scalar
beta; cv2.addWeighted() can produce the same effect by blending the image with a second
(for example, blank) image.
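A minimal NumPy sketch of the underlying arithmetic (an illustration only, not OpenCV's actual implementation; the function name and sample values are arbitrary):

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.0, beta=0):
    """Scale pixel values by alpha (contrast) and add beta (brightness),
    clipping to the valid 8-bit range."""
    out = img.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)

# A tiny 2x2 grayscale "image"
img = np.array([[100, 150], [200, 250]], dtype=np.uint8)
print(adjust_brightness_contrast(img, alpha=1.2, beta=10))
```

In OpenCV itself the equivalent one-liner is cv2.convertScaleAbs(img, alpha=1.2, beta=10); note the clipping step, without which bright pixels would wrap around in 8-bit arithmetic.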
Sharpening images
Sharpening is the process of enhancing the edges and fine details in an image to make it
appear sharper and more defined. It is important because it can help to bring out the details and
features in an image, making it more visually appealing and easier to understand. Sharpening
can be used to correct blur or softness in an image and can be applied using a variety of
techniques.
One common method for sharpening images using OpenCV and Python is to use the
cv2.filter2D() function, which convolves the image with a kernel. The kernel can be designed to
enhance the edges in the image, resulting in a sharper image.
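For illustration, the sharpening operation can be sketched in plain NumPy with the classic 3x3 kernel (the kernel values and the naive valid-region loop are illustrative choices, not OpenCV's internals):

```python
import numpy as np

# Classic 3x3 sharpening kernel: center weight 5, cross neighbours -1.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float32)

def convolve2d(img, kernel):
    """Naive valid-region 2D correlation (what cv2.filter2D computes;
    identical to convolution here since the kernel is symmetric)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y+kh, x:x+kw] * kernel)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((5, 5), 100, dtype=np.float32)
img[2, 2] = 150  # a single bright detail
print(convolve2d(img, SHARPEN))
```

In flat regions the kernel weights sum to 1, so the output is unchanged; around the bright pixel the differences are amplified, which is exactly the edge-enhancing effect described above.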
Removing noise from images
Noise reduction is the process of removing or reducing unwanted noise or artifacts from an
image. It is important because it can improve the visual quality and clarity of the image and
make it easier to analyze or process using computer algorithms. Noise can be introduced into
an image due to a variety of factors and can degrade its quality. There are several techniques
for reducing noise, including using filters such as the median filter or the Gaussian filter. It is
important to apply noise reduction judiciously to avoid blur or loss of detail in the image.
One common method for removing noise from images using OpenCV and Python is to use a
median filter. The median filter works by replacing each pixel in the image with the median value
of a set of neighboring pixels. This can help to smooth out noise and reduce artifacts in the
image.
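The idea can be sketched in a few lines of NumPy (a naive 3x3 version for illustration; cv2.medianBlur(img, 3) is the optimized equivalent):

```python
import numpy as np

def median_filter3(img):
    """Replace each interior pixel with the median of its 3x3
    neighbourhood; borders are left untouched for simplicity."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255  # salt noise: one corrupted pixel
print(median_filter3(img)[2, 2])  # the outlier is replaced by 100
```

Because the median ignores extreme values, an isolated noisy pixel is removed entirely, while a genuine edge (where many neighbours share the new value) survives, which is why median filtering is preferred for salt-and-pepper noise.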
Enhancing color in images
Color enhancement is the process of adjusting the colors in an image to make them more vibrant, balanced, or
natural. It can be used to correct color defects or problems in an image or to simply make an
image more appealing and aesthetically pleasing. Color enhancement is important because it
can significantly affect the visual impact and effectiveness of an image. It can also be useful for
correcting color errors or problems in an image and can make it easier to see details and
features in the image. There are several techniques for enhancing the colors in an image,
including adjusting the color balance, adjusting the saturation, and adjusting the hue.
There are several ways to enhance the colors in an image using OpenCV and Python. One
common method is to use the cv2.cvtColor() function, which allows you to convert the image
from one color space to another. This can be useful for adjusting the color balance or saturation
of the image.
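A single-pixel sketch using Python's standard colorsys module shows the round-trip involved (the pixel values and boost factor are arbitrary examples; cv2.cvtColor performs the same conversion on whole images):

```python
import colorsys

def boost_saturation(rgb, factor=1.5):
    """Convert an RGB pixel (0-255 per channel) to HSV, scale the
    saturation, and convert back to RGB."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(1.0, s * factor)  # cap saturation at fully saturated
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))

print(boost_saturation((200, 120, 120), factor=1.5))  # a muted red, made more vivid
```

Working in HSV means the hue and brightness of the pixel are untouched; only its colorfulness changes, which is usually the desired behaviour for color enhancement.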
Image resizing and scaling
Image resizing and scaling involve adjusting the dimensions and size of an image. Both are
important for meeting specific requirements or context, such as fitting a specific size or aspect
ratio or reducing the file size. There are several techniques, including interpolation methods like
the nearest neighbor, bilinear, and bicubic interpolation. It is important to choose a method that
preserves image quality and clarity.
You can use the cv2.resize() function in OpenCV to resize and scale images. The cv2.resize()
function takes the following arguments:
 src: The input image.
 dsize: The size of the output image, specified as a tuple (width, height).
 fx: The scaling factor along the x-axis.
 fy: The scaling factor along the y-axis.
 interpolation: The interpolation method to use.
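A nearest-neighbour resize, the simplest of the interpolation methods listed above, can be sketched in NumPy as follows (the index arithmetic is one simple choice among several):

```python
import numpy as np

def resize_nearest(img, new_w, new_h):
    """Nearest-neighbour resize: each output pixel samples the closest
    source pixel (the idea behind cv2.INTER_NEAREST)."""
    h, w = img.shape[:2]
    ys = np.arange(new_h) * h // new_h  # source row for each output row
    xs = np.arange(new_w) * w // new_w  # source column for each output column
    return img[ys[:, None], xs]

img = np.array([[1, 2], [3, 4]], dtype=np.uint8)
print(resize_nearest(img, 4, 4))
```

With OpenCV itself, cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_NEAREST) performs the same upscaling; bilinear and bicubic interpolation replace the single-pixel lookup with weighted averages of neighbouring pixels, trading speed for smoother results.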

Equalizing histograms –
Histogram equalization is a technique used to adjust the contrast of an image by
spreading out the intensity values of the pixels in the image. It is important because it
can improve the contrast and clarity of an image, making it easier to see details and
features that might be difficult to see in an image with low contrast. There are several
different methods for performing histogram equalization, including global histogram
equalization and local histogram equalization. Global histogram equalization adjusts the
contrast of the entire image, while local histogram equalization adjusts the contrast in
small, localized areas of the image.
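Global histogram equalization can be sketched directly from the cumulative histogram (a NumPy illustration of the mapping that cv2.equalizeHist applies):

```python
import numpy as np

def equalize_hist(img):
    """Map each grey level through the normalized cumulative histogram,
    spreading occupied levels across the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A low-contrast image: all values squeezed into 100-103
img = np.array([[100, 100, 101], [101, 102, 103]], dtype=np.uint8)
print(equalize_hist(img))
```

The four grey levels 100-103 are stretched out to span 0-255, which is the contrast improvement described above. For the local variant, OpenCV provides cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)), which equalizes each tile separately and clips the histogram to limit noise amplification.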
Other Techniques
Image enhancement is a wide field that involves adjusting images to improve their visual quality
or to make them more suitable for further analysis. There are many techniques for enhancing
images, such as:
 Morphological transformations: These are operations based on the image shape.
They are typically applied to binary images, but can also be used with grayscale images.
Examples include dilation, erosion, opening, and closing. These operations can be used to
enhance or modify the shape or structure of objects in an image. To apply morphological
operations with OpenCV and Python, you can use functions such as erode, dilate, and
morphologyEx.
 Edge detection: OpenCV provides several functions for performing edge detection,
such as Canny(), Sobel(), and Laplacian(). These functions can be used to identify
edges in an image by looking for sharp changes in pixel intensity.
 Color correction: OpenCV provides several functions for adjusting the colors in an
image, such as cvtColor() and inRange(). These functions can be used to perform tasks
such as color balance, color grading, and white balancing.
 Image gradients: OpenCV provides several functions for computing image gradients,
such as Scharr(), Sobel(), and Laplacian(). These functions can be used to highlight
changes in pixel intensity in an image and can be useful for tasks such as edge
detection and image segmentation.
 Image cropping: Cropping techniques can be used to remove unwanted areas from an
image. Because OpenCV images are NumPy arrays, cropping is simply array slicing:
take the region of interest with cropped = img[y1:y2, x1:x2].
 Image rotation: Rotation techniques can be used to change the orientation of an image.
To rotate an image with OpenCV, you can use the warpAffine function with a rotation
matrix.
 Image blending: Blending techniques can be used to combine two or more images
together. To blend images with OpenCV and Python, you can use the addWeighted
function to combine the images using a weighted average.
 Image thresholding: Thresholding techniques can be used to convert an image to black
and white by setting a threshold value for pixel intensity. To apply thresholding, you can
use the threshold function.
 Image deblurring: Deblurring techniques can be used to remove blur from an image
caused by camera shake, out-of-focus subjects, or other factors. OpenCV has no built-in
Wiener deblurring function; a common approach is to apply a Wiener filter (for example,
scipy.signal.wiener) or to perform deconvolution in the frequency domain.
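As one concrete example from the list above, binary thresholding reduces to an element-wise comparison (a NumPy sketch of what cv2.threshold does with cv2.THRESH_BINARY):

```python
import numpy as np

def threshold_binary(img, thresh=127, maxval=255):
    """Pixels strictly above thresh become maxval, the rest become 0."""
    return np.where(img > thresh, maxval, 0).astype(np.uint8)

img = np.array([[10, 127, 128, 250]], dtype=np.uint8)
print(threshold_binary(img))  # [[  0   0 255 255]]
```

Note that the comparison is strict: a pixel exactly equal to the threshold maps to 0, matching OpenCV's convention.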
COLOR SPACES
Color spaces are a fundamental concept in image processing and computer vision that play
a crucial role in representing colors. They are used to define a standardized method of
representing colors in digital images, allowing for efficient processing and analysis. RGB is
the most widely used color space in digital imaging, but others, such as CMYK, HSV, and
YUV, are also common. Each color space has its advantages and disadvantages, and
they are selected based on the specific requirements of the application. Color spaces are
used for various tasks, including color correction, image analysis, and computer
vision, making them a critical aspect of image processing.
Introduction
Color spaces are a crucial aspect of image processing that provides a standardized method
for representing colors in digital images. In simple terms, a color space is a mathematical
model that represents colors as tuples of numbers. The most common color space used in
digital imaging is RGB, which represents colors using red, green, and blue components.
Other commonly used color spaces include CMYK, which is primarily used in printing, and
HSV, which separates color information into hue, saturation, and value components. The
YUV color space is commonly used in video processing, separating color information into
luminance (Y) and chrominance (U and V) components. Color spaces are used for various
image processing tasks, such as color correction, image analysis, and computer vision.
RGB Colour Space
RGB (Red Green Blue) is the most commonly used color space in digital imaging. It
represents colors using red, green, and blue components and can represent a vast range of
colors.
Explanation of the RGB color model
 The RGB model is an additive color model, meaning that the primary colors are added
together to create other colors. When all three primary colors are added together in
equal amounts, they create white. Conversely, when all three primary colors are absent,
they create black.
 Each color in the RGB color model can be represented as a tuple of three values, with
each value ranging from 0 to 255. The first value represents the intensity of red, the
second value represents the intensity of green, and the third value represents the
intensity of blue.
 The RGB color model is widely used in digital imaging because it is compatible with
many display devices such as monitors and televisions. It can represent a vast range of
colors, making it suitable for applications such as web design and digital photography.
Properties and Characteristics of the RGB Color Space
The RGB (Red Green Blue) color space has several properties and characteristics that
make it suitable for various applications in image processing. Some of the key properties
and characteristics of the RGB color space include:
 Additive Color Model: The RGB color space is based on the additive color model,
where primary colors are combined to produce other colors. This makes it ideal for
display applications such as computer monitors and televisions.
 Large Color Gamut: The RGB color space has a large color gamut, which means it can
represent a wide range of colors.
 Three Components: The RGB color space uses three components - red, green, and
blue - to represent colors. This makes it easy to work with in digital imaging and allows
for precise control of the color.
 Device-Dependent: The RGB color space is device-dependent, meaning that the way
colors are displayed may vary depending on the display device. This can cause issues
with color accuracy and consistency.
Techniques for Converting Between RGB and Other Color Spaces
There are several techniques for converting between RGB (Red Green Blue) and other color
spaces in image processing. Some of the common techniques include:
 RGB to CMYK Conversion: To convert an RGB image to a CMYK (Cyan Magenta
Yellow Black) image, the RGB color values are first converted to a device-independent
color space such as Lab, and then to CMYK using a color management system.
 RGB to HSV Conversion: To convert an RGB image to an HSV (Hue Saturation Value)
image, the RGB color values are first normalized, and then the hue, saturation, and
value components are calculated based on the normalized RGB values.
 RGB to YUV Conversion: To convert an RGB image to a YUV (Luminance
Chrominance) image, the RGB color values are first normalized, and then the Y
(luminance) and U and V (chrominance) components are calculated based on the
normalized RGB values.
 RGB to XYZ Conversion: To convert an RGB image to an XYZ (CIE 1931 XYZ) image,
the RGB color values are first normalized, and then the X, Y, and Z components are
calculated based on the normalized RGB values.
It is important to choose the appropriate technique based on the specific requirements of the
task and the characteristics of the color spaces being used.
HSV Color Space
HSV (Hue Saturation Value) is a color space that is commonly used in image processing
and computer graphics. The HSV color space is based on the RGB (Red Green Blue) color
model, but it represents color information differently.
Explanation of the HSV Color Model
The HSV (Hue Saturation Value) color model is a cylindrical color space that represents
colors based on three components: hue, saturation, and value.
 Hue: This component represents the pure color without any white or black added to it. It
is measured in degrees from 0 to 360, with red at 0 degrees, green at 120 degrees, and
blue at 240 degrees. The hue component can be thought of as a circular color wheel
where the hues are arranged in a particular order.
 Saturation: This component represents the intensity or purity of the color. A
fully saturated color has no white or black added to it, while a desaturated color has
more white or black added to it. Saturation is measured as a percentage from 0%
(completely desaturated) to 100% (fully saturated).
 Value: This component represents the brightness or lightness of the color. Value is
measured as a percentage from 0% (completely black) to 100% (fully bright).
Converting between the RGB and HSV color spaces is a common operation in image
processing, and there are various techniques for doing so.
Properties and characteristics of the HSV color space
The HSV (Hue Saturation Value) color space has several properties and characteristics that
make it useful in image processing and computer graphics applications:
 Intuitive color representation: The HSV color space represents colors more intuitively
compared to the RGB color space. The hue component represents the pure color, while
the saturation and value components represent the intensity and brightness of the color,
respectively.
 Separation of color information: The HSV color space separates the color information
into three distinct components, which makes it easier to manipulate and adjust the colors
in an image.
 Useful for color-based object detection: The hue component of the HSV color space
is particularly useful for color-based object detection. This is because it allows us to
define a range of hues that correspond to a specific color, which can be used to isolate
objects of that color in an image.
 Easy to convert to and from RGB: While the HSV color space is based on the RGB
color model, it is relatively easy to convert between the two color spaces using standard
conversion formulas.
 Cyclic hue values: The hue component of the HSV color space wraps around at 360
degrees, so reds straddle the 0/360 boundary, and hue is undefined for achromatic
(gray) pixels. Both properties can complicate hue-based processing.
Overall, the HSV color space is a useful tool for representing and manipulating color
information in image processing and computer graphics applications.
Techniques for converting between HSV and other color spaces
Converting between the HSV (Hue Saturation Value) color space and other color spaces is a
common operation in image processing, and there are several techniques for doing so.
 One common technique for converting from RGB to HSV involves first normalizing the
RGB values to the range [0, 1]. Then, the maximum and minimum of the three RGB
components are found, and the saturation is calculated as the difference between the
maximum and minimum divided by the maximum. The value is calculated as the
maximum of the RGB components, and the hue is calculated based on the maximum
RGB component and the difference between the other two components.
 Converting from HSV to RGB can be done by first calculating the chroma (i.e., the
maximum color intensity) based on the saturation and value components. Then, the hue
is used to determine the position of the color on the color wheel and the red, green, and
blue components are calculated based on the hue and chroma.
 Another technique for converting between the HSV color space and other color spaces
involves using lookup tables. These tables can be pre-calculated and stored in memory,
allowing for fast and efficient conversion between color spaces.
However, regardless of the technique used, it is important to ensure that the conversion is
accurate and that the color information is preserved as much as possible.
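The RGB-to-HSV procedure described above can be written out directly (hue here is in degrees; note that for 8-bit images OpenCV's cv2.cvtColor stores hue as H/2, in the range 0-179, to fit in a byte):

```python
def rgb_to_hsv(r, g, b):
    """RGB (0-255 per channel) to HSV using the max/min formulation.
    Returns hue in degrees [0, 360), saturation and value in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    diff = mx - mn
    v = mx                                  # value: the brightest component
    s = 0.0 if mx == 0 else diff / mx       # saturation: spread over brightness
    if diff == 0:
        h = 0.0                             # achromatic: hue is undefined, use 0
    elif mx == r:
        h = (60 * ((g - b) / diff)) % 360
    elif mx == g:
        h = 60 * ((b - r) / diff) + 120
    else:
        h = 60 * ((r - g) / diff) + 240
    return h, s, v

print(rgb_to_hsv(0, 0, 255))  # pure blue -> (240.0, 1.0, 1.0)
```

The three branches place the hue in the correct 120-degree sector of the color wheel depending on which channel dominates, matching the red-at-0, green-at-120, blue-at-240 layout described earlier.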
YUV Color Space
The YUV color space is a color model used in video and image processing applications. It
separates the color information into three components: Y (luma), U (chrominance blue), and
V (chrominance red). The Y component represents the brightness or luminance of the
image, while the U and V components represent the color information.
Explanation of the YUV color model
 The Y component represents the brightness or luminance of the image and is typically
represented as a grayscale image. The U and V components represent the color
information and are represented as color difference signals. The U component
represents the difference between the blue component and the luma component, while
the V component represents the difference between the red component and the luma
component.
 The separation of the color information into these three components allows for efficient
storage and transmission of video data, as the luminance information is often more
important than the color information. In addition, the YUV color model can be used for
video compression applications, as it allows for separate compression of the luminance
and chrominance components.
 The YUV color model is often used in conjunction with other color spaces, such as RGB
or HSV. Various techniques exist for converting between the YUV color space and other
color spaces, including matrix transformations and lookup tables. These techniques aim
to preserve as much of the original color information as possible, while also ensuring
that the converted image or video is visually accurate and perceptually pleasing.
Properties and characteristics of the YUV color space
The YUV color space has several properties and characteristics that make it useful for video
and image processing applications:
 Luma and chrominance separation: The YUV color space separates the luminance
(Y) and chrominance (U and V) information, which allows for more efficient compression
and processing of video data.
 Grayscale representation: The Y component represents the grayscale or black-and-
white image, which can be used in applications where only the brightness information is
needed.
 Color difference representation: The U and V components represent the color
difference signals, which can be used to reconstruct the full-color image.
 Independent color channels: The U and V components are independent of each other,
which means that changes to one channel do not affect the other channels.
 Compatibility with other color spaces: The YUV color space is often used in
conjunction with other color spaces, such as RGB or HSV, and conversion techniques
exist to convert between them.
 Perceptually weighted luma: The Y component is computed with weights that reflect
the human eye's differing sensitivity to red, green, and blue, although YUV is not
perceptually uniform in the strict sense that a color space like CIELAB is.
Techniques for converting between YUV and other color spaces
There are several techniques for converting between the YUV color space and other color
spaces, such as RGB or HSV. Some of these techniques include:
 Matrix transformations: This involves using a matrix to convert between the YUV and
RGB color spaces. The transformation matrix is typically pre-calculated and stored for
use in conversion operations.
 Lookup tables: This involves using a table to look up the corresponding RGB or YUV
values for a given input value. The table is typically pre-calculated and stored for use in
conversion operations.
 Color space conversion software: There are various software tools available that can
perform color space conversions between YUV and other color spaces. These tools
typically use algorithms to perform the conversion and may offer various options for
customizing the conversion process.
 Hardware-based conversion: Some video processing hardware may include dedicated
circuits for performing color space conversions in real time. These circuits can provide
fast and efficient conversion between YUV and other color spaces.
 Hybrid methods: Some conversion techniques may use a combination of the above
methods to achieve the best possible results in terms of accuracy, speed, and efficiency.
Other Color Spaces
CMYK Color Space

 The CMYK color space is commonly used in image processing applications that are
focused on printing, such as graphic design and commercial printing. CMYK stands for
cyan, magenta, yellow, and key (black), and is a subtractive color model that is based on
the principle of subtracting light from white paper. In CMYK, colors are created by
subtracting light from the white paper, whereas in RGB, colors are created by adding
light to black.
 One advantage of CMYK is that it models how inks mix on paper, which makes it the
natural choice for printing applications. However, one disadvantage is that its color
gamut is smaller than that of RGB, which can result in color accuracy issues when
converting between the two color spaces.
Lab Color Space
 The Lab color space, also known as CIELAB, is a device-independent color space that is
designed to be perceptually uniform. It separates the lightness (L) from the a and b
chrominance components, which represent the green-red and blue-yellow color
differences. This makes it a useful color space for image processing applications that
require accurate and consistent color representation across different devices and
viewing conditions.
 One advantage of the Lab color space is that it is more perceptually uniform than other
color spaces like RGB or CMYK, meaning that equal distances in Lab color space
correspond to roughly equal differences in perceived color. This makes it easier to make
precise adjustments to color balance and contrast in images, without introducing visual
artifacts or color shifts.
HSL Color Space
 The HSL (Hue, Saturation, Lightness) color space is a color model that represents colors
using hue, saturation, and lightness values. Hue represents the actual color, saturation
represents the intensity or purity of the color, and lightness represents the perceived
brightness of the color.
 One of the key advantages of the HSL color space is that it separates the color
information into three distinct components, making it easy to manipulate and adjust
individual color characteristics. This makes it a popular choice for image processing
tasks such as color correction, color grading, and image enhancement.
Techniques for Color Space Visualization
Color space visualization is an important aspect of image processing, as it helps in
understanding and manipulating the color information in images. By using the right
techniques and tools, it is possible to gain insights into how color information is represented
in different color spaces and make more informed decisions when processing images.
Displaying images in different color spaces
 This technique involves converting an image from its original color space to a different
color space and then displaying it. By doing this, you can see how the image's
appearance changes depending on the color space it's displayed in.
Visualizing color histograms
 This technique involves plotting the distribution of colors in an image. It can help you
understand how different colors are represented in the image and how they relate to
each other. For example, you can see which colors are dominant in the image or how
different color channels contribute to the overall color of the image.
Converting color spaces in real-time
 This technique involves converting an image from one color space to another in real time
as the image is being processed. It's useful for applications that require color space
conversion, such as video processing or real-time image analysis.
Overall, these techniques help in understanding the different color spaces and how they
represent color information, which is essential for image processing and computer vision
applications.
Examples of Color Spaces in Image Processing
Color-based object detection and segmentation
 This involves using color information to detect and segment objects in an image. For
example, in medical imaging, color-based segmentation can be used to segment tumors
from surrounding tissue.
Color-based tracking of moving objects
 This involves using color information to track the movement of objects in a video
sequence. For example, in sports analysis, color-based tracking can be used to track the
movement of a ball in a game.
Color correction and adjustment of images
 This involves adjusting the color of an image to correct for color imbalances or to
achieve a specific artistic effect. For example, in photo editing, color correction can be
used to adjust the white balance or to enhance the vibrancy of certain colors.
Color-based image retrieval and indexing
 This involves using color information to retrieve or index images in a database. For
example, in image search engines, color-based indexing can be used to retrieve images
that match a specific color query.
Overall, these examples demonstrate the practical applications of color spaces in image
processing, ranging from scientific analysis to artistic expression.
COLOR TRANSFORMS
Color transforms
Color transforms are mathematical operations used to manipulate the color information in digital
images. Here are some common types of color transforms in digital image processing.
Color correction transforms (e.g., white balance correction)
Color correction transforms are used to adjust the colors in an image to match a particular
standard or desired appearance. One common example of a color correction transform is white
balance correction, which is used to correct the color cast caused by the color temperature of
the light source used to capture an image. White balance correction adjusts the color balance of
an image to ensure that whites appear white, and colors appear as they should.
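One simple, commonly used approach to white balance correction is the gray-world method, which assumes the average color of a scene should be neutral gray. A minimal NumPy sketch (illustrative, not a production implementation):

```python
import numpy as np

def gray_world_balance(img):
    """Scale each channel so its mean matches the global mean (gray-world assumption)."""
    img_f = img.astype(np.float64)
    channel_means = img_f.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means          # per-channel gain
    balanced = np.clip(np.rint(img_f * gain), 0, 255)
    return balanced.astype(np.uint8)

# An image with a strong blue cast: the blue channel is twice as bright
cast = np.full((2, 2, 3), (60, 60, 120), dtype=np.uint8)
out = gray_world_balance(cast)
print(out[0, 0])   # all three channels now equal: [80 80 80]
```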
Color space conversion transforms (e.g., RGB to HSL conversion)
Color space conversion transforms are used to convert an image from one color model to
another. For example, an RGB image can be converted to the HSL or HSV color models for
color manipulation and adjustment. Color space conversion transforms are often used in image
compression, where converting an image to a different color space can reduce its file size
without significant loss of image quality.
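For a single pixel, Python's standard-library colorsys module can illustrate RGB-to-HSL conversion (colorsys calls the model HLS and works on channel values scaled to the range [0, 1]):

```python
import colorsys

# Pure red in RGB, with channels scaled to [0, 1]
r, g, b = 1.0, 0.0, 0.0

# colorsys uses the name HLS (hue, lightness, saturation) for this model
h, l, s = colorsys.rgb_to_hls(r, g, b)
print(h, l, s)   # 0.0 0.5 1.0 -> red hue, mid lightness, full saturation

# The conversion is invertible: back to RGB (approximately (1.0, 0.0, 0.0))
print(colorsys.hls_to_rgb(h, l, s))
```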
Color enhancement transforms (e.g., contrast enhancement)
Color enhancement transforms are used to improve the visual appearance of an image by
adjusting color contrast, saturation, and other color-related properties. For example, contrast
enhancement can be used to increase the visual separation between colors in an image, while
saturation enhancement can be used to make colors appear more vivid and intense.
Color quantization transforms (e.g., reducing the number of colors in an image)
Color quantization transforms are used to reduce the number of colors in an image
by mapping the original color values to a smaller set of color values. This can be useful for
reducing the file size of an image or for simplifying its visual appearance. One common example
of a color quantization transform is dithering, which is used to simulate the appearance of
additional colors by mixing existing colors in a pattern.
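A minimal uniform-quantization sketch in NumPy follows; each channel is snapped to a few evenly spaced levels (real systems often use smarter palettes, e.g. k-means or median-cut):

```python
import numpy as np

def quantize(img, levels):
    """Map 8-bit values onto `levels` evenly spaced values per channel."""
    step = 256 // levels
    # Snap every value to the midpoint of its bucket
    return (img // step) * step + step // 2

img = np.array([[0, 13, 127, 255]], dtype=np.uint8)
print(quantize(img, 4))   # [[ 32  32  96 224]] -> at most 4 distinct values
```

With three channels this gives at most 4³ = 64 colors, which is the file-size saving quantization aims for.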
Algorithms For Color Transforms
Standard algorithms (e.g., gamma correction, histogram equalization)
Standard algorithms for color transforms are well-established mathematical techniques that
have been used in image processing for many years. Examples of standard algorithms include
gamma correction and histogram equalization. Gamma correction is used to adjust the
brightness of an image by applying a power function to the pixel values. Histogram equalization
is used to enhance the contrast of an image by adjusting the distribution of pixel values in the
image. These algorithms are widely used because they are computationally efficient and can be
applied to a wide range of images.
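Gamma correction is typically implemented as a 256-entry lookup table. A minimal NumPy sketch, using the common convention out = 255·(in/255)^(1/γ) (the γ value here is illustrative):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Build a 256-entry lookup table for out = 255 * (in/255) ** (1/gamma)."""
    inv = 1.0 / gamma
    lut = np.rint(255.0 * (np.arange(256) / 255.0) ** inv).astype(np.uint8)
    return lut[img]   # apply the power function via table lookup

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(img, 2.2))   # mid-tones brightened, endpoints unchanged
```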
Machine learning-based algorithms (e.g., deep neural networks)
Machine learning-based algorithms for color transform in digital image processing involve using
deep neural networks or other machine learning techniques to learn complex mappings
between input and output color spaces. These algorithms can be trained on large datasets of
labeled images to learn how to perform tasks such as color correction, colorization, and color
transfer. Machine learning-based algorithms can be more accurate than standard algorithms
and can be adapted to specific types of images or applications. However, they can also be more
computationally intensive and may require more specialized hardware or software to implement.
Applications of color transforms
Image editing
Color transforms are commonly used in image editing applications to adjust the color balance,
saturation, and hue of an image. Color balance refers to the adjustment of the relative
intensities of the primary colors (red, green, and blue) in an image to achieve a desired color
temperature. Saturation refers to the intensity or purity of colors in an image, and hue refers to
the color itself, such as blue, red, or green. By adjusting these parameters using color
transforms in digital image processing, image editors can improve the overall look and feel of an
image, enhance specific colors or regions, or achieve a particular artistic effect.
Image compression
Color transforms can also be used in image compression algorithms to reduce the number of
colors in an image and decrease its file size. One popular approach is to convert the image from
the RGB color space to a color space that better represents the human visual system, such as
YUV or YCbCr. This transformation separates the luminance (brightness) information from the
chrominance (color) information, allowing for more efficient compression. The chrominance
information can then be subsampled or quantized to further reduce the file size without a
significant loss of visual quality.
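As a sketch of this pipeline, the BT.601 full-range RGB-to-YCbCr matrix (a common choice in JPEG-style codecs) followed by 4:2:0-style chroma subsampling might look like this in NumPy:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = [img[..., i].astype(np.float64) for i in range(3)]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

img = np.full((4, 4, 3), (255, 0, 0), dtype=np.uint8)   # pure red
ycc = rgb_to_ycbcr(img)

# 4:2:0 chroma subsampling: keep every second chroma sample in both
# directions, quartering the chrominance data; luminance is untouched
cb_sub = ycc[::2, ::2, 1]
print(ycc.shape, cb_sub.shape)   # (4, 4, 3) (2, 2)
```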
Computer Vision
Color transforms in digital image processing are frequently used in computer vision applications
such as object detection and face recognition. For example, skin color segmentation can be
used to detect faces in an image or video by separating the skin-colored pixels from the rest of
the image. This technique is particularly effective in situations where there is a high degree of
contrast between the skin color and the background. Color transforms in digital image
processing can also be used to improve the accuracy of object recognition algorithms by
enhancing certain color features or reducing the impact of lighting variations.
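A crude color-range segmentation can be sketched with a boolean mask; the threshold values below are purely illustrative (real skin detection is usually done in HSV or YCbCr with carefully tuned ranges):

```python
import numpy as np

def color_mask(img, lower, upper):
    """Boolean mask of pixels whose channels all lie within [lower, upper]."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    return np.all((img >= lower) & (img <= upper), axis=-1)

# Tiny scene: two 'skin-like' pixels among background pixels (illustrative values)
img = np.array([[[210, 160, 130], [20, 200, 30]],
                [[0, 0, 255], [205, 170, 140]]], dtype=np.uint8)

mask = color_mask(img, lower=(180, 120, 100), upper=(255, 200, 180))
print(mask)          # True where the color range matches
print(mask.sum())    # 2 matching pixels
```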
Medical Imaging
Color transforms are used extensively in medical imaging to identify various structures and
tissues in the body, such as tumors, blood vessels, and bones. In magnetic resonance imaging
(MRI), for example, color maps can be used to represent different tissue types or to highlight
specific areas of interest. Color transforms in digital image processing can also be used in other
imaging modalities such as X-ray, computed tomography (CT), and ultrasound to enhance the
visibility of certain features or to provide a more intuitive representation of the data.
HISTOGRAM EQUALIZATION
Histogram
Histogram is a graphical representation of the intensity distribution of an image. In simple terms,
it represents the number of pixels for each intensity value considered.
In a histogram plot, the X-axis represents the tonal scale (black at the left and white at the right), and the Y-axis represents the number of pixels. The histogram shows the number of pixels at each brightness level (from black to white); the more pixels there are at a certain brightness level, the higher the peak at that level.
Histogram Equalization
Histogram Equalization is a computer image processing technique used to improve contrast in
images. It accomplishes this by effectively spreading out the most frequent intensity values, i.e.
stretching out the intensity range of the image. This method usually increases the global
contrast of images when its usable data is represented by close contrast values. This allows for
areas of lower local contrast to gain a higher contrast.
A color histogram of an image represents the number of pixels in each type of color component.
Histogram equalization cannot be applied separately to the Red, Green and Blue components of
the image as it leads to dramatic changes in the image’s color balance. However, if the image is
first converted to another color space, like HSL/HSV color space, then the algorithm can be
applied to the luminance or value channel without resulting in changes to the hue and saturation
of the image.
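The core equalization step itself is a remapping through the image's cumulative distribution function (CDF). A minimal NumPy sketch for one channel (for a color image, this would be applied to the luminance or value channel, as described above):

```python
import numpy as np

def equalize_channel(ch):
    """Classic histogram equalization of a single 8-bit channel via the CDF."""
    hist = np.bincount(ch.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each level so the output CDF becomes (approximately) linear
    lut = np.rint((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[ch]

# A low-contrast channel squeezed into the narrow range [100, 103]
ch = np.array([[100, 101], [102, 103]], dtype=np.uint8)
print(equalize_channel(ch))   # stretched across the full 0..255 range
```

Note that a constant image would make the denominator zero; a real implementation guards against that case.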
Adaptive Histogram Equalization
Adaptive Histogram Equalization differs from ordinary histogram equalization in the respect that
the adaptive method computes several histograms, each corresponding to a distinct section of
the image, and uses them to redistribute the lightness values of the image. It is therefore
suitable for improving the local contrast and enhancing the definitions of edges in each region of
an image.
Contrast Limited Adaptive Histogram Equalization (CLAHE)
Contrast Limited AHE (CLAHE) differs from adaptive histogram equalization in its contrast
limiting. In the case of CLAHE, the contrast limiting procedure is applied to each neighborhood
from which a transformation function is derived. CLAHE was developed to prevent the over-amplification of noise that adaptive histogram equalization can give rise to.

ADVANCED HISTOGRAM EQUALIZATION (CLAHE)
CLAHE is a variant of Adaptive Histogram Equalization (AHE) which takes care of over-amplification of the contrast. CLAHE operates on small regions in the image, called tiles, rather
than the entire image. The neighboring tiles are then combined using bilinear interpolation to
remove the artificial boundaries.
This algorithm can be applied to improve the contrast of images. We can also apply CLAHE to
color images, where usually it is applied on the luminance channel and the results after
equalizing only the luminance channel of an HSV image are much better than equalizing all the
channels of the BGR image.
Parameters:
When applying CLAHE, there are two parameters to be remembered:
 clipLimit – This parameter sets the threshold for contrast limiting. The default value is
40.
 tileGridSize – This sets the number of tiles in the row and column. By default this is 8×8.
It is used while the image is divided into tiles for applying CLAHE.
Example:
import cv2
import numpy as np

# Reading the image from the present directory
image = cv2.imread("image.jpg")

# Resizing the image for compatibility
image = cv2.resize(image, (500, 600))

# The initial processing of the image: optional denoising, then grayscale
# image = cv2.medianBlur(image, 3)
image_bw = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The declaration of CLAHE
# clipLimit -> Threshold for contrast limiting
clahe = cv2.createCLAHE(clipLimit=5)

# Apply CLAHE, then brighten by 30 with clipping
# (a plain + 30 on a uint8 array would overflow)
final_img = np.clip(clahe.apply(image_bw).astype(np.int16) + 30, 0, 255).astype(np.uint8)

# Ordinary thresholding of the same image, for comparison
_, ordinary_img = cv2.threshold(image_bw, 155, 255, cv2.THRESH_BINARY)

# Showing the two images
cv2.imshow("ordinary threshold", ordinary_img)
cv2.imshow("CLAHE image", final_img)
cv2.waitKey(0)
cv2.destroyAllWindows()

(Figures: input image; output of ordinary thresholding; output with CLAHE applied.)
COLOR ADJUSTMENT USING CURVES

 Curves is a powerful adjustment tool in Photoshop that can be used for color correction
and adjusting brightness and contrast.
 Properly exposed and color corrected images usually have the histogram evenly spread
between the darkest black and brightest white. Any empty areas of the histogram
indicate that the image’s pixels are not properly mapped to the range of dark to light.
 Use the set white point tool to color-correct an image with just one click. However, this
tool doesn't always work, underscoring the need to understand how to adjust color
manually.
 Curves can be used to adjust color balance by adding points in the middle of the line in
the Blue, Green and Red channels and dragging up or down to alter the color balance.
 Brightness and contrast can be adjusted by clicking on the diagonal line in the RGB
menu and dragging it up or down.
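Outside Photoshop, a curve adjustment amounts to building a lookup table from the control points and applying it per channel. A minimal NumPy sketch with illustrative control points:

```python
import numpy as np

def apply_curve(img, points):
    """Apply a tone curve given as (input, output) control points, like Curves in an editor."""
    xs, ys = zip(*points)
    lut = np.rint(np.interp(np.arange(256), xs, ys)).astype(np.uint8)
    return lut[img]

# A gentle S-curve: darken shadows, brighten highlights (illustrative points)
curve = [(0, 0), (64, 48), (192, 208), (255, 255)]
img = np.array([[64, 128, 192]], dtype=np.uint8)
print(apply_curve(img, curve))   # [[ 48 128 208]]
```

Applying such a curve only to the Blue channel, for instance, shifts the image's color balance toward or away from blue, mirroring the per-channel adjustment described above.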
IMAGE FILTERING: INTRODUCTION TO IMAGE FILTERING
What Is Meant By Image Filtering?
Image filtering is a technique used in image processing to enhance or modify the visual appearance of an image. It involves applying a filter (kernel) to every pixel in an image so that a new pixel value is computed from the values of the existing pixels. The filter defines the weights applied to the neighboring pixels during the filtering process.
What Are The Most Frequently Employed Image Filtering Techniques?
I. The Gaussian Filter: The Gaussian filter is a common smoothing filter that blurs and
eliminates noise by convolving the picture with a Gaussian function.
II. Median Filter: The median filter replaces each pixel's value with the median value of its
neighborhood. It is a non-linear filter. It successfully eliminates impulsive noise, sometimes
known as "salt and pepper" noise, from a picture.
III. Bilateral Filter: The bilateral filter smooths an image while keeping crucial edges. It is a non-linear, edge-preserving filter. When determining the filter weights, it takes both spatial distance and pixel intensity similarity into account.
IV. Sobel Operator: The Sobel operator computes an image's gradient and is an edge
detection filter. It is frequently used to draw attention to steep gradients on edges.
V. The Laplacian of Gaussian (LoG): This filter combines the Laplacian and Gaussian filters.
The Laplacian of the Gaussian function is convolved with an image to improve edges while
lowering noise.
VI. High-Pass Filters: High-pass filters emphasize an image's edges and fine details by
enhancing its high-frequency components. The Laplacian filter and the unsharp mask filter are
two examples.
VII. Low-Pass Filters: Low-pass filters smooth or blur a picture by reducing the high-frequency
components of the image. Examples of low-pass filters are the Gaussian and mean filters.
VIII. Morphological Filters: Based on the ideas of mathematical morphology, morphological
filters are utilized for processes including erosion, dilation, opening, and closure. These filters
work well at eliminating background noise, bridging holes, and separating related items.
IX. Anisotropic Diffusion: This method is utilized for edge-preserving smoothing and image denoising. It smooths pixel values while preserving an image's important edges.
X. Adaptive Filters: Adaptive filters modify their settings according to the specifics of the local
picture. They are helpful in situations when an image's spatial qualities fluctuate.
WHAT IS CONVOLUTION
Another method of manipulating images is convolution. The black box (system) used for image processing is usually an LTI, or linear time-invariant, system. By linear we mean a system whose output is a linear function of its input: neither logarithmic, nor exponential, nor any other non-linear mapping. By time invariant we mean a system that remains the same over time.
It can be represented mathematically in two ways:
g(x,y) = h(x,y) * f(x,y)
It can be explained as the “mask convolved with an image”.
Or
g(x,y) = f(x,y) * h(x,y)
It can be explained as “image convolved with mask”.
There are two ways to represent this because the convolution operator(*) is commutative. The
h(x,y) is the mask or filter.
What is mask?
Mask is also a signal. It can be represented by a two-dimensional matrix, usually of size 1x1, 3x3, 5x5, or 7x7. A mask should always have odd dimensions, because otherwise you cannot find its center. Why do we need to find the center of the mask? The answer lies below, in the topic of how to perform convolution.

How to perform convolution?


In order to perform convolution on an image, following steps should be taken.
 Flip the mask (horizontally and vertically) only once
 Slide the mask onto the image.
 Multiply the corresponding elements and then add them
 Repeat this procedure until all values of the image have been calculated.
Example of convolution
Let’s perform some convolution. Step 1 is to flip the mask.
Mask
Let’s take our mask to be this.

1 2 3

4 5 6

7 8 9

Flipping the mask horizontally

3 2 1

6 5 4

9 8 7

Flipping the mask vertically

9 8 7

6 5 4

3 2 1

Image
Let’s consider an image to be like this

2 4 6

8 10 12

14 16 18

Convolution
Convolving the mask over the image is done as follows: place the center of the flipped mask at each element of the image, multiply the corresponding elements, add them, and write the result into the output at the position where the center of the mask was placed. For the first pixel of the image (the top-left corner), only the part of the mask that overlaps the image contributes, so the value is calculated as
First pixel = (5*2) + (4*4) + (2*8) + (1*10)
= 10 + 16 + 16 + 10
= 52
Place 52 at the first index of the output and repeat this procedure for each pixel of the image.
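This flip-and-slide procedure can be checked with a short pure-Python sketch (zero padding outside the image is assumed, matching the partial overlap used for the first pixel above):

```python
def convolve2d_same(image, mask):
    # Flip the mask horizontally and vertically
    flipped = [row[::-1] for row in mask[::-1]]
    h, w = len(image), len(image[0])
    kh, kw = len(flipped), len(flipped[0])
    cy, cx = kh // 2, kw // 2   # center of the (odd-sized) mask
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - cy, x + j - cx
                    if 0 <= yy < h and 0 <= xx < w:   # zero padding outside
                        acc += flipped[i][j] * image[yy][xx]
            out[y][x] = acc
    return out

mask = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
image = [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
result = convolve2d_same(image, mask)
print(result[0][0])   # first pixel, matching the hand computation: 52
```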
Why Convolution
Convolution can achieve things that the previous two methods of manipulating images cannot. These include blurring, sharpening, edge detection, noise reduction, etc.
IMAGE SMOOTHING:-BOX BLUR, GAUSSIAN BLUR, MEDIAN BLUR
Smoothing is often used to reduce noise within an image or produce a less pixelated image.
Image smoothing is a key image enhancement technology that can remove noise within the
images. So, it is a mandatory functional module in various image-processing software.
Image smoothing is a method of improving the quality of images. Image quality is an important factor for human vision. An image usually contains noise that is not easily eliminated in image processing, and the quality of the image suffers from its presence. Many methods exist for removing noise from images. Because many image processing algorithms do not work well in a noisy environment, an image filter is often adopted as a preprocessing module. However, the capability of conventional filters based on pure numerical computation breaks down rapidly in a heavily noisy environment.
Uses of image smoothing
1. Smoothing filters are used in noise reduction and blurring.
2. Blurring is used in preprocessing tasks, such as the removal of small details from an
image before (large) object extraction.
3. Smoothing is used to produce less pixelated images.
BOX BLUR
Box blur is a linear smoothing technique that convolves the image with a box kernel. This
technique is effective in reducing uniform noise in an image while smoothing the image. To
perform box blur in OpenCV, we can use the cv2.boxFilter() function, which takes the input
image, the depth of the output image, the size of the kernel, and the flag for normalization as
input parameters.
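The effect of a normalized box kernel can be sketched directly in NumPy; this mirrors what a call like cv2.boxFilter(src, -1, (k, k)) computes, up to the choice of border handling (zero padding is assumed here):

```python
import numpy as np

def box_blur(img, k):
    """Normalized box blur: each output pixel is the mean of its k x k neighborhood.
    Zero padding is used at the borders (one of several possible border choices)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad)
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.zeros((5, 5))
img[2, 2] = 9.0                # a single bright pixel (impulse)
print(box_blur(img, 3)[2, 2])  # 1.0 -> the impulse is spread evenly over the 3x3 kernel
```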
GAUSSIAN BLUR
Gaussian blur is a linear smoothing technique that convolves the image with a Gaussian kernel.
This technique is effective in reducing Gaussian noise in an image while smoothing the image.
To perform Gaussian blur in OpenCV, we can use the cv2.GaussianBlur() function, which takes
the input image, the size of the kernel, and the sigma value for the X and Y direction as input
parameters.
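The Gaussian kernel itself can be built as the outer product of two 1-D Gaussians; this is, up to border handling, the kernel that a call like cv2.GaussianBlur(img, (5, 5), 1.0) convolves with:

```python
import numpy as np

def gaussian_kernel(ksize, sigma):
    """Separable 2-D Gaussian kernel: outer product of two 1-D Gaussians, normalized to sum to 1."""
    ax = np.arange(ksize) - ksize // 2
    g1d = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    g1d /= g1d.sum()
    return np.outer(g1d, g1d)

k = gaussian_kernel(5, 1.0)
print(round(k.sum(), 6))   # 1.0 -> blurring with it preserves overall brightness
print(k[2, 2] == k.max())  # True -> the weight peaks at the kernel center
```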

MEDIAN BLUR
The Median blur operation is similar to the other averaging methods. Here, the central element of the image is replaced by the median of all the pixels in the kernel area. This operation removes noise while preserving edges.
You can perform this operation on an image using the medianBlur() method of
the imgproc class. Following is the syntax of this method −
medianBlur(src, dst, ksize)
This method accepts the following parameters −
 src − A Mat object representing the source (input image) for this operation.
 dst − A Mat object representing the destination (output image) for this operation.
 ksize − A Size object representing the size of the kernel.
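The median operation can be sketched in a few lines of NumPy; note how a single salt-noise pixel is removed completely, where a mean-based blur would only spread it out:

```python
import numpy as np

def median_blur3(img):
    """3x3 median filter on interior pixels (borders left unchanged for brevity)."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255                 # one salt-noise pixel
print(median_blur3(img)[2, 2])  # 10 -> the outlier is replaced by the neighborhood median
```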
