
COMPUTER VISION (Module-1 Chapter-3)

Image Processing
Dr. Ramesh Wadawadagi
Associate Professor
Department of CSE
SVIT, Bengaluru-560064
[email protected]
Visualizing Image Data (Grey scale)
Visualizing Image Data (Color image)
Point operators
● The simplest kinds of image processing
transforms are point operators, where each
output pixel’s value depends on only the
corresponding input pixel value.
● Examples of such operators include
brightness and contrast adjustments as well
as color correction and transformations.
● Point operators are also called pixel transforms.
Pixel transforms
● A general image processing operator is a function that takes an input image and produces an output image.
● In the continuous domain, this can be denoted as:
g(x) = h(f(x))
● where x ranges over the domain of the input and output functions f and g (2D in the case of images).
● For discrete images, the domain consists of a finite number of pixel locations, x = (i, j), and we can write:
g(i, j) = h(f(i, j))
1. Multiplication and addition with a constant.
● Two commonly used point processes are multiplication and addition with a constant:
g(x) = a*f(x) + b
● The parameters a > 0 and b are often called the gain and bias parameters.
● Sometimes these parameters are said to control contrast and brightness, respectively.
● The bias and gain parameters can also be spatially varying:
g(x) = a(x)*f(x) + b(x)
1. Multiplication and addition with a constant.

For a = 1.4; b = 30
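As a quick sketch (not from the slides), the gain/bias transform can be applied with NumPy; the toy image and the clipping to the 8-bit range are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
f = rng.integers(0, 256, size=(4, 4)).astype(np.float32)  # toy 8-bit image
a, b = 1.4, 30.0                                          # gain and bias from the slide example
g = np.clip(a * f + b, 0, 255).astype(np.uint8)           # g(x) = a*f(x) + b, clipped to [0, 255]
print(g)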
2. Dyadic (two-input) operator
● Another commonly used dyadic (two-input) operator
is the linear blend operator.
g(x) = (1 − α)f0(x) + αf1(x).
● By varying α from 0 → 1, this operator can be used to perform a temporal cross-dissolve between two images or videos, as seen in slide shows and film production, or as a component of image morphing algorithms.
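A minimal sketch of the linear blend operator, using two toy constant images (illustrative, not from the slides):

import numpy as np

f0 = np.zeros((2, 2))          # toy "first frame"
f1 = np.full((2, 2), 255.0)    # toy "second frame"
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    g = (1 - alpha) * f0 + alpha * f1   # cross-dissolve: g(x) = (1 - alpha)*f0(x) + alpha*f1(x)
    print(alpha, g[0, 0])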
3. Gamma Correction (power law):
● Gamma correction is a nonlinear process that adjusts the brightness of images to match how humans perceive light.
● Gamma correction applies a power function to each pixel value in an image.
● The relationship between the input signal brightness Y and the transmitted signal Y′ is given by Y′ = Y^(1/γ), or
g(x) = [f(x)]^(1/γ)
● Gamma values less than 1 make the image darker.
● Gamma values greater than 1 make the image lighter.
● A gamma value of 1 has no effect on the input image.
● A gamma value of γ ≈ 2.2 is a reasonable fit for most digital cameras.
Gamma correction: Example

For γ = 2.2
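A sketch of gamma correction on normalized intensities in [0, 1]; the sample values are illustrative:

import numpy as np

f = np.linspace(0.0, 1.0, 5)      # normalized input intensities
gamma = 2.2                       # typical value for digital cameras
g = f ** (1.0 / gamma)            # g(x) = f(x)^(1/gamma); gamma > 1 lightens mid-tones
print(np.round(g, 3))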
Color transforms: Image composition and matting
● Image matting is a technique that separates an object from its background by estimating the transparency of each pixel in an image.
● It is a key technique in image processing and is used in many image and video editing applications.
● Formally, matting techniques take as input an image C, which is assumed to be a convex combination of a foreground image F and a background image B:
C = (1 − α)B + αF
● where α is the pixel's foreground opacity or matte.
Image matting and composition

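A sketch of the compositing equation C = (1 − α)B + αF with toy foreground, background, and matte arrays (all illustrative):

import numpy as np

F = np.full((2, 2, 3), 200.0)                  # toy foreground colors
B = np.full((2, 2, 3), 50.0)                   # toy background colors
alpha = np.array([[0.0, 0.5],
                  [0.5, 1.0]])[..., None]      # per-pixel matte in [0, 1]
C = (1 - alpha) * B + alpha * F                # composite image
print(C[..., 0])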
Image Histogram:
● An image histogram shows the frequency of pixel intensity values.
● The x axis shows the gray-level intensities.
● The y axis shows the frequency of those intensities.
● For an 8-bit image, we have 256 levels of gray shades.
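A sketch of computing an image histogram with NumPy; the random 8-bit image is an illustrative stand-in:

import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # toy 8-bit image
hist, _ = np.histogram(img, bins=256, range=(0, 256))       # frequency of each gray level (0..255)
print(hist.sum())                                           # equals the number of pixels (64*64)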


Image transformation: Thresholding
Original image
Histogram sliding
Brightness is changed by shifting the histogram to the left or right.

+50
Histogram equalization

Histogram equalization is used for enhancing the contrast of an image.

The first two steps are calculating the PDF and CDF.

All pixel values of the image will then be equalized.
Histogram equalization
Example: a small 8x8 sample image, shown with its pixel intensity values and its histogram.

Step-1: Find the frequencies of each pixel value (tabulated as image details and the frequency of each value).

Step-2: Estimate the CDF of each pixel intensity.
Step-3: Compute h(v) for each pixel intensity:

h(v) = round( (cdf(v) − cdfmin) / (M × N − cdfmin) × (L − 1) )

Min = 52, Max = 154

where cdfmin is the minimum non-zero value of the cumulative distribution function (in this case 1), M × N gives the image's number of pixels (64 for the example above, where M is the width and N the height), and L is the number of grey levels used (in most cases, like this one, 256).
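The three steps above can be sketched in NumPy as follows (assuming an 8-bit single-channel image; the final clipping is a defensive detail, not from the slides):

import numpy as np

def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)    # Step-1: frequency of each gray level
    cdf = hist.cumsum()                             # Step-2: cumulative distribution function
    cdf_min = cdf[cdf > 0].min()                    # smallest non-zero CDF value
    MN = img.size                                   # number of pixels, M x N
    # Step-3: h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1))
    h = np.round((cdf - cdf_min) / (MN - cdf_min) * (L - 1))
    return np.clip(h, 0, L - 1).astype(np.uint8)[img]

rng = np.random.default_rng(0)
img = rng.integers(52, 155, size=(8, 8), dtype=np.uint8)    # toy 8x8 image with values in [52, 154]
print(equalize(img).min(), equalize(img).max())             # stretched to 0 and 255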
Histogram equalization
New min. value = 0, old min. value 52
New max. value = 255, old max. value 154

Original vs. equalized.
Histogram equalization

An unequalized image and the same image after histogram equalization.
Linear filtering

A linear filter is a mathematical operation that modifies
an image by changing the signal's frequency spectrum.

It's a powerful image enhancement tool that's used to
smooth images, remove noise, and detect edges.

Neighborhood operators can be used to filter images to
add soft blur, sharpen details, accentuate edges, or
remove noise.

In this section, we look at linear filtering operators,
which involve fixed weighted combinations of pixels in
small neighborhoods.
Understanding Neighborhood in Images

In image processing, a "neighborhood" refers to a
group of pixels surrounding a specific pixel.
Correlation filtering

The most widely used type of neighborhood operator is a linear filter, where an output pixel's value is a weighted sum of pixel values within a small neighborhood N (Figure 3.10):

g(i, j) = Σ_{k,l} f(i + k, j + l) h(k, l)

The entries in the weight kernel or mask h(k, l) are often called the filter coefficients.

This correlation operator can be more compactly notated as g = f ⊗ h.
Neighborhood filtering

(65x0.1)+(98x0.1)+(123x0.1)+(65x0.1)+(96x0.2)+(115x0.1)+(63x0.1)+(91x0.1)+(107x0.1) = 92
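The weighted sum shown above can be verified directly; the 3×3 neighborhood and kernel values below are those implied by the slide's arithmetic:

import numpy as np

patch = np.array([[65, 98, 123],
                  [65, 96, 115],
                  [63, 91, 107]], dtype=float)   # pixel values f in the 3x3 neighborhood
h = np.array([[0.1, 0.1, 0.1],
              [0.1, 0.2, 0.1],
              [0.1, 0.1, 0.1]])                  # weight kernel h(k, l)
print(round(float((patch * h).sum())))           # weighted sum of the neighborhood -> 92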
Convolution filtering

A common variant of the linear filter is the convolution operator:

g(i, j) = Σ_{k,l} f(i − k, j − l) h(k, l)

where the sign of the offsets in f has been reversed. This is called the convolution operator, g = f ∗ h, and h is then called the impulse response function.
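A sketch (assuming SciPy is available) showing that convolution is correlation with the kernel flipped in both directions:

import numpy as np
from scipy.signal import convolve2d, correlate2d

f = np.arange(25, dtype=float).reshape(5, 5)            # toy image
h = np.array([[0., 1., 0.],
              [2., 3., 4.],
              [0., 5., 0.]])                            # deliberately asymmetric kernel
g_conv = convolve2d(f, h, mode="same")                  # g = f * h
g_corr = correlate2d(f, h[::-1, ::-1], mode="same")     # correlation with the flipped kernel
print(np.allclose(g_conv, g_corr))                      # True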
Neighborhood operations: Examples

Some neighborhood operations: (a) original image; (b) blurred; (c) sharpened; (d) smoothed with edge-preserving filter.
Neighborhood operations: Examples

Some neighborhood operations: (e) binary image; (f) dilated; (g) distance transform; (h) connected components.
Padding (Border Effects)

We notice that correlation and convolution operations produce a result that is smaller than the original image, which may not be desirable in many applications.

This is because the neighborhoods of typical correlation
operations extend beyond the image boundaries near the
edges, and so the filtered images suffer from boundary
effects.

To deal with this, a number of different padding or
extension modes have been developed for neighborhood
operations.
Border Effects: Solutions

Zero: Set all pixels outside the source image to 0 (a good
choice for alpha-matted cutout images);

Constant (border color): Set all pixels outside the source
image to a specified border value;

Clamp (replicate or clamp to edge): Repeat edge pixels
indefinitely;

(Cyclic) wrap (repeat or tile): Loop “around” the image in a
“toroidal” configuration;

Mirror: Reflect pixels across the image edge;

Extend: Extend the signal by subtracting the mirrored version
of the signal from the edge pixel value.
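These border modes map fairly directly onto numpy.pad; a small sketch (the 2×2 array is just for illustration, and mapping "Extend" to odd reflection is an assumption):

import numpy as np

f = np.array([[1, 2],
              [3, 4]])
print(np.pad(f, 1, mode="constant", constant_values=0))   # zero / constant border
print(np.pad(f, 1, mode="edge"))                          # clamp (replicate edge pixels)
print(np.pad(f, 1, mode="wrap"))                          # cyclic wrap (toroidal)
print(np.pad(f, 1, mode="symmetric"))                     # mirror across the image edge
print(np.pad(f, 1, mode="reflect", reflect_type="odd"))   # "extend" (odd reflection about the edge value)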
Border Effects: Solutions
Separable Filtering

The process of performing a convolution requires K² (multiply-add) operations per pixel, where K is the size (width or height) of the convolution kernel.

This operation can be significantly sped up by first performing a 1D horizontal convolution followed by a 1D vertical convolution, which requires a total of 2K operations per pixel.

A convolution kernel for which this is possible is said to be separable.
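A sketch verifying separability numerically (assuming SciPy; the 5-tap smoothing kernel is an illustrative choice):

import numpy as np
from scipy.signal import convolve2d

k1d = np.array([1., 4., 6., 4., 1.]) / 16.0        # 1D smoothing kernel
K2d = np.outer(k1d, k1d)                           # the equivalent (separable) 2D kernel
f = np.random.default_rng(0).random((32, 32))      # toy image

full = convolve2d(f, K2d, mode="same")             # direct 2D convolution: K*K multiplies per pixel
rows = convolve2d(f, k1d[None, :], mode="same")    # 1D horizontal pass
sep = convolve2d(rows, k1d[:, None], mode="same")  # 1D vertical pass: 2K multiplies per pixel in total
print(np.allclose(full, sep))                      # True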
Separable Filtering
Example Filters
1) Moving average or box filter: Averages the pixel values in a K × K window.
Box filter: Example
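A sketch of a 3×3 box filter using SciPy's uniform_filter (an assumption; convolving with an explicit ones((3, 3)) / 9 kernel gives the same averaging):

import numpy as np
from scipy.ndimage import uniform_filter

f = np.random.default_rng(0).random((16, 16))        # toy image
g = uniform_filter(f, size=3, mode="constant")       # 3x3 moving average with zero-padded borders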
Example Filters
2) Bilinear (Bartlett) filter: A smoother image can
be obtained by separably convolving the image with a
piecewise linear “tent” function.

A 3x3 version of bilinear filter is shown below.
Bilinear filter: Example
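A sketch of the 3×3 tent ("Bartlett") kernel, built as the outer product of the 1D tent [1, 2, 1]/4 with itself (this standard form is assumed to match the slide's figure):

import numpy as np
from scipy.signal import convolve2d

tent1d = np.array([1., 2., 1.]) / 4.0        # 1D piecewise-linear "tent" filter
bilinear = np.outer(tent1d, tent1d)          # 1/16 * [[1,2,1],[2,4,2],[1,2,1]]
f = np.random.default_rng(0).random((16, 16))
g = convolve2d(f, bilinear, mode="same")     # smoother result than a box filter of the same size
print(bilinear * 16)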
Example Filters
3) Gaussian filter: Convolving the linear tent
function with itself yields the cubic approximating
spline, which is called the “Gaussian” kernel.

A 3x3 version of Gaussian filter is shown below.
Gaussian filter: Example
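Rather than hard-coding the kernel from the slide's figure, a sketch can lean on SciPy's Gaussian smoothing (the sigma value is an illustrative assumption):

import numpy as np
from scipy.ndimage import gaussian_filter

f = np.random.default_rng(0).random((16, 16))    # toy image
g = gaussian_filter(f, sigma=1.0)                # smooth with a (truncated) Gaussian kernel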
Example Filters
4) Sobel filter: Linear filtering can also be used as a
pre-processing stage to edge extraction and interest
point detection.

A 3x3 version of Sobel filter is shown below.
Sobel filter: Example
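A sketch of Sobel-based gradient estimation (the 3×3 kernel below is the standard horizontal-derivative Sobel kernel; the slide's figure may show a 1/8-scaled version):

import numpy as np
from scipy import ndimage

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # standard Sobel kernel for the horizontal derivative

f = np.zeros((16, 16)); f[:, 8:] = 1.0          # toy image with a vertical edge
gx = ndimage.sobel(f, axis=1)                   # horizontal derivative (responds to the vertical edge)
gy = ndimage.sobel(f, axis=0)                   # vertical derivative
edges = np.hypot(gx, gy)                        # gradient magnitude used for edge extraction
print(int(edges.argmax() % 16))                 # strongest response lies on the edge (column 7/8)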
Example Filters
5) Corner detector: The simple corner detector
looks for simultaneous horizontal and vertical second
derivatives.

A 3x3 version of Corner detector is shown below.
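The exact kernel on the slide is not reproduced here; one common form (an assumption) is the outer product of the 1D second-derivative filter [1, −2, 1] with itself, which responds at corners but not along straight horizontal or vertical edges:

import numpy as np
from scipy.signal import convolve2d

d2 = np.array([1., -2., 1.])                     # 1D second-derivative filter
corner = np.outer(d2, d2)                        # simple corner-detector kernel (assumed form)

f = np.zeros((16, 16)); f[8:, 8:] = 1.0          # toy image: one corner at (8, 8)
response = convolve2d(f, corner, mode="same")
print(np.round(response[6:10, 6:10], 1))         # non-zero only around the corner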
Band-pass and steerable filters

More sophisticated kernels can be created by first smoothing the image with a Gaussian filter and then taking its first or second derivatives.

Such filters are known collectively as band-pass filters, since they filter out both low and high frequencies.

The (undirected) second derivative of a two-dimensional image,

∇²f = ∂²f/∂x² + ∂²f/∂y²
Band-pass and steerable filters

This is known as the Laplacian operator.

Blurring an image with a Gaussian and then taking
its Laplacian is equivalent to convolving directly
with the Laplacian of Gaussian (LoG) filter.
Laplacian of Gaussian Filter (LoG)
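A sketch using SciPy's Laplacian-of-Gaussian; the sigma and the toy image are illustrative assumptions:

import numpy as np
from scipy.ndimage import gaussian_laplace

f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0     # toy image containing a bright square
log_response = gaussian_laplace(f, sigma=2.0)   # Gaussian blur followed by the Laplacian, in one filter
# Zero crossings of the response trace the square's edges.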
Second order steerable filter
