CH 3: Image Enhancement
Natural images can be degraded during acquisition due to:
Poor contrast caused by poor illumination or the finite sensitivity of the imaging device.
Electronic sensor noise or atmospheric disturbances leading to broadband noise.
Aliasing effects due to inadequate sampling.
Finite aperture effects or motion leading to spatial errors.
Lighting conditions.
Sensor resolution and quality.
Limitations or noise of the optical system.
The principal objective of image enhancement is to process a given image so that the result is
more suitable than the original image for a specific application. It sharpens image features such
as edges, boundaries, or contrast, making the image more useful for display and analysis.
The primary condition for image enhancement is that the information that you want to extract,
emphasize or restore must exist in the image. Fundamentally, ‘you cannot make something out of
nothing’ and the desired information must not be totally swamped by noise within the image.
Perhaps the most accurate and general statement we can make about the goal of image
enhancement is simply that the processed image should be more suitable than the original one for
the required task or purpose. This makes the evaluation of image enhancement, by its nature,
rather subjective and, hence, it is difficult to quantify its performance apart from its specific
domain of application. An image enhancement algorithm makes such degraded images better perceived
visually. There are various simple algorithms for image enhancement: some are based on lookup
tables (e.g., contrast enhancement), while others work with simple linear filtering methods
(e.g., noise removal).
Contrast
Contrast generally refers to the difference in luminance or grey level values in an image and is an
important characteristic. It can be defined as the ratio of the maximum intensity to the minimum
intensity over an image.
Contrast ratio has a strong bearing on the resolving power and detectability of an image. The
larger the ratio, the easier it is to interpret the image.
C = Imax / Imin
Contrast enhancement can be effected by linear and non-linear transformations (reading assignment).
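For example, an image with Imax = 200 and Imin = 50 has contrast ratio C = 200/50 = 4; stretching the gray levels so that Imax = 250 and Imin = 10 raises C to 25, making detail easier to resolve.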
What is a Histogram?
A histogram shows how often each gray level occurs in an image, i.e., the frequency distribution of its pixel intensities. Below, we compute the histogram-equalized image for the given skewed image:
4 4 4 4 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 4 4 4 4
Step 2: Select the highest gray level, in our case 5, and find the number of bits n needed to
represent it: we need the highest gray level < 2^n, and since 5 < 2^3 = 8, n = 3 bits, so the
gray levels span the range 0 to 7 (L = 8).
First we calculate the PMF (probability mass function) of all the pixels in this image, which
gives the probability of each gray level in the data set; equivalently, it is the count or
frequency of each level divided by the total number of pixels. Second we calculate the CDF
(cumulative distribution function), which is the cumulative sum of the values calculated by the
PMF: each entry is the sum of all previous PMF values.
For the 5 x 5 image above (25 pixels, L = 8) this gives:

Gray level r   Count   PMF    CDF    CDF x (L-1)   Output s
0              0       0.00   0.00   0.00          0
1              0       0.00   0.00   0.00          0
2              0       0.00   0.00   0.00          0
3              6       0.24   0.24   1.68          2
4              14      0.56   0.80   5.60          6
5              5       0.20   1.00   7.00          7
6              0       0.00   1.00   7.00          7
7              0       0.00   1.00   7.00          7

Each output level is CDF x (L - 1) rounded to the nearest integer, which yields the mapping:

Input pixel value    0  1  2  3  4  5  6  7
Output pixel value   0  0  0  2  6  7  7  7
Finally, the skewed input image after histogram equalization becomes:

Input:         Output:
4 4 4 4 4      6 6 6 6 6
3 4 5 4 3      2 6 7 6 2
3 5 5 5 3      2 7 7 7 2
3 4 5 4 3      2 6 7 6 2
4 4 4 4 4      6 6 6 6 6
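The whole procedure can be checked with a short Python/NumPy sketch (a minimal illustration, not part of the original notes; the array is the 5 x 5 example above):

import numpy as np

# 5 x 5 skewed input image with 3-bit gray levels (L = 8)
img = np.array([[4, 4, 4, 4, 4],
                [3, 4, 5, 4, 3],
                [3, 5, 5, 5, 3],
                [3, 4, 5, 4, 3],
                [4, 4, 4, 4, 4]])
L = 8

pmf = np.bincount(img.ravel(), minlength=L) / img.size  # PMF: frequency of each level
cdf = np.cumsum(pmf)                                    # CDF: running sum of the PMF
mapping = np.round(cdf * (L - 1)).astype(int)           # -> [0 0 0 2 6 7 7 7]
equalized = mapping[img]                                # apply the mapping to every pixel
print(equalized)                                        # matches the output matrix above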
In the identity transformation, each gray level of the input image is mapped directly to the
same gray level of the output image, so the output image is identical to the input image.
In the negative transformation, each value of the input image is subtracted from L - 1 and
mapped onto the output image.
For instance, the following transformation is applied: s = (L - 1) - r. Since the input image of
Einstein is an 8 bpp image, the number of levels in this image is 256. Putting 256 into the
equation, we get:
s = 255 - r
So each value r is subtracted from 255: lighter pixels become dark and darker pixels become
light, resulting in the image negative.
Log Transformations
The general form of the log transformation is: s = c * log(1 + r)
Where s and r are the pixel values of the output and input images and c is a constant. The
value 1 is added to each pixel value of the input image because if a pixel has intensity 0,
log(0) is undefined; adding 1 makes the minimum argument of the logarithm 1, so the minimum
output is 0.
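A minimal NumPy sketch of the log transformation; the scaling constant c is one common choice (assumed here), picked so the brightest input maps to 255:

import numpy as np

img = np.array([[0, 10, 100], [50, 200, 255]], dtype=np.float64)  # illustrative values
c = 255 / np.log(1 + img.max())        # scale so the maximum input maps to 255
s = c * np.log(1 + img)                # s = c * log(1 + r); the +1 avoids log(0)
log_img = np.round(s).astype(np.uint8)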
Power-Law Transformations
A further family of transformations is the power-law transformations, which include the
nth-power and nth-root transformations. These transformations are given by the expression:
s = c * r^γ
The symbol γ is called gamma, which is why this transformation is also known as the gamma
transformation. Varying the value of γ varies the enhancement of the image. Different display
devices/monitors have their own gamma correction, which is why they display images at different
intensities; this type of transformation is used to enhance images for different types of
display device. For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image
displayed on a CRT is dark. The larger the value of gamma, the darker the image becomes.
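A minimal gamma-transformation sketch in NumPy (the gamma value 2.2 is just an illustrative choice; intensities are normalized to [0, 1] before applying the power law):

import numpy as np

img = np.array([[0, 64, 128], [192, 230, 255]], dtype=np.float64)  # illustrative values
c, gamma = 1.0, 2.2                  # gamma > 1 darkens, gamma < 1 brightens
r = img / 255.0                      # normalize to [0, 1]
s = c * r ** gamma                   # s = c * r^gamma
gamma_img = np.round(s * 255).astype(np.uint8)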
Image Filtering
Filtering is a neighborhood operation (the output value of the current pixel depends on both
the pixel itself and its surrounding pixels): the value of any given pixel in the output image
is determined by applying some algorithm to the values of the pixels in the neighborhood of the
corresponding input pixel. A pixel's neighborhood is some set of pixels, defined by their
locations relative to that pixel. Filtering can be used to:
Remove noise
Sharpen contrast
Highlight contours
Detect edges
Smoothing Spatial Filters
Smoothing spatial filters are used for blurring and for noise reduction. Blurring is used as a
preprocessing step to remove small details from an image prior to (large) object extraction and
to bridge small gaps in lines or curves. Noise reduction can be accomplished by blurring with a
linear or non-linear filter.
The response of an averaging filter is simply the average of the pixels contained in the
neighborhood of the filter mask. The output of an averaging filter is a smoothed image with
reduced "sharp" transitions in gray level. Both noise and edges consist of sharp transitions in
gray level; thus smoothing filters are used for noise reduction, but they have the undesirable
side effect of blurring edges. The average filter works by moving through the image pixel by
pixel, replacing each value with the average value of the neighboring pixels, including itself.
Two common 3 x 3 smoothing masks are the box (average) filter and the weighted average filter:

Box filter:           Weighted average:

        1 1 1                  1 2 1
(1/9) x 1 1 1        (1/16) x  2 4 2
        1 1 1                  1 2 1
1. 2D average filtering example using a 3 x 3 sampling window: compute the filtered pixel
values while keeping border values unchanged.
Input:            Output:
1 4 0 1 3 1       1 4 0 1 3 1
2 2 4 2 2 3       2 2 2 2 1 3
1 0 1 0 1 0       1 2 1 1 1 0
1 2 1 0 2 2       1 2 1 1 1 2
2 5 3 1 2 5       2 2 2 2 2 5
1 1 4 2 3 0       1 1 4 2 3 0
Convolution follows an 'overlap, multiply, add' rule with the convolution mask:
For output pixel (1,1), overlay the (1/9) mask on the 3 x 3 neighborhood of input pixel (1,1):

1 4 0
2 2 4
1 0 1

(1,1) = 1/9 [(1x1)+(1x4)+(1x0)+(1x2)+(1x2)+(1x4)+(1x1)+(1x0)+(1x1)] = 15/9 = 1.67 ≈ 2
Similarly, for output pixel (1,2) the 3 x 3 neighborhood is:

4 0 1
2 4 2
0 1 0

(1,2) = 1/9 [4+0+1+2+4+2+0+1+0] = 14/9 = 1.56 ≈ 2
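The whole worked example can be reproduced with a short Python/NumPy sketch (a direct double loop, keeping the border values unchanged as stated above):

import numpy as np

# 6 x 6 input image from the example above
img = np.array([[1, 4, 0, 1, 3, 1],
                [2, 2, 4, 2, 2, 3],
                [1, 0, 1, 0, 1, 0],
                [1, 2, 1, 0, 2, 2],
                [2, 5, 3, 1, 2, 5],
                [1, 1, 4, 2, 3, 0]])

out = img.copy()  # border pixels keep their original values
for i in range(1, img.shape[0] - 1):
    for j in range(1, img.shape[1] - 1):
        # overlap, multiply, add with the 3 x 3 (1/9) mask = mean of the window
        window = img[i - 1:i + 2, j - 1:j + 2]
        out[i, j] = int(round(window.sum() / 9))
print(out)        # matches the output matrix above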
Sharpening Spatial Filters
The principal objective of sharpening is to highlight fine detail in an image or to enhance
detail that has been blurred, either in error or as a natural effect of a particular method of
image acquisition. Uses of image sharpening range from electronic printing and medical imaging
to industrial inspection and autonomous guidance in military systems.
High spatial frequency components, which carry detailed information in the form of edges and
boundaries, should be extracted. Image sharpening algorithms are used to bring out object
outlines; therefore sharpening filters are also called edge enhancement or edge crispening
algorithms. Image blurring is accomplished in the spatial domain by pixel averaging in a
neighborhood, which is a process of integration; sharpening can accordingly be accomplished by
spatial differentiation (taking differences within a neighborhood). Thus, image differentiation
enhances edges and other discontinuities (such as noise) and de-emphasizes areas with slowly
varying intensities; in other words, it emphasizes transitions in image intensity. Smoothing is
often referred to as low-pass filtering; in a similar manner, sharpening is often referred to
as high-pass filtering. In this case, high frequencies (which are responsible for fine details)
are passed, while low frequencies are attenuated or rejected.
Derivative operator: this operator calculates the gradient of the image intensity at each
point, giving the direction of the largest possible increase in intensity (from dark to light)
and the rate of change in that direction.
Gradient Filter
Edges can be extracted by taking the gradient of the image. The gradient refers to the
difference between neighboring pixels of an image.
If neighboring pixels have the same intensity, the difference is zero and hence there is no
edge.
Edges exist where there is significant local intensity variation.
There are two ways to apply sharpening filters, based on first- and second-order derivatives.
Derivatives of a digital function are defined in terms of differences. There are various ways
to define these differences; however, we require that any definition we use for a first
derivative:
1. Must be zero in areas of constant intensity.
2. Must be nonzero at the onset of an intensity step or ramp.
3. Must be nonzero along intensity ramps.
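As a concrete illustration of a second-order (Laplacian) sharpening filter, here is a minimal SciPy sketch; the mask and the add-back step are one standard choice, not the only one:

import numpy as np
from scipy.signal import convolve2d

# 4-neighbour Laplacian mask (second-order derivative):
# zero in areas of constant intensity, nonzero at intensity steps
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]])

def sharpen(img):
    edges = convolve2d(img, laplacian, mode='same', boundary='symm')  # high-pass detail
    return np.clip(img + edges, 0, 255)  # add detail back to emphasize transitions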
• Fourier Series: Any function that periodically repeats itself can be expressed as the sum of
sines and cosines of different frequencies, each multiplied by a different coefficient.
In frequency-domain processing, a digital image is converted from the spatial domain to the
frequency domain, where filtering is used for image enhancement for a specific application. The
Fast Fourier Transform (FFT) is the tool used to convert from the spatial domain to the
frequency domain. For smoothing an image, a low-pass filter is implemented, and for sharpening
an image, a high-pass filter is implemented. Each kind of filter can be realized as an ideal,
Butterworth, or Gaussian filter. The frequency domain is a space defined by the Fourier
transform.
The Fourier transform is a tool for image processing used to decompose an image into its sine
and cosine components. The input image is in the spatial domain and the output is represented
in the Fourier or frequency domain. The Fourier transform is used in a wide range of
applications such as image filtering, image compression, image analysis, and image
reconstruction.
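For reference, the 2D discrete Fourier transform of an M x N image f(x, y) is:

F(u, v) = sum over x = 0..M-1 and y = 0..N-1 of f(x, y) * e^(-j*2*pi*(u*x/M + v*y/N)),
for u = 0, ..., M-1 and v = 0, ..., N-1.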
Transformation
A signal can be converted from the time domain into the frequency domain using mathematical
operators called transforms. There are many kinds of transforms that do this; some of them are
given below.
Fourier series
Fourier transform
Laplace transform
Z transform
Frequency components
Any image in the spatial domain can be represented in the frequency domain. But what do these
frequencies actually mean? We divide frequency components into two major components:
high-frequency components, which correspond to edges and abrupt intensity changes, and
low-frequency components, which correspond to smooth regions where the intensity varies slowly.
The Fourier transform occurs often in pure and applied mathematics, as well as in physics,
engineering, signal processing, and many other fields.
Some Basic Properties of the Frequency Domain: Frequency is directly related to the rate of
change. The slowest-varying component (u = v = 0) corresponds to the average intensity level of
the image and lies at the origin of the Fourier spectrum. Higher frequencies correspond to
faster-varying intensity changes in the image: the edges of objects and other components
characterized by abrupt changes in intensity level correspond to higher frequencies.
Given the filter H(u,v) (the filter transfer function) in the frequency domain, the Fourier
transform of the output (filtered) image is given by:

G(u,v) = H(u,v) F(u,v)    Step (3)

The filtered image g(x,y) is simply the inverse Fourier transform of G(u,v):

g(x,y) = F^(-1){G(u,v)}    Step (4)
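These steps can be sketched in Python/NumPy; the Gaussian low-pass H(u, v) below is just one example choice of transfer function, and the cutoff d0 is illustrative:

import numpy as np

def gaussian_lowpass(img, d0=30.0):
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))        # F(u, v) with the origin centered
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2       # squared distance from the origin
    H = np.exp(-D2 / (2 * d0 ** 2))              # Gaussian low-pass transfer function
    G = H * F                                    # Step (3): G(u,v) = H(u,v) F(u,v)
    g = np.fft.ifft2(np.fft.ifftshift(G))        # Step (4): inverse Fourier transform
    return np.real(g)                            # smoothed image g(x, y)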