
BIM 33203

IMAGE PROCESSING
CHAPTER 3: IMAGE ENHANCEMENT AND
RESTORATION (PART 1)

ALPINE SKI HOUSE


OUTLINE

▪ Histogram
▪ Histogram equalization
▪ Histogram matching
▪ Filtering
▪ Spatial filtering
▪ Spatial correlation and convolution
▪ Smoothing spatial filters
▪ Smoothing linear filter
▪ Order-statistic filters
▪ Median filtering
IMAGE ENHANCEMENT

▪ Enhancement is the process of manipulating/adjusting digital images so that the output is more suitable for display or further analysis
▪ Application specific: most of the time the goal is established at the outset (beginning), and enhancement techniques are problem oriented
▪ This makes the evaluation of image enhancement by nature rather subjective; hence, it is difficult to quantify its performance outside its specific domain of application
IMAGE ENHANCEMENT
▪ Some of the reasons for enhancement include:
➢ Highlighting interesting detail in images
➢ Removing noise from images
➢ Making images more visually appealing

IMAGE ENHANCEMENT

▪ Primary condition/policy/rule for image enhancement:
➢ The information that you want to extract,
emphasize or restore must exist in the image
➢ ‘You cannot make something out of nothing’
➢ The desired information must not be totally
swamped by noise within the image

HISTOGRAM
▪ Histograms are the basis for numerous spatial domain processing
techniques
▪ Histogram manipulation can be used for image enhancement
▪ In addition to providing useful image statistics, the information
inherent in histograms also is quite useful in other image
processing applications/domains, such as image compression and
segmentation
▪ Histograms are simple to calculate (in software) and also lend
themselves to economic hardware implementations, thus making
them a popular tool for real-time image processing

HISTOGRAM
▪ The histogram of a digital image with intensity levels in
the range [0, L-1] is a discrete function h (rk) = nk where:
➢ rk is the kth intensity value and
➢ nk is the number of pixels in the image with intensity rk
▪ It is common practice to normalize a histogram by
dividing each of its components by the total number of
pixels in the image, denoted by the product MN where,
➢ M and N are the row and column (dimensions) of the
image

HISTOGRAM
▪ Thus, a normalized histogram is an estimate of the
probability of occurrence of intensity level in an image
▪ The sum of all components of a normalized histogram is
equal to 1
Figure: a low-contrast image, with its global and local histogram equalization results.
HISTOGRAM

▪ For a grayscale image, the histogram can be constructed by simply counting the number of times each grayscale value (0 – 255) occurs within the image
▪ Each bin (of grayscale value) within the histogram is incremented each time its value is encountered; thus an image histogram can easily be constructed as follows:

initialize all histogram array entries to 0
for each pixel I(i, j) within the image I
    histogram(I(i, j)) = histogram(I(i, j)) + 1
end
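The pseudocode above can be written as a short Python sketch (NumPy and the function names here are assumptions, not part of the slides):

```python
import numpy as np

def compute_histogram(image, levels=256):
    """Count how many times each grayscale value occurs in the image."""
    hist = np.zeros(levels, dtype=np.int64)
    for value in image.ravel():   # visit each pixel I(i, j)
        hist[value] += 1          # increment the bin for that grayscale value
    return hist

def normalize_histogram(hist):
    """Divide each bin by MN (the total pixel count) to estimate p(r_k)."""
    return hist / hist.sum()
```

For example, a 2 x 2 image with pixels {0, 0, 1, 255} yields bin counts of 2, 1, and 1, and the normalized bins sum to 1.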
HISTOGRAM

▪ As an introduction to histogram processing for intensity transformations, consider the following figures, which show the pollen image in four basic intensity characteristics: dark, light, low contrast, and high contrast
▪ The right side of the figure shows the histograms
corresponding to these images
▪ The horizontal axis of each histogram plot corresponds to
intensity values, rk
▪ The vertical axis corresponds to values of h(rk) = nk or p(rk)
= nk /MN if the values are normalized
HISTOGRAM

Four basic image types and their corresponding histograms: (a) Dark; (b) Light; (c) Low contrast; and (d) High contrast.
HISTOGRAM
▪ In the dark image, the components of the histogram are concentrated on the low (dark) side of the intensity scale
▪ On the contrary, the components of the histogram of the light image are biased toward the high side of the scale
▪ An image with low contrast has a narrow histogram located typically toward the middle of the intensity scale
▪ For a monochrome image this implies a dull, washed-out grayish look
▪ The components of the histogram in the high-contrast image cover a wide range of the intensity scale and, further, the distribution of pixels is not far from uniform, with very few vertical lines being much higher than the others
HISTOGRAM
▪ Intuitively, it is reasonable to conclude that an image whose pixels tend to occupy the entire range of possible intensity levels and, in addition, tend to be distributed uniformly will have an appearance of high contrast and will exhibit a large variety of gray tones
▪ The net effect will be an image that shows a great deal of gray-
level detail and has high dynamic range
▪ It will be shown shortly that it is possible to develop a
transformation function that can automatically achieve this effect,
based only on information available in the histogram of the input
image

HISTOGRAM EQUALIZATION

▪ Histogram equalization is a technique for adjusting image intensities to enhance contrast
▪ One of the most commonly used image enhancement
techniques
▪ The major benefits of histogram equalization are that:
➢ it is a fully automatic procedure; and
➢ is computationally simple to perform

HISTOGRAM EQUALIZATION
▪ The probability of occurrence of graylevel rk is given by:

pr(rk) = nk / n,    k = 0, 1, 2, …, L-1
▪ The cumulative mapping built from these probabilities is histogram equalization (sometimes referred to as linearization)
▪ A processed (output) image is obtained by mapping each pixel with level rk in the input image into a corresponding pixel with level sk in the output image, where:

sk = T(rk) = (L-1) [pr(r0) + pr(r1) + … + pr(rk)]

➢ nk : number of pixels that have graylevel k
➢ n : total number of pixels in the image
▪ This technique automatically determines a transformation function that seeks to produce an output image that has a uniform histogram
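A minimal sketch of this transformation (the rounding step and function name are implementation choices, not from the slides):

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Map each level rk to sk = (L-1) * [pr(r0) + ... + pr(rk)]."""
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / image.size                 # pr(rk) = nk / n
    cdf = np.cumsum(p)                    # cumulative sum of probabilities
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)
    return mapping[image]                 # apply sk = T(rk) to every pixel
```

For a 2 x 2 image with three pixels at level 0 and one at 255, level 0 maps to round(255 * 0.75) = 191 and level 255 maps to 255, stretching the dark values toward the middle of the scale.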
HISTOGRAM MATCHING (SPECIFICATION)

▪ Sometimes, attempting to base enhancement on a uniform histogram is not the best approach
▪ Therefore, it is useful to be able to specify the shape of the histogram that we wish the processed image to have
▪ The method used to generate a processed image that has a
specified histogram is called histogram matching or
histogram specification

HISTOGRAM MATCHING
▪ Let pr(r) and pz(z) denote the continuous probability density functions of the input and output images
▪ In this notation, r and z denote the graylevels of the input and output (processed) images, respectively
▪ pr(r) can be estimated from the given input image, while pz(z) is the
specified probability density function that we wish the output image to
have
▪ Let s be a random variable with the property of:

s = T(r) = (L-1) ∫[0, r] pr(w) dw        Eq. 1

where w is a dummy variable of the integration
HISTOGRAM MATCHING
▪ Next, we define a random variable z with the property of:

G(z) = (L-1) ∫[0, z] pz(t) dt = s        Eq. 2

where t is a dummy variable of the integration

▪ It then follows from these two equations that G(z) = T(r) and therefore, z must satisfy the condition of:

z = G⁻¹[T(r)] = G⁻¹(s)        Eq. 3
HISTOGRAM MATCHING

▪ An image whose intensity levels have a specified probability density function can be obtained from a given image by using the following procedure:
➢ Obtain the histogram of the input image, pr(r)
➢ Use Eq. 1 to obtain the transformation function T(r)
➢ Use Eq. 2 to obtain the transformation function G(z)
➢ Obtain the inverse transformation function G⁻¹
➢ Obtain the output image by applying Eq. 3 to all the pixels in the input image
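The five steps can be sketched compactly; inverting G with a sorted-array search is an implementation choice (any method that finds the smallest z with G(z) >= T(r) would do):

```python
import numpy as np

def match_histogram(image, target_pdf, levels=256):
    """Histogram specification: z = G^-1(T(r)) for every pixel."""
    # Steps 1-2: histogram of the input image and its transformation T(r)
    p_r = np.bincount(image.ravel(), minlength=levels) / image.size
    T = np.round((levels - 1) * np.cumsum(p_r))
    # Step 3: transformation G(z) built from the specified pdf
    G = np.round((levels - 1) * np.cumsum(target_pdf))
    # Step 4: invert G -- smallest z such that G(z) >= T(r), for each r
    G_inv = np.searchsorted(G, T).astype(np.uint8)
    # Step 5: apply the composed mapping z = G^-1(T(r)) to all pixels
    return G_inv[image]
```

Specifying a uniform target pdf reduces this procedure to histogram equalization.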
COMPARISON BETWEEN HISTOGRAM
EQUALIZATION AND HISTOGRAM MATCHING
▪ The image is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the grayscale

(a) Image of Phobos (Mars moon) taken by NASA’s Mars Global Surveyor.
(b) Histogram of (a).
Histogram equalization

(a) Transformation function for histogram equalization.
(b) Histogram-equalized image (with washed-out appearance).
(c) Histogram of (b).
Histogram matching

(a) Specified histogram.
(b) Transformations.
(c) Enhanced image using mappings from curve (2).
(d) Histogram of (c).
LOCAL HISTOGRAM PROCESSING
▪ The histogram processing methods discussed in the previous two
sections are global, i.e. the pixels are modified by a transformation
function based on the intensity distribution of an entire image
▪ Although this global approach is suitable for overall enhancement,
there are cases in which it is necessary to enhance details over
small areas in an image
▪ The number of pixels in these areas may have negligible influence
on the computation of a global transformation, whose shape does
not necessarily guarantee the desired local enhancement
▪ The solution is to devise (create) transformation functions based on
the intensity distribution in a neighbourhood of every pixel in the
image
ENHANCEMENT VIA IMAGE FILTERING
▪ The removal of noise, the sharpening of image edges and the ‘soft
focus’ (blurring) effect can be achieved through the process of
spatial domain filtering
▪ The name filter is borrowed from frequency domain processing,
where “filtering” refers to accepting (passing) or rejecting certain
frequency components
▪ For example, a filter that passes low frequencies is called a
lowpass filter. The net effect produced by a lowpass filter is to
blur (smooth) an image
▪ We can accomplish a similar smoothing directly on the image itself
by using spatial filters (also called spatial masks, kernels,
templates, or windows)
FILTER
▪ Filters act on an image to change the values of the pixels in some
specified way and are generally classified into two types: linear and
nonlinear
▪ Linear filters are more common, but examples of both kinds will be
discussed later
▪ Irrespective of the particular filter that is used, all approaches to
spatial domain filtering operate in the same simple way
▪ Each of the pixels in an image – the pixel under consideration at a
given moment is termed the target pixel – is successively addressed
▪ The value of the target pixel is then replaced by a new value which
depends only on the value of the pixels in a specified neighbourhood
around the target pixel
THE MECHANICS OF SPATIAL FILTERING
▪ The spatial filter consists of:
➢ Neighborhood - (typically a small rectangle)
➢ Predefined operation - that is performed on the image pixels by the
neighborhood
▪ Filtering creates a new pixel with coordinates equal to the coordinates
of the center of the neighborhood , and whose value is the result of
filtering operation
▪ A processed (filtered) image is generated as the center of the filter
visits each pixel in the input image
▪ Spatial Filtering:
➢ Linear - the operation performed on the image pixels is linear
➢ Nonlinear - the operation performed on the image pixels is nonlinear
THE MECHANICS OF SPATIAL FILTERING
▪ The specific linear combination of the neighbouring pixels that is taken is
determined by the filter kernel (often called a mask)
▪ This is just an array/sub-image of exactly the same size as the neighbourhood, containing the weights that are to be assigned to each of the corresponding pixels in the neighbourhood of the target pixel
▪ Filtering proceeds by successively positioning the kernel so that the location
of its centre pixel coincides with the location of each target pixel, each time
the filtered value being calculated by the chosen weighted combination of
the neighbourhood pixels
▪ This filtering procedure can thus be visualized as sliding the kernel over all locations of interest in the original image I, multiplying the pixels underneath the kernel by the corresponding weights w, summing the products to obtain each new value, and copying the results to the same locations in a new (filtered) image, f
The following figure illustrates the mechanics of linear spatial filtering using a 3 x 3 neighborhood.
THE MECHANICS OF SPATIAL FILTERING
▪ At any point (x, y) in the image, the response, g(x, y), of the filter is the sum of products of the filter coefficients and the image pixels encompassed by the filter:

g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) f(x + s, y + t)
▪ Observe that the center coefficient of the filter, w(0, 0), aligns
with the pixel at location (x, y)
▪ For a mask of this size, we assume that m = 2a +1 and n =
2b+1 where a and b are positive integers
THE MECHANICS OF SPATIAL FILTERING
▪ This means that our focus in the following discussion is on filters of
odd size, with the smallest being of size 3 x 3
▪ In general, linear spatial filtering of an image of size M x N with a filter of size m x n is given by the expression:

g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) f(x + s, y + t)

where x and y are varied so that each pixel in w visits every pixel in f
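The double sum can be sketched directly as nested loops (zero padding at the borders and the function name are assumptions):

```python
import numpy as np

def spatial_filter(f, w):
    """g(x, y) = sum over s, t of w(s, t) * f(x+s, y+t)."""
    m, n = w.shape                      # odd sizes: m = 2a+1, n = 2b+1
    a, b = m // 2, n // 2
    fp = np.pad(f, ((a, a), (b, b)))    # zero-pad so the mask fits at borders
    g = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            # sum of products of the coefficients and the pixels under them
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return g
```

Filtering a discrete unit impulse with this routine returns the mask rotated by 180°, which is exactly the correlation behaviour discussed in the next section.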

SPATIAL CORRELATION AND
CONVOLUTION
▪ Two closely related concepts that must be
understood clearly when performing linear spatial
filtering are correlation and convolution
▪ Correlation is the process of moving a filter mask
over the image and computing the sum of products
at each location
▪ The mechanics of convolution are the same, except
that the filter is first rotated by 180°

The following figure illustrates 1-D correlation and convolution of a filter with a discrete unit impulse (described in the next slide).
▪ Figure (a) shows a 1-D function, f, and a filter, w, and Fig. (b) shows the starting position to perform correlation
▪ The first thing we note is that there are parts of the functions that do not overlap
▪ The solution to this problem is to pad f with enough 0s on either side to allow each pixel in w to visit every pixel in f
▪ If the filter is of size m, we need m-1 0s on either side of f
▪ Figure (c) shows a properly padded function
▪ The first value of correlation is the sum of products of f and w for the initial position shown in Fig. (c) (the sum of products is 0)
▪ This corresponds to a displacement x = 0. To obtain the second value of correlation, we shift one pixel location to the right (a displacement of x = 1) and compute the sum of products
▪ The result again is 0
▪ In fact, the first nonzero result occurs at the displacement where the 8 in w overlaps the 1 in f, and the result of correlation is 8
▪ Proceeding in this manner, we obtain the full correlation result in Fig. (g)
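The same walkthrough can be reproduced in code; the filter w = [1, 2, 3, 2, 8] and the impulse position below are assumptions chosen so that, as described, the early products are 0 until the 8 in w overlaps the 1 in f:

```python
import numpy as np

def correlate_1d(f, w):
    """Pad f with m-1 zeros on each side, then take sums of products."""
    m = len(w)
    fp = np.pad(f, m - 1)
    return np.array([np.sum(w * fp[x:x + m])
                     for x in range(len(fp) - m + 1)])

def convolve_1d(f, w):
    """Same mechanics, but the filter is first rotated by 180 degrees."""
    return correlate_1d(f, w[::-1])
```

Correlating a unit impulse yields a 180°-rotated copy of w, while convolution reproduces w itself, which is the key distinction made above.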
Correlation (middle row) and convolution (last row) of a 2-D filter with a 2-D discrete unit impulse. The 0s are shown in gray to simplify visual analysis. (Described in the next slide.)
▪ The preceding concepts extend easily to images, as the figure above demonstrates
▪ For a filter of size m x n, we pad the image with a minimum of m - 1 rows of 0s at the top and bottom and n - 1 columns of 0s on the left and right
▪ In this case, m and n are equal to 3, so we pad f with two rows of 0s above and below and two columns of 0s to the left and right, as Figure (b) illustrates
▪ Figure (c) displays the initial position of the filter mask for performing correlation, and Figure (d) shows the full correlation result
▪ Figure (e) shows the corresponding cropped result. Note again that the result is rotated by 180°
▪ For convolution, we pre-rotate the mask as before and repeat the sliding sum of products just explained
▪ Figures (f) through (h) show the result
SPATIAL CORRELATION AND
CONVOLUTION
▪ The correlation of a filter w(x, y) of size m x n with an image f(x, y) is given by the following equation:

w(x, y) ⋆ f(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) f(x + s, y + t)        Eq. 4

▪ This equation is evaluated for all values of the displacement variables x and y so that all elements of w visit every pixel in f, where we assume that f has been padded appropriately
▪ As explained earlier, a = (m - 1)/2 and b = (n - 1)/2, and we assume for notational convenience that m and n are odd integers

SPATIAL CORRELATION AND
CONVOLUTION
▪ The convolution of w(x, y) and f(x, y) is given by the expression:

w(x, y) ∗ f(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) f(x - s, y - t)        Eq. 5

where the minus signs on the right flip f (i.e., rotate it by 180°). Flipping and shifting f instead of w is done for notational simplicity and also to follow convention.
SMOOTHING SPATIAL FILTERS
▪ Smoothing filters are used for blurring and for
noise reduction
▪ Blurring is used in pre-processing tasks, such as
removal of small details from an image prior to
(large) object extraction, and bridging of small
gaps in lines or curves
▪ Noise reduction can be accomplished by blurring
with a linear filter and also by nonlinear filtering

SMOOTHING LINEAR FILTERS
▪ The output (response) of a smoothing, linear spatial filter is simply the average of the pixels contained in the neighbourhood of the filter mask
▪ These filters sometimes are called averaging/mean filters
▪ They also are referred to as low pass filters
▪ The idea behind smoothing filters is straightforward:
➢ Replacing the value of every pixel in an image by the average of the intensity levels in the neighbourhood defined by the filter mask results in an image with reduced ‘sharp’ transitions in intensities
SMOOTHING LINEAR FILTERS
▪ Since random noise typically consists of sharp transitions in intensity
levels, the most obvious application of smoothing is noise reduction
▪ However, edges (which almost always are desirable features of an
image) also are characterized by sharp intensity transitions, so
averaging filters have the undesirable side effect that they blur
edges too
▪ Another application of this type of process includes the smoothing of
false contours that result from using an insufficient number of
intensity levels
▪ A major use of averaging filters is in the reduction of ‘irrelevant’
detail in an image
➢ ‘Irrelevant’ means pixel regions that are small with respect to the
size of the filter mask
SMOOTHING LINEAR FILTERS
▪ Two examples of widely used smoothing filters: the first yields the standard average of the pixels under the mask, while the second yields a weighted average, giving more importance (weight) to some pixels at the expense of others
SMOOTHING LINEAR FILTERS
▪ First mask – Figure 3.32(a):
➢ Instead of being 1/9 , the coefficients of the filter are all 1s
➢ The idea here is that it is computationally more efficient to
have coefficients valued 1
➢ At the end of the filtering process the entire image is divided
by 9
➢ An m x n mask would have a normalizing constant equal to
1/mn
➢ A spatial averaging filter in which all coefficients are equal
sometimes is called a box filter
SMOOTHING LINEAR FILTERS
▪ The second mask – Figure 3.32(b):
➢ This mask yields a so-called weighted average, terminology used to
indicate that pixels are multiplied by different coefficients
❖ giving more importance (weight) to some pixels at the expense of
others
➢ The pixel at the center of the mask is multiplied by a higher value than
any other
❖ giving this pixel more importance in the calculation of the average
➢ The other pixels are inversely weighted as a function of their distance
from the center of the mask
➢ The diagonal values are further away from the center than the orthogonal neighbours (by a factor of √2) and are weighted less than the immediate neighbours of the center pixel
SMOOTHING LINEAR FILTERS
▪ A low pass filter allows low spatial frequencies to pass unchanged, but suppresses high frequencies
▪ The low pass filter smoothens or blurs the image
➢ This tends to reduce noise, but also obscures fine detail
▪ The basic strategy behind weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process
▪ We could have chosen other weights to accomplish the same general objective. However, the sum of all the coefficients in the mask of Fig. 3.32(b) is equal to 16, an attractive feature for computer implementation because it is an integer power of 2
SMOOTHING LINEAR FILTERS
▪ The following shows a 3 x 3 kernel for performing a low-pass filter
operation. Each element in the kernel has a value of 1. The output pixel is
just the simple average of the input neighborhood pixels
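A sketch of this 3 x 3 box filter (replicating the border pixels when padding is our choice; any of the boundary strategies discussed later would also work):

```python
import numpy as np

def box_filter_3x3(image):
    """Replace each pixel by the simple average of its 3 x 3 neighbourhood."""
    fp = np.pad(image.astype(float), 1, mode='edge')  # replicate borders
    out = np.zeros(image.shape)
    for x in range(image.shape[0]):
        for y in range(image.shape[1]):
            # all-ones kernel: sum the window, divide by 9 at the end
            out[x, y] = fp[x:x + 3, y:y + 3].sum() / 9
    return out
```

A constant image passes through unchanged, while an isolated bright pixel is spread across its neighbourhood and reduced to one ninth of its value.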

SMOOTHING LINEAR FILTERS
▪ One of the primary uses of both linear and nonlinear
filtering in image enhancement is for noise removal

(a) Original image; (b) With additional ‘salt and pepper’ noise; and (c) Additional Gaussian noise.
SMOOTHING LINEAR FILTERS

Application of mean filter (3 x 3) to the: (a) Original; (b) Original image with ‘salt and pepper’ noise; and (c) Gaussian noise.
SMOOTHING LINEAR FILTERS

▪ We can see that the mean filtering is reasonably effective at removing the Gaussian noise (Figure c), but at the expense of a loss of high-frequency image detail (i.e. edges)
▪ Although a significant portion of the Gaussian noise has
been removed (after implementation of mean filtering), it
(noise) is still visible within the image
▪ Larger kernel sizes will further suppress the Gaussian noise
but will result in further degradation of image quality

SMOOTHING LINEAR FILTERS
Effect of filter size: original image, and mean-filtered results with 7 x 7, 15 x 15, and 41 x 41 kernels.

▪ It is also apparent that mean filtering is not effective for the removal of ‘salt and pepper’ noise
▪ In the case of ‘salt and pepper’ noise, the noisy high/low pixel values act as outliers in the distribution
▪ For this reason, ‘salt and pepper’ noise is best dealt with using a measure that is robust to statistical outliers
SMOOTHING LINEAR FILTERS
▪ In summary, the main drawbacks of smoothing linear filtering are:
➢ It is not robust to large noise deviations in the image (outliers)
➢ When the mean filter straddles an edge in the image, it will
cause blurring
▪ Both problems are tackled by the median filter, which is often a
better filter for reducing noise than the mean filter, but it takes
longer (time) to compute
▪ In general, the mean filter acts as a lowpass frequency filter and
therefore, reduces the spatial intensity derivatives present in the
image

ORDER-STATISTIC FILTERS
▪ Order-statistic filters are nonlinear spatial filters
▪ The response is based on ordering (ranking) the pixels
contained in the image area encompassed by the filter, and
then replacing the value of the center pixel with the value
determined by the ranking result
▪ The best-known filter in this category is the median filter
➢ As its name implies, it replaces the value of a pixel by the
median of the intensity values in the neighborhood of that
pixel (the original value of the pixel is included in the
computation of the median)

ORDER-STATISTIC FILTERS
▪ Median filters are traditionally quite popular
➢ They provide excellent noise-reduction capabilities, with
considerably less blurring than linear smoothing filters of
similar size
▪ Median filters are particularly effective in the presence of
impulse noise, also called salt-and-pepper noise (because of its
appearance as white and black dots superimposed on an
image)
▪ Median filtering overcomes the main limitations of the mean
filter, at the expense of greater computational cost

MEDIAN FILTERING
▪ As each pixel is addressed, it is replaced by the statistical
median of its M x N neighbourhood rather than the mean
▪ The median filter is superior to the mean filter in that it is better
at preserving sharp high-frequency detail (i.e. edges) whilst also
eliminating noise, especially isolated noise spikes (such as ‘salt
and pepper’ noise)

MEDIAN FILTERING
▪ The median m of a set of numbers is that number for which half of
the numbers are less than m and half are greater
➢ It is the midpoint of the sorted distribution of values
▪ As the median is a pixel value drawn from the pixel neighbourhood
itself, it is more robust to outliers and does not create a new
unrealistic pixel value
▪ This helps in preventing edge blurring and loss of image detail
▪ By definition, the median operator requires an ordering of the values
in the pixel neighbourhood at every pixel location
▪ This increases the computational requirement of the median operator

MEDIAN FILTERING
▪ The median filter considers each pixel in the image in turn and
looks at its nearby neighbors to decide whether or not it is
representative of its surroundings
▪ Instead of simply replacing the pixel value with the mean of
neighboring pixel values, it replaces it with the median of those
values
▪ The median is calculated by first sorting all the pixel values from
the surrounding neighborhood into numerical order and then
replacing the pixel being considered with the middle pixel
value. (If the neighborhood under consideration contains an even
number of pixels, the average of the two middle pixel values is
used.)
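The procedure just described can be sketched for a 3 x 3 neighbourhood (border replication is an assumption):

```python
import numpy as np

def median_filter_3x3(image):
    """Replace each pixel with the median of its 3 x 3 neighbourhood."""
    fp = np.pad(image, 1, mode='edge')      # replicate the border pixels
    out = np.empty_like(image)
    for x in range(image.shape[0]):
        for y in range(image.shape[1]):
            window = fp[x:x + 3, y:y + 3].ravel()
            out[x, y] = np.median(window)   # sort, take the middle value
    return out
```

An isolated ‘salt’ pixel is discarded entirely: the median of eight background values and one outlier is the background value.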
MEDIAN FILTERING
Calculating the median value of a pixel neighborhood. As can be seen, the
central pixel value of 150 is rather unrepresentative of the surrounding
pixels and is replaced with the median value, which is 124. A 3×3 square
neighborhood is used here. Larger neighborhoods will produce more severe
smoothing.

MEDIAN FILTERING EXAMPLE
▪ The following example shows the application of a median filter to a
simple one dimensional signal
▪ A window size of three is used, with one entry immediately
preceding and following each entry

For y[1] and y[10], the left-most or right-most value is extended (repeated) outside the boundaries of the signal; this is equivalent to leaving the left-most and right-most values unchanged after 1-D median filtering.
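This boundary-extension scheme can be sketched as follows (the signal values in the usage example are our own, not the slide's):

```python
import numpy as np

def median_filter_1d(signal, size=3):
    """1-D median filter; the first and last entries are repeated at the ends."""
    k = size // 2
    padded = np.pad(signal, k, mode='edge')   # repeat the boundary values
    return np.array([np.median(padded[i:i + size])
                     for i in range(len(signal))])
```

For example, filtering [2, 80, 6, 3] with a window of three gives [2, 6, 6, 3]: the isolated spike of 80 is removed.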
MEDIAN FILTERING
▪ In the previous example, because there is no entry preceding the first value, the first value is repeated (as is the last value) to obtain enough entries to fill the window
▪ What effect does this have on the boundary values?
▪ There are other approaches with different properties that might be preferred in particular circumstances, such as:
➢ Avoid processing the boundaries, with or without cropping the signal or image boundary afterwards
➢ Fetching entries from other places in the signal. With images, for example, entries from the far horizontal or vertical boundary might be selected
➢ Shrinking the window near the boundaries, so that every window is full
▪ What effects might these approaches have on the boundary values?

MEDIAN FILTERING
▪ On the left is an image containing a significant amount of salt and pepper noise. On the right is the same image after processing with a median filter.

▪ Notice the well-preserved edges in the output image. There is some remaining noise on the boundary of the image. Why is this?
MEDIAN FILTERING EXAMPLE 2
2D Median filtering example using a 3 x 3 sampling window:

MEDIAN FILTERING - BOUNDARIES
2D Median filtering example using a 3 x 3 sampling window:

MEDIAN FILTERING

Input image - (a) Original; (b) ‘Salt and pepper’ noise; and (c) Gaussian noise.

Output image - Median filter (3 x 3) applied to: (a) Original; (b) ‘Salt and pepper’ noise; and (c) Gaussian noise.
MEDIAN FILTERING
▪ By calculating the median value of a neighbourhood rather than the mean, the median filter has two main advantages over the mean filter:
➢ The median is a more robust average than the mean: a single, very unrepresentative pixel in a neighbourhood will not affect the median value significantly
➢ Since the median value must actually be the value of one of the pixels in the neighbourhood, the median filter does not create new, unrealistic pixel values when the filter straddles an edge. For this reason the median filter is much better at preserving sharp edges than the mean filter

FURTHER INFORMATION

What is Image Enhancement?
https://www.youtube.com/watch?v=-FqfaVkIORI

How image enhancement works
https://www.youtube.com/watch?v=XT_rMDMDEvo

Introduction to image enhancement
https://www.youtube.com/watch?v=moT1KzdVR-A
THAT’S ALL!
END OF CHAPTER 3
(PART 1)
