The document discusses histogram equalization and linear filters for image processing. It introduces histogram equalization as a method to enhance image contrast by transforming the histogram to a uniform distribution. It then discusses linear filters, describing them as matrices that are convolved with an image to compute weighted averages and filter the image. Specific linear filters discussed include mean filters for smoothing and approximations of derivatives like the Laplacian filter for edge detection.
The histogram of the values of a grayscale image helps us understand the contrast of the image. The goal of histogram equalization is to enhance the image contrast by transforming the histogram so that it matches a uniform (flat) distribution.
Matlab has built-in functions for histogram equalization (histeq) and for displaying image histograms (imhist). The code below loads the built-in image pout.tif and displays its histogram.

    A = imread('pout.tif');
    subplot(211); imshow(A);
    subplot(212); imhist(A);

Use the histeq function to perform histogram equalization on the pout.tif image. Display the original image next to the equalized image, with the corresponding histograms below. Use the subplot command to put all 4 pictures into one figure.
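A minimal sketch of one possible layout is a 2x2 grid of subplots; the titles are just labels added here for readability.

    A = imread('pout.tif');
    B = histeq(A);                       % histogram-equalized version of A
    subplot(221); imshow(A);   title('original');
    subplot(222); imshow(B);   title('equalized');
    subplot(223); imhist(A);   title('original histogram');
    subplot(224); imhist(B);   title('equalized histogram');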
2.) Thought Exercises
a.) When checking the contrast of an image, it is important to use the imshow command and not the imagesc command. Why?
b.) In theory, the histogram equalization transfer function should produce a uniform histogram. But in practice, the histogram of the equalized image is not exactly flat. Why?
PCMI Undergraduate Summer School
Discussion 2: Linear Filters
Luminita Vese & Todd Wittman
A linear filter is an MxN matrix W, with M and N odd, that defines a set of weights. To apply the filter W to an image f, at each pixel (x,y) we center the window W over that pixel. Then, looking at the neighborhood defined by the overlap, we compute the weighted average defined by the weights of W. This weighted average becomes the new value g(x,y) of the filtered image. Doing this at every pixel, we obtain a new image g of the same size as the original image f. As an example, take a 3x3 window W centered on the pixel marked f_13, where the pixel values are numbered row by row; the new value g_13 is
    g_13 = w_1 f_7 + w_2 f_8 + w_3 f_9 + w_4 f_12 + w_5 f_13 + w_6 f_14 + w_7 f_17 + w_8 f_18 + w_9 f_19
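As a rough sketch (not part of the original handout), this per-pixel weighted average can be written directly with loops over the interior pixels; here f is any grayscale image converted to double and W is any 3x3 filter.

    f = double(imread('pout.tif'));    % any grayscale image, converted to double
    W = ones(3,3) / 9;                 % example weights: the 3x3 mean filter
    [M,N] = size(f);
    g = zeros(M,N);
    for x = 2:M-1
        for y = 2:N-1
            patch = f(x-1:x+1, y-1:y+1);    % 3x3 neighborhood centered at (x,y)
            g(x,y) = sum(sum(W .* patch));  % weighted average defined by W
        end
    end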
We keep sliding the window W around the image, computing the weighted average at each pixel. Applying the filter at the image boundaries is a little tricky, since the window W will overlap the outside of the image domain. Some people like to pretend that f = 0 outside the image (zero padding). Another approach is to copy the nearest pixel value of f (replicate boundary conditions) or to mirror the image across its boundary (reflective boundary conditions).
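A short sketch of two of these choices, assuming the Image Processing Toolbox function imfilter is available (conv2 always uses zero padding):

    f = double(imread('pout.tif'));
    W = ones(3,3) / 9;                     % example filter: the 3x3 mean filter
    g_zero = conv2(f, W, 'same');          % pretends f = 0 outside the image
    g_near = imfilter(f, W, 'replicate');  % copies the nearest pixel value instead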
Generally, we want the weights of our filter matrix W to sum to one. Otherwise, the filter will alter the overall contrast of the image (making it lighter or darker).
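As a quick illustration (a sketch, not from the handout), a filter whose weights sum to 9 brightens the image by roughly a factor of 9:

    f = double(imread('pout.tif'));
    W = ones(3,3);                  % weights sum to 9, not 1
    g = conv2(f, W, 'same');
    [mean(f(:))  mean(g(:))]        % the filtered image is roughly 9 times brighter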
1.) Smoothing Linear Filters
The mean filter (also called the box filter) is an NxN matrix with equal weights, with the total weight of the matrix summing to one. The 3x3 mean filter looks like

    W = (1/9) * [ 1 1 1
                  1 1 1
                  1 1 1 ]
To create the mean filter above in Matlab, we can use the ones command, which generates a matrix of all 1's of the specified size.

    W = 1/9 * ones(3,3)

To apply this filter to the image cameraman.tif, type

    f = imread('cameraman.tif');
    f = double(f);
    g = conv2(f,W,'same');

The Matlab command conv2 performs a 2D convolution, a concept we will learn about in lecture soon. For now, think of "convolution" as the process of applying a linear filter to a given image.
Try applying a 3x3, 5x5, and 7x7 mean filter to the cameraman image. What happens as the filter size increases?
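A minimal sketch of one way to run this experiment (the display choices are just one option):

    f = double(imread('cameraman.tif'));
    for n = [3 5 7]
        W = ones(n,n) / n^2;        % n-by-n mean filter, weights sum to 1
        g = conv2(f, W, 'same');
        figure;  imshow(uint8(g));
        title(sprintf('%d x %d mean filter', n, n));
    end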
The mean filter is an example of a smoothing operation. To see this effect, let's add some random Gaussian noise to the image using the imnoise function. Unlike most Matlab operations, imnoise wants an 8-bit image as input.

    f = uint8(f);
    f2 = imnoise(f,'gaussian');

You should see that the image f2 now contains noise points. What happens to the noise points after you apply the mean filter?
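A possible sketch of the whole experiment, assuming the 3x3 mean filter W from above:

    f  = imread('cameraman.tif');        % uint8, as imnoise expects here
    f2 = imnoise(f, 'gaussian');         % add zero-mean Gaussian noise
    W  = ones(3,3) / 9;
    g  = conv2(double(f2), W, 'same');   % smooth the noisy image
    subplot(121); imshow(f2);         title('noisy');
    subplot(122); imshow(uint8(g));   title('after the 3x3 mean filter');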
2.) Other Linear Filters
Try applying the following 3x3 linear filters to the image cameraman.tif. For each filter, qualitatively describe the effect.
3.) Filters as Derivatives
Recall that the definition of the one-dimensional derivative is

    f'(x) = lim_{h -> 0} [ f(x+h) - f(x) ] / h
But on images we are restricted to the pixel size, so h = 1 pixel is the smallest step we can take. So one approximation of the derivative in the x-direction is given by the forward difference f_x = f(x+1,y) - f(x,y). We can compute this derivative at each point in the image by applying a linear filter that puts weight -1 on the center pixel and weight +1 on its neighbor in the x-direction.
Try applying this filter to the cameraman image. The value of f_x should be large wherever there is a jump in the x-direction. In other words, this filter should pop out the vertical edges in the image.
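As a rough sketch (assuming x is the horizontal direction, as the vertical-edge remark suggests), one way to build and apply this filter is shown below. Note that imfilter applies the weights exactly as written, while conv2 flips the filter before applying it, which would flip the sign of the difference; for the symmetric mean filter earlier, the flip made no difference.

    f  = double(imread('cameraman.tif'));
    Wx = [0 -1 1];              % -1 on the center pixel, +1 on its neighbor to the right
    fx = imfilter(f, Wx);       % imfilter applies the weights as written (no flip)
    imshow(abs(fx), []);        % bright wherever the image jumps horizontally: vertical edges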
Alternatively, we could use the backward difference or the central difference to approximate the derivative:

    f_x = f(x,y) - f(x-1,y)                 (backward difference)
    f_x = [ f(x+1,y) - f(x-1,y) ] / 2       (central difference)
To approximate the second derivative f_xx we use the filter

    [ 1  -2  1 ]

which computes f_xx = f(x+1,y) - 2 f(x,y) + f(x-1,y).
(One of your homework problems is to prove that this filter approximates the second derivative using a Taylor expansion.)
The filters for approximating derivatives in the y-direction are similar. Putting together the filters for f_xx and f_yy, write a linear filter that approximates the Laplacian f_xx + f_yy. Apply the Laplacian filter to the cameraman image. You should find that the Laplacian is an edge detector.
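As a hedged sketch, one common choice is the 5-point stencil obtained by combining the [ 1 -2 1 ] filters in the x- and y-directions; since this filter is symmetric, the flip in conv2 does not matter here.

    f = double(imread('cameraman.tif'));
    L = [0 1 0; 1 -4 1; 0 1 0];   % f_xx + f_yy combined into one 3x3 filter
    g = conv2(f, L, 'same');
    imshow(abs(g), []);           % large response along the edges of the image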