Computer Vision Course Lecture 2

Image processing involves changing images to either improve them for human interpretation or make them suitable for machine perception. It includes operations like sharpening edges, removing noise or blur, and extracting edges. Images can be binary, grayscale, or color (RGB). Digital images are arrays of pixel intensities represented as numbers. Common image processing techniques include arithmetic operations, complements, and histograms.


Computer Vision

Lecture (2)
Image Processing
What is an image?
• An image is a single picture which represents something.
• It may be a picture of a person, of people or animals, or of an
outdoor scene.

Digital Camera
What is Image Processing?

• Image processing involves changing the nature of an image in order to either:
1. improve its pictorial information for human interpretation, or
2. render it more suitable for autonomous machine perception.
• Humans like their images to be sharp, clear and detailed.
• Machines prefer their images to be simple and uncluttered.
Examples of (1)

• Enhancing the edges of an image to make it appear sharper.
Examples of (1)

• Removing noise from an image.
• Noise consists of random errors in the image.
Examples of (1)
• Removing motion blur from an image.
Examples of (2)

• Obtaining the edges of an image.
• This may be necessary for the measurement of objects in an image.
Examples of (2)

• Removing detail from an image for measurement or counting purposes.
• We could measure the size and shape of the animal without being distracted by unnecessary detail.
Why study Image Processing?

• The first stage in most computer vision applications is the use of image processing to preprocess the image and convert it into a form suitable for further analysis.
• While some may consider image processing to be outside the purview of computer vision, most computer vision applications, such as computational photography and even recognition, require care in designing the image processing stages in order to achieve acceptable results.
Types of Images

• There are three basic types of images:

1. Binary images.
2. Grayscale images.
3. Color images.
Binary images
• Each pixel is just black or white.
• Since there are only two possible values for each pixel, we only
need one bit per pixel (0 for black and 1 for white).
Grayscale images
• Each pixel can be represented by exactly one byte (8 bits).
• Each pixel is a shade of grey, normally from 0 (black) to 255
(white).
Color images (RGB)
• Each pixel has a particular color; that color being described by the
amount of red, green and blue in it.
• If each of these components has a range from 0 to 255, this gives a total of 256³ = 16,777,216 different possible colors in the image.
• Since the total number of bits required for each pixel is 3 × 8 = 24,
such images are also called 24-bit color images.
• Such an image may be considered as consisting of a stack of three
matrices; representing the red, green and blue values for each pixel.
• This means that for every pixel there correspond three values.
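As a sketch of this representation (the pixel values below are made up for illustration), a 24-bit color image can be built in NumPy as a stack of three matrices:

```python
import numpy as np

# A tiny 2x2 24-bit color image: three matrices (red, green, blue),
# one byte per channel per pixel. Pixel values are illustrative.
red   = np.array([[255,   0], [  0, 255]], dtype=np.uint8)
green = np.array([[  0, 255], [  0, 255]], dtype=np.uint8)
blue  = np.array([[  0,   0], [255, 255]], dtype=np.uint8)

# Stack the three matrices: every pixel now has three values (R, G, B).
rgb = np.dstack([red, green, blue])    # shape (2, 2, 3)

# 3 channels x 8 bits = 24 bits per pixel, so 256**3 possible colors.
n_colors = 256 ** 3                    # 16,777,216
```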
Color images (RGB)
Digital Image

• An image can be considered as a two-dimensional function, where the function values give the brightness of the image at any given point.
• A digital image can be considered as a large array of sampled
points, each of which has a particular quantized brightness;
these points are the pixels which constitute the digital image.
• The pixels surrounding a given pixel constitute its neighborhood.
Digital Image

• Two important terms for digital (discrete) images:
➢ Sampling means converting the continuous 2D space into a regular grid of points.
➢ Quantization means rounding each sampled value to the nearest integer.
Digital Image

• A grid (matrix) of intensity values. For example:

255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255
255 255 255  20   0 255 255 255 255 255 255 255
255 255 255  75  75  75 255 255 255 255 255 255
255 255  75  95  95  75 255 255 255 255 255 255
255 255  96 127 145 175 255 255 255 255 255 255
255 255 127 145 175 175 175 255 255 255 255 255
255 255 127 145 200 200 175 175  95 255 255 255
255 255 127 145 200 200 175 175  95  47 255 255
255 255 127 145 145 175 127 127  95  47 255 255
255 255  74 127 127 127  95  95  95  47 255 255
255 255 255  74  74  74  74  74  74 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255 255 255 255 255

(common to use one byte per value: 0 = black, 255 = white)


Images as functions

• A grayscale image can be considered as a function from 𝑅² to 𝑅.
• 𝑓(𝑥, 𝑦) gives the intensity at position (𝑥, 𝑦).
• Pixel value (or intensity): [0, 255].
Images as functions

• A color image is just three functions pasted together.
• We can write this as a “vector-valued” function:

f(x, y) = [ r(x, y), g(x, y), b(x, y) ]
Image transformation

• Image processing operations are divided into three classes based on the information required to perform the transformation:
1. Point operations: a pixel's gray value is changed without any knowledge of its surroundings.
2. Neighborhood processing: to change the gray level of a given pixel, we need to know the gray levels in a small neighborhood of pixels around it.
3. Transforms: the entire image is processed as a single large block.
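As a rough sketch of the three classes (the tiny image and the particular operations below are illustrative assumptions, not from the lecture):

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16  # tiny sample image

# 1. Point operation: each output pixel depends only on the same input pixel.
brighter = np.clip(img.astype(np.int16) + 50, 0, 255).astype(np.uint8)

# 2. Neighborhood processing: each output pixel depends on a small window
#    around it (here, a 3x3 mean over edge-padded pixels).
padded = np.pad(img.astype(np.float64), 1, mode="edge")
smoothed = np.zeros((4, 4))
for dy in (0, 1, 2):
    for dx in (0, 1, 2):
        smoothed += padded[dy:dy + 4, dx:dx + 4]
smoothed = (smoothed / 9).astype(np.uint8)

# 3. Transform: the entire image is processed as one block (here, a 2-D FFT).
spectrum = np.fft.fft2(img)
```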
Point Processing

• The simplest kinds of image processing transforms are point operations, where each output pixel's value depends on only the corresponding input pixel value.
• Although point operations are the simplest, they contain some of the most powerful and widely used of all image processing operations.
• They are especially useful in image pre-processing, where an image is required to be modified before the main job is attempted.
Arithmetic operations of grayscale images

• These operations act by applying a simple function y = f(x) to each gray value in the image.
• Thus, f(x) is a function whose values must lie in the range 0 to 255.
• Simple functions include adding or subtracting a constant value to each pixel:
y = x ± C
• or multiplying each pixel by a constant:
y = Cx
Arithmetic operations of grayscale images

• In each case, we may have to vary the output slightly in order to ensure that the results are integers in the range 0 to 255.
• We can do this by clipping the values:

y ← 255 if y > 255,
y ← 0 if y < 0.
Arithmetic operations of grayscale images

• Thus, when adding 128, all gray values of 127 or greater will be
mapped to 255.
• And when subtracting 128, all gray values of 128 or less will be
mapped to 0.
• In general, adding a constant will lighten an image, and
subtracting a constant will darken it.
Arithmetic operations of grayscale images

g(x, y) = f(x, y) + 20        g(x, y) = f(−x, y)


Arithmetic operations of grayscale images

• Lightening or darkening of an image can be performed by multiplication.
• Note that b3, although darker than the original, is still quite clear, whereas a lot of information has been lost by the subtraction process, as can be seen in image b2.
• This is because in image b2 all pixels with gray values of 128 or less have become zero.
Arithmetic operations of grayscale images
Complements

• The complement of a grayscale image is its photographic negative: each gray value x is replaced by 255 − x.
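In code, the complement is a one-line point operation (a sketch; the sample values are illustrative):

```python
import numpy as np

def complement(img):
    """Photographic negative: map each gray value x to 255 - x."""
    return (255 - img).astype(np.uint8)

img = np.array([[0, 100, 255]], dtype=np.uint8)
neg = complement(img)    # -> [[255, 155, 0]]
```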
Complements

• Interesting special effects can be obtained by complementing only part of the image. For example:
➢ taking the complement of pixels with gray value 128 or less, and leaving other pixels untouched;
➢ or taking the complement of pixels with gray value 128 or greater, and leaving other pixels untouched.
• The effect of these functions is called solarization.
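A sketch of the first solarization variant (complementing only pixels at or below 128; the function name and sample values are illustrative):

```python
import numpy as np

def solarize(img, threshold=128):
    """Complement only the pixels whose gray value is <= threshold,
    leaving the others untouched."""
    out = img.copy()
    mask = img <= threshold
    out[mask] = 255 - out[mask]
    return out

img = np.array([[0, 100, 200, 255]], dtype=np.uint8)
sol = solarize(img)    # low values are flipped, high values untouched
```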
Complements
Histograms
• Given a grayscale image, its histogram consists of the histogram of its gray
levels; that is, a graph indicating the number of times each gray level
occurs in the image.
• We can infer a great deal about the appearance of an image from its
histogram, as the following examples indicate:
➢ In a dark image, the gray levels (and hence the histogram) would be
clustered at the lower (left) end.
➢ In a uniformly bright image, the gray levels would be clustered at the
upper (right) end.
➢ In a well contrasted image, the gray levels would be well spread out
over much of the range.
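Computing a histogram is a single counting pass over the pixels. A sketch in NumPy (the sample image values are illustrative):

```python
import numpy as np

img = np.array([[0, 0, 100, 100],
                [100, 200, 200, 255]], dtype=np.uint8)

# Count how many times each gray level 0..255 occurs in the image.
hist = np.bincount(img.ravel(), minlength=256)
```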
Histograms
Histograms

• From the result shown in the previous figure, and since the gray values are all clustered together in the center of the histogram, we would expect the image to be poorly contrasted, as indeed it is.
• Given a poorly contrasted image, we would like to enhance its contrast by spreading out its histogram. There are two ways of doing this:
1. Histogram stretching (contrast stretching).
2. Histogram equalization.
Histogram stretching
• Suppose a 4-bit grayscale image has the histogram shown in the next figure, associated with a table of the numbers nᵢ of gray values (n = 360).
• We can stretch the gray levels in the center of the range by applying the linear function shown at the right in the same figure. This function stretches gray levels 5–9 to gray levels 2–14 according to the equation:

j = ((14 − 2) / (9 − 5)) (i − 5) + 2
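The stretch equation can be sketched directly in code (a minimal sketch for this 4-bit example; gray levels outside 5–9 are left unchanged here):

```python
import numpy as np

def stretch(img, i_lo=5, i_hi=9, j_lo=2, j_hi=14):
    """Linear stretch: map gray levels i_lo..i_hi onto j_lo..j_hi,
    leaving values outside that range unchanged."""
    out = img.astype(np.float64)
    mask = (img >= i_lo) & (img <= i_hi)
    out[mask] = (j_hi - j_lo) / (i_hi - i_lo) * (out[mask] - i_lo) + j_lo
    return np.round(out).astype(img.dtype)

levels = np.array([5, 6, 7, 8, 9, 3, 12], dtype=np.uint8)
stretched = stretch(levels)    # 5..9 -> 2, 5, 8, 11, 14; 3 and 12 unchanged
```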
Histogram stretching

• where i is the original gray level and j is its result after the transformation.
• Gray levels outside this range are either left alone (as in this case) or transformed according to the linear functions at the ends of the graph above. This yields:
Histogram stretching
• And the corresponding histogram indicates an image with greater
contrast than the original:
Histogram stretching
Histogram equalization

• The trouble with histogram stretching is that it requires user input.
• Sometimes a better approach is provided by histogram equalization, which is an entirely automatic procedure.
• Suppose a 4-bit grayscale image has the histogram shown in the next figure, associated with a table of the numbers nᵢ of gray values (n = 360).
Histogram equalization

• We would expect this image to be uniformly bright, with a few dark dots on it.
• To equalize this histogram, we form running totals of the nᵢ and multiply each by 15/360 = 1/24.
• Here 15 is 2⁴ − 1, while 360 is the total number of pixels in the image.
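The running-total procedure can be sketched as follows (a minimal sketch for a 4-bit image; the sample pixel values are illustrative, not the lecture's 360-pixel example):

```python
import numpy as np

def equalize(img, levels=16):
    """Histogram equalization for gray levels 0..levels-1: form running
    totals of the counts n_i and scale them by (levels - 1) / n."""
    n = img.size
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist)                    # running totals of the n_i
    lut = np.round(cdf * (levels - 1) / n)   # scale by (levels - 1) / n
    return lut.astype(img.dtype)[img]

img = np.array([[15, 15, 15, 14],
                [15, 15, 2, 0]], dtype=np.uint8)
eq = equalize(img)    # the mostly-bright image spreads over the full range
```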
Histogram equalization

• We now have the following transformation of gray values, obtained by reading the first and last columns in the above table:
• The histogram of the j values is shown in the next figure.
• This is far more spread out than the original histogram, and so the resulting image should exhibit greater contrast.
Histogram equalization
And again:
After histogram equalization:
Another example:
After histogram equalization:
