
S J P N Trust's
Hirasugar Institute of Technology, Nidasoshi
Inculcating Values, Promoting Prosperity
Approved by AICTE, Recognized by Govt. of Karnataka and Affiliated to VTU Belagavi

Department of Electronics & Communication Engg.
Course: Digital Image Processing - 15EC72, Sem.: 7th (2018-19)
Course Coordinator: Prof. B. P. Khot
Digital Image Fundamentals

Outline
1. What is Digital Image Processing?
2. Origins of Digital Image Processing
3. Examples of fields that use DIP
4. Fundamental Steps in Digital Image Processing
5. Components of an Image Processing System
6. Elements of Visual Perception
7. Image Sensing and Acquisition
8. Image Sampling and Quantization
9. Some Basic Relationships Between Pixels
10. Linear and Nonlinear Operations

1. What is Digital Image Processing?
Digital Image Processing

• Digital image processing is the use of computer algorithms to perform image processing on digital images.

Digital Image

• A digital image is a numeric representation, normally binary, of a two-dimensional image.
• It is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements or pixels.

Image
• An image can be defined as the variation of intensity over space; in an image, intensity is a function of the spatial coordinates.
• An image can be formally defined as a two-dimensional function f(x, y), where x and y are spatial coordinates.

Intensity
• The amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point (a small sketch follows).
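To make the f(x, y) notation concrete, here is a minimal sketch (not from the notes; Python/NumPy is an assumed tool) of a digital image as a 2-D array whose entries are intensities:

import numpy as np

# A tiny 4x4 8-bit grayscale image: f(x, y) with x = row, y = column.
f = np.array([[ 0,  50, 100, 150],
              [30,  80, 130, 180],
              [60, 110, 160, 210],
              [90, 140, 190, 240]], dtype=np.uint8)

x, y = 2, 1                      # spatial coordinates of one pixel
print(f[x, y])                   # intensity (amplitude) at (2, 1) -> 110
print(f.shape)                   # (rows M, columns N) -> (4, 4)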
2. Origins of Digital Image Processing

1. It is interesting to note that X-rays were discovered in 1895 by Wilhelm Conrad Roentgen.
2. One of the first applications of digital images was in the newspaper
industry, when pictures were first sent by submarine cable between London
and New York. Introduction of the Bartlane cable picture transmission
system in the early 1920s reduced the time required to transport a picture
across the Atlantic from more than a week to less than three hours.
Specialized printing equipment coded pictures for cable transmission and
then reconstructed them at the receiving end.

Fig 1.1 Digital picture produced in 1921 from a coded tape by a telegraph printer with special typefaces.

Fig 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern.

Fig 1.2 Digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice.
3. Figure 1.2 shows an improvement over Fig 1.1 in both tonal quality and resolution.
4. The advancement of digital imagery continued in the early 1960s, alongside
development of the space program and in medical research. Projects at the Jet
Propulsion Laboratory, MIT, Bell Labs and the University of Maryland, among
others, used digital images to advance satellite imagery, medical
imaging, videophone technology, character recognition, and photo enhancement.
5. Rapid advances in digital imaging began with the introduction
of microprocessors in the early 1970s, alongside progress in related storage and
display technologies.

6. The first computers powerful enough to carry out meaningful image
processing tasks appeared in the early 1960s. The birth of what we call
digital image processing today can be traced to the availability of those
machines and to the onset of the space program during that period.
7. Work on using computer techniques for improving images from a
space probe began at the Jet Propulsion Laboratory (Pasadena,
California) in 1964 when pictures of the moon transmitted by Ranger
7 were processed by a computer to correct various types of image
distortion inherent in the on-board television camera.

Fig 1.3 The first picture of the moon by a U.S. spacecraft. Ranger 7 took
this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before
impacting the lunar surface.

8. In parallel with space applications, digital image processing techniques began in the late 1960s and early 1970s to be used in medical imaging, remote Earth resources observations, and astronomy.
9. The invention in the early 1970s of computerized axial tomography (CAT), also called computerized tomography (CT), is one of the most important events in the application of image processing to medical diagnosis.
10. IBM introduced the personal computer in 1981.
11. Digital image processing technology was inducted into the Space Foundation's Space Technology Hall of Fame in 1994.

3. Examples of fields that use DIP

1. Medical Imaging
• Radiology
• X-ray Images
• Ultrasound Scanned Images
• Computed Tomography (CT)
• PET and SPECT
• Magnetic Resonance Imaging (MRI)
• Digital Infrared Thermal Imaging (DITI)
• Electroencephalography (EEG)
• Electrocardiography (ECG)
2. Remote sensing
3. Astronomy
4. Business
5. Entertainment
6. Security and Surveillance
7. Machine/Robot vision
8. Colour processing

1. Medical Imaging
Radiology
• Radiology is the science that uses medical imaging to diagnose and
sometimes also treat diseases within the body.
• Radiology refers to examinations of the inner structure of objects using X-rays
or other penetrating radiation.
• It includes imaging modalities such as X-ray radiography, ultrasound, computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI).

X-ray Images
• X-rays are a form of electromagnetic radiation with wavelengths of roughly 0.01 to 10 nm.
• X-ray imaging creates pictures of the inside of your body. The images show
the parts of your body in different shades of black and white.
• This is because different tissues absorb different amounts of radiation.
Calcium in bones absorbs x-rays the most, so bones look white.
• Fat and other soft tissues absorb less, and look gray. Air absorbs the least, so
lungs look black.

1. Medical Imaging Cont’d
Ultrasound Scanned Images
• Ultrasound imaging uses sound waves to produce pictures of the inside of the body. It is used to help diagnose the causes of pain, swelling and infection in the body’s internal organs, to examine a baby in pregnant women, and to examine the brain and hips in infants.
• Ultrasound (sound above about 20,000 Hz; medical scanners typically operate in the megahertz range) is projected into the organ.
Examples:
1. Sonography: an ultrasound-based diagnostic medical imaging technique used to visualise muscles, tendons and many internal organs, and to capture their size and structure.
2. Obstetric sonography: used during pregnancy to visualise the fetus.

1. Medical Imaging Cont’d
Computed Tomography(CT)
• Computed tomography (CT) is a diagnostic imaging test used to create detailed images of internal organs, bones, soft tissue and blood vessels with the help of X-rays.
• The cross-sectional images generated during a CT scan can be reformatted in multiple planes, and can even generate three-dimensional images, which can be viewed on a computer monitor or printed on film.
• CT scanning is often the best method for detecting many different cancers, since the images allow your doctor to confirm the presence of a tumour and determine its size and location.
• Examples:
1. Diagnosis of head, lung, cardiac, abdominal and pelvic conditions.

1. Medical Imaging Cont’d
Magnetic Resonance Imaging(MRI)
• MRI (magnetic resonance imaging) is a radiology technique that uses magnetism, radio waves, and a computer to produce images of body structures.
• The MRI scanner is a tube surrounded by a giant circular magnet. The patient is placed on a moveable bed that is inserted into the magnet. The magnet creates a strong magnetic field that aligns the protons of hydrogen atoms, which are then exposed to a beam of radio waves. This excites the protons of the body, and they produce a faint signal that is detected by the receiver portion of the MRI scanner. A computer processes the receiver information and produces an image.
• Examples:
1. Brain, muscles, heart, and cancers.

1. Medical Imaging Cont’d
PET and SPECT (Single Photon Emission Computed Tomography)
1. Positron emission tomography, also known as a PET scan, uses radiation from a radioactive tracer substance to show activity within the body at the cellular level.

Examples: It is most commonly used in cancer treatment, neurology, and cardiology.

2. Single-photon emission computed tomography (SPECT, or less commonly, SPET) is a nuclear medicine tomographic imaging technique using gamma rays (wavelengths less than about 10 picometres).

Examples: Analysing the functioning of the heart or brain.

1. Medical Imaging Cont’d
Digital Infrared Thermal Imaging(DITI)
• An infrared scanning device converts infrared radiation emitted from the skin surface into electrical impulses that are visualised in colour on a monitor. The spectrum of colours indicates an increase or decrease in the amount of infrared radiation being emitted from the body surface.

Electro Encephalography(EEG)
• An electroencephalogram (EEG) is a test that detects electrical activity in
your brain using small, metal discs (electrodes) attached to your scalp.
Your brain cells communicate via electrical impulses and are active all the
time, even when you're asleep. This activity shows up as wavy lines on an EEG
recording.

1. Medical Imaging Cont’d
Electro Cardiography (ECG)
• Electrocardiography (ECG or EKG) is the process of recording the
electrical activity of the heart over a period of time using electrodes placed
on the skin. These electrodes detect the tiny electrical changes on the skin
that arise from the heart muscle's electrophysiologic pattern during
each heartbeat. It is very commonly performed to detect any cardiac
problems.

2) Remote sensing
• Remote sensing is the gathering of information about an object, area or
phenomenon without being in physical contact with it.
• Images acquired by satellites are used in remote sensing, e.g., tracking of earth resources, prediction of agricultural crops, urban growth, weather forecasting, and flood control.

3) Astronomy
• Image processing is used in astronomy to analyse the solar system and celestial bodies like the moon, stars and planets.

4) Business
• Digital image transmission helps in journalism. People from different countries can work together using teleconferencing, through which they can communicate while seeing each other on their displays.

5) Entertainment
• Digital videos can be broadcast and received by television. Videos can be transmitted over the Internet through services such as YouTube. Video games are possible because of image processing.

6) Security and Surveillance
• Image processing is used in small-target detection and tracking, missile guidance, vehicle navigation, wide-area surveillance and automated target recognition. Biometric image processing is used for personal authentication and identification.

7) Machine/Robot vision
• Apart from the many challenges that a robot faces today, one of the biggest is still to improve the robot's vision: to make the robot able to see things, identify them, identify hurdles, etc. Much work has been contributed to this area, and a whole separate field, computer vision, has been introduced to work on it.

8) Colour image processing
• Colour image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet.
• This may include colour modelling and processing in a digital domain.

4. Fundamental Steps in Digital Image Processing

Fundamental Steps in Digital Image Processing Cont’d

1) Image Acquisition:
• This is the first step in the fundamental steps of digital image processing.
• It concerns how the image is acquired, i.e., the origin of the image.
• The image acquisition stage may involve pre-processing such as scaling.
• Scaling is reducing or increasing the physical size of the image by changing the number of pixels.
• Image acquisition gives the image in digital form.

2) Image Enhancement:
• Image enhancement techniques are used to bring out detail that is obscured, or simply to highlight certain features of interest in an image.
• Image enhancement is a subjective process. Mathematical tools are used for enhancing the image.
• Basically, the idea behind an enhancement technique is to bring out detail that is obscured, or simply to highlight certain features of interest in an image, such as by changing brightness and contrast (a sketch follows).
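As an illustration (a minimal sketch, not from the notes; Python/NumPy assumed), brightness and contrast can be adjusted with a simple linear point operation g = alpha * f + beta:

import numpy as np

def adjust(f, alpha=1.2, beta=20):
    """Contrast (alpha) and brightness (beta) adjustment of an 8-bit image."""
    g = alpha * f.astype(np.float64) + beta     # linear point operation
    return np.clip(g, 0, 255).astype(np.uint8)  # keep result in the 8-bit range

f = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (4, 1))  # toy gradient image
print(adjust(f))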

Fundamental Steps in Digital Image Processing Cont’d
3) Image Restoration:
• Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models.
• A typical example of restoration is the removal of noise, as sketched below.
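A minimal denoising sketch (an assumed illustration, not the notes' method; Python/NumPy): a 3x3 mean filter averages each pixel with its neighbours, suppressing noise at the cost of some blur:

import numpy as np

def mean_filter3x3(f):
    """Replace each interior pixel by the mean of its 3x3 neighbourhood."""
    f = f.astype(np.float64)
    acc = np.zeros_like(f[1:-1, 1:-1])
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            acc += f[1 + dx:f.shape[0] - 1 + dx, 1 + dy:f.shape[1] - 1 + dy]
    g = f.copy()
    g[1:-1, 1:-1] = acc / 9            # borders left unfiltered for simplicity
    return g.round().astype(np.uint8)

noisy = np.random.default_rng(0).integers(0, 256, (5, 5)).astype(np.uint8)
print(mean_filter3x3(noisy))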

4) Colour Image Processing:
• Colour image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet.
• This may include colour modelling and processing in a digital domain.

5) Wavelets and Multi-Resolution Processing:
• Wavelets are the foundation for representing images in various degrees of resolution.
• Wavelets are used in image data compression.

Fundamental Steps in Digital Image Processing Cont’d

6) Compression:
• Compression is a technique for reducing the storage required to save an image or the bandwidth needed to transmit it.
• Compression is useful on the Internet, where it enables fast transfer of pictures.
• Ex: JPEG. A toy lossless scheme is sketched below.
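As a toy example of the idea (an assumed illustration; JPEG itself is far more elaborate), run-length encoding stores each run of repeated values as a (value, count) pair:

def rle_encode(pixels):
    """Lossless run-length encoding of a 1-D pixel sequence."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([p, 1])         # start a new run
    return runs

row = [255, 255, 255, 0, 0, 255]
print(rle_encode(row))                  # [[255, 3], [0, 2], [255, 1]]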

7) Morphological Processing:
• Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.
• Morphological processing helps produce image features useful for further processing.

8) Segmentation:
• Segmentation procedures partition an image into its constituent parts or objects (a minimal thresholding sketch follows this list).
• Segmentation helps in object identification.
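A minimal segmentation sketch (assumed illustration, Python/NumPy): global thresholding splits an image into object and background pixels:

import numpy as np

def threshold(f, t=128):
    """Return a binary mask: 1 where intensity > t (object), 0 elsewhere."""
    return (f > t).astype(np.uint8)

f = np.array([[ 10, 200], [180,  30]], dtype=np.uint8)
print(threshold(f))   # [[0 1]
                      #  [1 0]]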

Fundamental Steps in Digital Image Processing Cont’d
9. Representation and Description:
• Representation and description almost always follow the output of a segmentation
stage, which is usually raw pixel data.
• Segmentation output is raw pixel data so it is necessary to convert the data to a form
suitable for computer processing.
• There are two types of representation
 Boundary representation: it is suitable when the focus is on external shape.
 Regional representation : it is appropriate when the focus is on internal characteristics.
• Description: Description deals with extracting attributes that result in some
quantitative information of interest or are basic for differentiating one class of objects
from another.

10) Object recognition:
• Recognition is the process that assigns a label to an object based on its description.
• Ex: assigning the label “vehicle” to an object based on its descriptors.

11) Knowledge Base:
• It is a special kind of database for knowledge management.
• It gives knowledge about the problem domain in an image processing system.
• It guides the operation of each processing module.
• It also controls the interaction between modules.
5. Components of an Image Processing System

Components of an Image Processing System Cont’d

I. Image Sensors:
• Image Sensor is a physical device that is sensitive to the energy radiated by the object
that we wish to capture.
• In a digital video camera, the sensors produce an electrical output proportional to light intensity. Example: Charge-Coupled Device (CCD), photodiode.

II. Specialized Image Processing Hardware:
• It consists of a digitizer, which converts the output of the physical sensing device into digital form.
• It helps in the removal of noise from the image.
• It consists of hardware that performs primitive arithmetic and logic (ALU) operations.
• This hardware is also called the Front End System.
• This unit performs its functions fast (gives high throughput).

III. Computer:
• Image processing requires intensive processing capability to handle large data, so anything from a general-purpose computer to a supercomputer may be required.

IV. Software:
• It consists of specialized modules that perform specific tasks such as image enhancement and image filtering.
Components of an Image Processing System Cont’d

V. Mass Storage:
• Large storage capacity is a must in a digital image processing system.
• An image processing system may deal with thousands or even millions of images.
• Uncompressed images may take up a large amount of space (see the worked estimate after this list).
• Storage for digital image processing falls into three major categories:
 Short-term storage: for use during processing in the computer.
 On-line storage: for relatively fast recall (frequent use). Ex: drives, cloud storage, Dropbox.
 Archival storage: characterized by infrequent access. Ex: magnetic tapes and optical disks.
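For a sense of scale (a worked estimate, not from the notes): one uncompressed 8-bit image of size 1024 x 1024 pixels occupies 1024 x 1024 x 1 byte = 2^20 bytes = 1 MB, so a thousand such images already need about 1 GB of storage.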

VI. Image Display:
• Displays are part of the computer system; sometimes it is necessary to have a stereo (3-D) display.

VII. Hard Copy:
• Laser printers, digital printing.

VIII. Networking:
• It helps in image transmission.
• The key factor for image transmission is bandwidth.
6. Elements of Visual Perception

Figure shows a simplified horizontal cross section of the human eye.
Elements of Visual Perception Cont’d
1. The eye is nearly a sphere, with an average diameter of approximately 20 mm.
2. Three membranes enclose the eye:
• The cornea and sclera (outer cover)
• The choroid
• The retina

3. Cornea :The cornea is a tough, transparent tissue that covers the anterior
surface of the eye.

4. Sclera: Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.

5. The choroid lies directly below the sclera. This membrane contains a network
of blood vessels that serve as the major source of nutrition to the eye. The
choroid coat is heavily pigmented and hence helps to reduce the amount of
extraneous light entering the eye. At its anterior extreme, the choroid is divided
into the ciliary body and the iris diaphragm.

Elements of Visual Perception Cont’d

6. The iris contracts or expands to control the amount of light that enters the eye.
• The front of the iris contains the visible pigment of the eye, whereas the back contains a black pigment.

7. The lens is made up of concentric layers of fibrous cells and is suspended by fibres that attach to the ciliary body.
• It contains 60 to 70% water, about 6% fat, and more protein than any other tissue in the eye.
• The lens is coloured by a slightly yellow pigmentation that increases with
age.
• The lens absorbs approximately 8% of the visible light spectrum. Both
infrared and ultraviolet light are absorbed by proteins within the
lens structure and in excessive amounts, can damage the eye.

Elements of Visual Perception Cont’d

8. Retina :The innermost membrane of the eye is the retina, which lines the
inside of the wall’s entire posterior portion.
• When the eye is properly focused, light from an object outside the eye is
imaged on the retina. The light receptors are distributed over the surface of
the retina.
• There are two classes of receptors: cones and rods.

9. Cones:
• The cones in each eye number between 6 and 7 million.
• They are located primarily in the central portion of the retina, called the
fovea, and are highly sensitive to color. Humans can resolve fine details
with these cones largely because each one is connected to its own nerve
end.
• Muscles controlling the eye rotate the eyeball until the image of an object of
interest falls on the fovea.
• Cone vision is called photopic or bright-light vision.

Elements of Visual Perception Cont’d

10. Rods: The number of rods is much larger:
• 75 to 150 million are distributed over the retinal surface.
• Rods give a general overall picture of the field of view, as several rods are connected to a single nerve end.
• They are not involved in colour vision and are sensitive to low levels of illumination. For example, objects that appear brightly coloured in daylight appear as colourless forms when seen by moonlight, because only the rods are stimulated. This phenomenon is known as scotopic or dim-light vision.

Image Formation in the Eye

1. The lens of the eye is flexible.
2. The shape of the lens is controlled by the tension in the fibres of the ciliary body.
3. To focus on a distant object, the controlling muscles cause the lens to be relatively flattened.
4. The ciliary muscles allow the lens to become thicker in order to focus on objects near the eye.
5. The distance between the centre of the lens and the retina is called the focal length. It varies from approximately 14 mm to 17 mm (see the example below).
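A quick worked example (a standard textbook calculation, assuming the relaxed-eye focal length of 17 mm): for a person looking at a tree 15 m high from 100 m away, similar triangles give 15/100 = h/17, so the height h of the retinal image is about 17 x 15/100 = 2.55 mm.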
Brightness Adaptation and Discrimination:

Figure: a plot of light intensity versus subjective brightness illustrates this characteristic.
Brightness Adaptation and Discrimination Cont’d

1. The range of light intensity levels to which the human visual system can adapt is enormous, from the scotopic threshold to the glare limit.
2. Experimental evidence indicates that subjective brightness (intensity as
perceived by the human visual system) is a logarithmic function of the light
intensity incident on the eye.
3. The long solid curve represents the range of intensities to which the visual
system can adapt.
4. In photopic vision alone, the range is about 10^6. The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (–3 to –1 mL in the log scale), as the double branches of the adaptation curve in this range show.

Digital Image Acquisition Process

Fig. An example of the digital image acquisition process: (a) Energy (“illumination”) source (b) An element of a scene (c) Imaging system (d) Projection of the scene onto the image plane (e) Digitized image.
7. Image Sensing and Acquisition
1. The types of images in which we are interested are generated by the
combination of an “illumination” source and the reflection or absorption of
energy from that source by the elements of the “scene” being imaged.

Fig. (a) Single imaging sensor (b) Line sensor (c) Array sensor

Image Acquisition Using a Single Sensor

The idea is simple: incoming energy is transformed into a voltage by the combination of input electrical power and sensor material that is responsive to the particular type of energy being detected.
The output voltage waveform is the response of the sensor(s), and a digital quantity is
obtained from each sensor by digitizing its response.
Figure shows the components of a single sensor. Perhaps the most familiar sensor of this
type is the photodiode, which is constructed of silicon materials and whose output
voltage waveform is proportional to light. The use of a filter in front of a sensor
improves selectivity. For example, a green (pass) filter in front of a light sensor favours
light in the green band of the colour spectrum. As a consequence, the sensor output will
be stronger for green light than for other components in the visible spectrum.

Image Acquisition Instrument using single sensor

Fig. Combining a single sensor with motion to generate a 2-D image.
In order to generate a 2-D image using a single sensor, there has to be relative
displacements in both the x- and y-directions between the sensor and the area to be
imaged. Fig. shows an arrangement used in high-precision scanning, where a film
negative is mounted onto a drum
whose mechanical rotation provides displacement in one dimension. The single
sensor is mounted on a lead screw that provides motion in the perpendicular
direction. Since mechanical motion can be controlled with high precision, this
method is an inexpensive (but slow) way to obtain high-resolution images. Other
similar mechanical arrangements use a flat bed, with the sensor moving in two
linear directions. These types of mechanical digitizers sometimes are referred to
as microdensitometers.
Image Acquisition using Sensor Strip

Fig. (a) Image acquisition using a linear sensor strip. (b) Image acquisition using a circular sensor strip.
Image Acquisition using Sensor Strip Cont’d
• Fig. (a): This is the type of arrangement used in most flatbed scanners. Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged. One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one line of an image at a time, and the motion of the strip relative to the scene completes the other dimension of a two-dimensional image. Lenses or other focusing schemes are used to project the area to be scanned onto the sensors.

• Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional (“slice”) images of 3-D objects, as Fig. (b) shows. A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object (the sensors obviously have to be sensitive to X-ray energy). This is the basis for medical and industrial computerized axial tomography (CAT). It is important to note that the output of the sensors must be processed by reconstruction algorithms whose objective is to transform the sensed data into meaningful cross-sectional images. In other words, images are not obtained directly from the sensors by motion alone; they require extensive processing. A 3-D digital volume consisting of stacked images is generated as the object is moved in a direction perpendicular to the sensor ring. Other modalities of imaging based on the CAT principle include magnetic resonance imaging (MRI) and positron emission tomography (PET). The illumination sources, sensors, and types of images are different, but conceptually they are very similar to the basic imaging approach shown in Fig. (b).
Spatial Resolution

Fig. A 1024*1024, 8-bit image subsampled down to 32*32 pixels. The number of allowable gray levels was kept at 256.
Spatial Resolution Cont’d

• The subsampling was accomplished by deleting the appropriate number of rows and columns from the original image.
• The 512*512 image was obtained by deleting every other row and column from the 1024*1024 image.
• The 256*256 image was generated by deleting every other row and column in the 512*512 image.
• The 128*128 image was generated by deleting every other row and column in the 256*256 image.
• The 64*64 image was generated by deleting every other row and column in the 128*128 image.
• The 32*32 image was generated by deleting every other row and column in the 64*64 image, as sketched below.
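A minimal sketch of this subsampling (assumed illustration, Python/NumPy): keeping every other row and column halves each dimension:

import numpy as np

img = np.zeros((1024, 1024), dtype=np.uint8)   # stand-in for the 8-bit image
half = img[::2, ::2]                           # delete every other row/column
print(half.shape)                              # (512, 512)

# Repeating the step walks down 512 -> 256 -> 128 -> 64 -> 32:
while img.shape[0] > 32:
    img = img[::2, ::2]
print(img.shape)                               # (32, 32)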
Spatial Resolution Cont’d

Fig. (a) 1024*1024, 8-bit image (b) 512*512 image resampled into 1024*1024 pixels by row and column duplication (c) through (f) 256*256, 128*128, 64*64, and 32*32 images resampled into 1024*1024 pixels.
A very slight, fine checkerboard pattern appears in the 256*256 image. These effects are much more visible in the 128*128 image, and they become pronounced in the 64*64 and 32*32 images.
Gray Level Resolution

Fig. (a) 452*374, 256-level image (b)–(d) Image displayed in 128, 64, and 32 gray levels, while keeping the spatial resolution constant (e)–(h) Image displayed in 16, 8, 4, and 2 gray levels.
Gray Level Resolution Cont’d

While keeping the spatial resolution constant, the 256-, 128-, and 64-level images are visually identical for all practical purposes. The 32-level image, however, has an almost imperceptible set of very fine ridge-like structures in areas of smooth gray levels. This effect, caused by the use of an insufficient number of gray levels in smooth areas of a digital image, is called false contouring. False contouring generally is quite visible in images displayed using 16 or fewer uniformly spaced gray levels. A sketch of gray-level reduction is shown below.
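Reducing the number of gray levels can be sketched as follows (assumed illustration, Python/NumPy): collapsing intensity bands leaves a chosen number of uniformly spaced levels:

import numpy as np

def reduce_gray_levels(img, levels):
    """Requantize an 8-bit image to `levels` uniformly spaced gray levels."""
    step = 256 // levels                   # e.g. levels=32 -> step=8
    return (img // step) * step            # collapse each band to one value

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(np.unique(reduce_gray_levels(img, 32)).size)   # 32 distinct levels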
Representing Digital Images

We will use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns. The values of the coordinates (x, y) now become discrete quantities. For notational clarity and convenience, we shall use integer values for these discrete coordinates. Thus, the values of the coordinates at the origin are (x, y) = (0, 0). The next coordinate values along the first row of the image are represented as (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the actual values of physical coordinates when the image was sampled. Figure shows the coordinate convention used.

Representing Digital Images
The notation introduced in the preceding paragraph allows us to write the complete M*N digital image in the following compact matrix form (the standard form, reconstructed here since the equation image did not survive extraction):

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
            ...          ...               ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is by definition a digital image. Each element of this matrix array is called an image element, picture element, pixel, or pel.

8. Image Sampling and Quantization:

Fig. Generating a digital image: (a) Continuous image (b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization (c) Sampling and quantization (d) Digital scan line.
Image Sampling and Quantization Cont’d
The output of most sensors is a continuous voltage waveform whose amplitude and
spatial behaviour are related to the physical phenomenon being sensed. To create a digital
image, we need to convert the continuous sensed data into digital form.

This involves two processes:
1. Sampling
2. Quantization
• Figure (a) shows a continuous image, f(x, y), that we want to convert to digital form.
An image may be continuous with respect to the x- and y-coordinates, and also in
amplitude. To convert it to digital form, we have to sample the function in both
coordinates and in amplitude. Digitizing the coordinate values is called sampling.
Digitizing the amplitude values is called quantization.
• The one-dimensional function shown in Fig. (b) is a plot of amplitude (gray level) values of the continuous image along the line segment AB in Fig. (a). The random variations are due to image noise.
• To sample this function, we take equally spaced samples along line AB, as shown in
Fig.(c).

Image Sampling and Quantization Cont’d
• The location of each sample is given by a vertical tick mark in the bottom part of
the figure (c). The samples are shown as small white squares superimposed on the
function. The set of these discrete locations gives the sampled function. However,
the values of the samples still span (vertically) a continuous range of gray-level
values. In order to form a digital function, the gray-level values also must be
converted (quantized) into discrete quantities.
• The right side of Fig.(c) shows the gray-level scale divided into eight discrete levels,
ranging from black to white. The vertical tick marks indicate the specific value
assigned to each of the eight gray levels. The continuous gray levels are quantized
simply by assigning one of the eight discrete gray levels to each sample. The
assignment is made depending on the vertical proximity of a sample to a vertical
tick mark.
• The digital samples resulting from both sampling and quantization are
shown in Fig.(d). Starting at the top of the image and carrying out
this procedure line by line produces a two-dimensional digital image.
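The quantization step can be sketched as follows (an assumed illustration, Python/NumPy): continuous-valued samples are mapped to the vertically nearest of eight discrete gray levels:

import numpy as np

# Continuous-valued samples taken along a scan line (arbitrary demo values).
samples = np.array([0.05, 0.40, 0.33, 0.92, 0.61, 0.78])

levels = np.linspace(0.0, 1.0, 8)    # eight discrete gray levels
# Assign each sample to its nearest level (nearest-neighbour quantization).
quantized = levels[np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)]
print(quantized.round(3))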

9. Some Basic Relationships Between Pixels

• Neighbours of a Pixel:

A pixel p at coordinates (x, y) has four horizontal and vertical neighbours whose coordinates are given by (x+1, y), (x-1, y), (x, y+1), (x, y-1). This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x, y), and some of the neighbours of p lie outside the digital image if (x, y) is on the border of the image.

The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.
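A minimal sketch (assumed illustration, plain Python) generating these neighbour sets:

def n4(x, y):
    """4-neighbors of pixel (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbors of pixel (x, y)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors: union of N4 and ND."""
    return n4(x, y) + nd(x, y)

# Neighbours falling outside the image must be discarded at the border:
inside = [(i, j) for (i, j) in n8(0, 0) if 0 <= i < 8 and 0 <= j < 8]
print(inside)   # only the in-image neighbours of the corner pixel of an 8x8 image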

Some Basic Relationships Between Pixels Cont’d

Adjacency
• Let V be the set of gray-level values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a gray-scale image, the idea is the same, but the set V typically contains more elements. For example, for the adjacency of pixels with a range of possible gray-level values 0 to 255, set V could be any subset of these 256 values.
• We consider three types of adjacency:
(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the
set N4(p).
(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the
set N8 (p).
(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Some Basic Relationships Between Pixels Cont’d

Example Fig. (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed) to the center pixel; (c) m-adjacency.

Some Basic Relationships Between Pixels Cont’d

Distance Measures: For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if
(a) D(p, q) >= 0 (D(p, q) = 0 iff p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) <= D(p, q) + D(q, z).

1. The Euclidean distance between p and q is defined as

   De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)

For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).

Some Basic Relationships Between Pixels Cont’d

2. The D4 distance (also called city-block distance) between p and q is defined as

   D4(p, q) = |x - s| + |y - t|

3. The D8 distance (also called chessboard distance) between p and q is defined as

   D8(p, q) = max(|x - s|, |y - t|)
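The three metrics can be compared with a small sketch (assumed illustration, plain Python):

def euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):                       # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):                       # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4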

10. Linear and Nonlinear Operations
• Let H be an operator whose input and output are images. H is said to be a linear operator if, for any two images f and g and any two scalar constants a and b,

H(a f + b g) = a H(f) + b H(g) ----------(1)

The result of applying a linear operator to the sum of two images is identical to applying the operator to the images individually, multiplying each result by the appropriate constant, and then adding those results.
• Ex: Consider an operator H that produces an output image g(x, y) for a given input image f(x, y):

H[f(x, y)] = g(x, y)

H is said to be a linear operator if

H[a1 f1(x, y) + a2 f2(x, y)] = a1 H[f1(x, y)] + a2 H[f2(x, y)]
                             = a1 g1(x, y) + a2 g2(x, y) ----------(2)

where a1 and a2 are constants and f1(x, y) and f2(x, y) are images of the same size.
Equation (2) indicates that the output of a linear operation due to the sum of two inputs is the same as performing the operation on the inputs individually and then summing the results. This is the property of additivity. A quick numerical check follows.
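A numerical check of Equation (1) (assumed illustration, Python/NumPy): the sum operator is linear, while the max operator is not:

import numpy as np

f = np.array([[1.0, 2.0], [3.0, 4.0]])
g = np.array([[4.0, 1.0], [0.0, 2.0]])
a, b = 2.0, 3.0

H_sum = lambda img: img.sum()        # linear: H(af + bg) = aH(f) + bH(g)
H_max = lambda img: img.max()        # nonlinear: the test below fails

print(H_sum(a * f + b * g) == a * H_sum(f) + b * H_sum(g))   # True
print(H_max(a * f + b * g) == a * H_max(f) + b * H_max(g))   # False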

Linear and Nonlinear Operations Cont’d
• An operator that fails to satisfy Equations (1) and (2) is non-linear.
• Linear operations are very important in image processing because they are based on significant theoretical and practical results.
• Non-linear operations sometimes offer better performance, but they are less predictable and, for the most part, not well understood theoretically.

Examples of Optical Illusion
Optical Illusion: the eye fills in non-existing information.

Examples of simultaneous contrast

