
INTRODUCTION TO DIGITAL

IMAGE PROCESSING
ICT4201: DIP
COURSE OBJECTIVE
• This course introduces digital image processing. It focuses on the theory and algorithms for various
operations on images including acquisition and formation, enhancement, segmentation, and
representation.
• By the end of this course, students will be able to:
• Explain how digital images are represented and manipulated in a computer, including reading and writing from
storage, and displaying.
• Write a program which implements fundamental image processing algorithms.
• Be conversant with the mathematical description of image processing techniques and know how to go from the
equations to code.

2
TEXT

• Digital Image Processing , 4th Edition

• Authors
• Rafael C. Gonzalez
• Richard E. Woods

• Pearson

3
WHAT IS DIGITAL IMAGE PROCESSING?
• Digital image processing means processing digital images by means of a digital
computer. It can also be described as the use of computer algorithms either to
enhance an image or to extract useful information from it.
• Digital image processing is the use of algorithms and mathematical models to
process and analyze digital images. The goal of digital image processing is to
enhance the quality of images, extract meaningful information from images, and
automate image-based tasks.
• Digital image processing focuses on two major tasks
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and representation for autonomous machine perception

4
ORIGIN OF IMAGE PROCESSING
• One of the earliest applications of digital images was in the newspaper industry, when pictures were
first sent by submarine cable between London and New York.
• Cable picture transmission, introduced in 1921, reduced the time required to transport a picture
across the Atlantic from more than a week to less than three hours.
• Today we can watch live video feeds or live CCTV footage from one continent to another with a
delay of only seconds, which shows how much work has been done in this field. The field covers not
only transmission but also encoding: many formats have been developed, for high and low
bandwidth alike, to encode photos and stream them over the internet.

5
APPLICATIONS OF DIP

• Some of the major fields in which digital image processing is widely used are mentioned below
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
• Microscopic Imaging
• Others 6
APPLICATIONS IN THE MEDICAL FIELD

• Common applications of DIP in the medical field include:


• Gamma ray imaging
• PET scan
• X Ray Imaging
• Medical CT
• UV imaging

7
SOURCE OF IMAGING
• One of the simplest ways to develop a basic understanding of the extent of image processing
applications is to categorize images according to their source (e.g., X-ray, visual, infrared, and so
on).
• The principal energy source for images in use today is the electromagnetic energy spectrum.
• Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of
electron beams used in electron microscopy). Synthetic images, used for modeling and
visualization, are generated by computer.

8
SOURCE OF IMAGING… (CONT.)
• Of the electromagnetic spectrum, we can see only the visible portion. The visible spectrum is
commonly divided into seven colors, abbreviated VIBGYOR: violet, indigo, blue, green, yellow,
orange, and red.
• That does not mean the rest of the spectrum is empty. The human eye sees only the visible
portion, in which we see everyday objects, but a camera can capture what the naked eye
cannot, for example X rays and gamma rays. Hence digital image processing analyzes those
bands as well.
• Why do we need to analyze the other parts of the EM spectrum too?
• Because those bands are widely used in practice: X rays are a staple of medical imaging, and gamma
rays are used in nuclear medicine and astronomical observation. The same holds for the other bands of
the EM spectrum.

9
WHAT IS A DIGITAL IMAGE?
• In image processing, the term ‘image’ is used to denote the image data that is sampled,
quantized, and readily available in a form suitable for further processing by digital
computers.
• An image may be defined as a two-dimensional function, f (x, y), where x and y are
spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is
called the intensity or gray level of the image at that point.
• When x, y, and the intensity values of f are all finite, discrete quantities, we call the
image a digital image.

10
PIXEL
A digital image is a representation of a two-dimensional image as a finite set of
digital values, called picture elements or pixels

11
IMAGE AS MATRIX
• Since images are represented in rows and columns, an image can be written in the
following matrix form:

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
            ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

• The right side of this equation is a digital image by definition. Every element of this
matrix is called an image element, picture element, or pixel.
12
IMPORTANT TERMS
Resolution
Resolution is an important characteristic of an imaging system: the ability of the system
to resolve the smallest discernible detail, i.e., to render the smallest object clearly and
distinguish it from the neighboring small objects present in the image.
Bit Depth
The number of bits needed to encode a pixel value is called the bit depth. The number of
intensity levels an image can represent is a power of two: a bit depth of k gives 2^k levels.
The total number of bits needed to represent the image is:
Number of rows * Number of columns * Bit depth
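This formula can be checked with a quick calculation (plain Python; the image dimensions here are illustrative):

```python
# Storage needed for an illustrative 1024 x 1024 image with
# bit depth k = 8, i.e. 2**8 = 256 gray levels.
rows, cols, bit_depth = 1024, 1024, 8

total_bits = rows * cols * bit_depth   # rows * columns * bit depth
total_bytes = total_bits // 8
gray_levels = 2 ** bit_depth

print(total_bits)    # 8388608 bits
print(total_bytes)   # 1048576 bytes (1 MiB)
print(gray_levels)   # 256 levels
```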
13
TYPES OF IMAGES

14
TYPES OF IMAGES….(CONT.)

• In binary images, the pixels assume a value of 0 or 1, so one bit is sufficient to
represent the pixel value. Binary images are also called bi-level images.

• Greyscale images differ from binary images in that they have many shades of grey
between black and white. They are also called monochromatic since, as with binary
images, there is no color component. "Greyscale" refers to the range of shades
between black and white.
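One way to see the relationship between the two types (a sketch using NumPy, assumed available; the threshold of 128 is arbitrary): thresholding a greyscale image produces a binary image, where one bit per pixel suffices:

```python
import numpy as np

# Illustrative greyscale image: shades between 0 (black) and 255 (white).
gray = np.array([[10, 120, 200],
                 [90, 130, 250]], dtype=np.uint8)

# Binary image: each pixel becomes 0 or 1 depending on the threshold.
binary = (gray >= 128).astype(np.uint8)

print(binary)
# [[0 0 1]
#  [0 1 1]]
```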

15
TYPES OF IMAGES….(CONT.)
Pseudo color images

Like true color images, pseudocolor images are also widely used in image
processing. True color images are called three-band images. In remote sensing
applications, however, multi-band or multi-spectral images are generally used.
These images, captured by satellites, contain many bands.

16
HOW A DIGITAL IMAGE IS FORMED

• Capturing an image with a camera is a physical process, with sunlight (or another
light source) serving as the source of energy.
• A sensor array is used to acquire the image: when light falls on the object, the
amount of light reflected by the object is sensed by the sensors, and a continuous
voltage signal is generated in proportion to the sensed light.
• To create a digital image, we need to convert this data into digital form. This
involves sampling and quantization (discussed later). Sampling and quantization
produce a two-dimensional array, or matrix, of numbers, which is nothing but a
digital image.
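Sampling and quantization can be sketched as follows (NumPy assumed; the "continuous" signal is simulated, and the sampling interval and number of levels are illustrative): sampling picks discrete locations, and quantization maps each sampled amplitude to one of a finite set of levels:

```python
import numpy as np

# Simulate a continuous 1-D intensity profile with amplitudes in [0, 1).
x = np.linspace(0, 1, 1000)
f = 0.5 * (1 + np.sin(2 * np.pi * x)) * 0.999

# Sampling: keep every 100th point (10 samples across the signal).
samples = f[::100]

# Quantization: map each amplitude to one of 2**3 = 8 discrete levels.
levels = 8
quantized = np.floor(samples * levels).astype(int)   # integers in 0..7

print(quantized)
```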
17
FUNDAMENTAL STEPS OF DIP

18
FUNDAMENTAL STEPS OF DIP… (CONT.)
1. Image Acquisition – Image acquisition involves preprocessing such as scaling etc. It could be as simple as being given
an image that is already in digital form.
2. Image Enhancement – Basically, enhancement techniques bring out detail that is obscured and highlight certain
features of interest in an image, such as changing brightness & contrast etc.
3. Image Restoration – Image restoration is an area that also deals with improving the appearance of an image. Unlike
enhancement, which is subjective, restoration is objective, in the sense that restoration techniques tend to be based on
mathematical or probabilistic models of image degradation.
4. Colour Image Processing – Colour image processing is an area that has been gaining importance because of the
significant increase in the use of digital images over the Internet. It may include colour modelling and processing in a
digital domain, etc.
5. Wavelets and Multiresolution Processing – Wavelets are the foundation for representing images at various degrees of
resolution. Images are subdivided into smaller regions for data compression and for pyramidal representation.
6. Compression – Compression techniques reduce the storage required to save an image, or the bandwidth needed to
transmit it. Compression is particularly important for images used over the internet.
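As a concrete sketch of the enhancement step described above (NumPy assumed; the gain and bias values are illustrative, not from the text), a simple linear brightness/contrast adjustment looks like this:

```python
import numpy as np

def adjust(image, gain=1.5, bias=20):
    """Linear enhancement: output = gain * input + bias,
    clipped to the valid 8-bit range [0, 255]."""
    out = gain * image.astype(float) + bias
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[0, 100, 200]], dtype=np.uint8)
print(adjust(img))   # [[ 20 170 255]]
```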
19
FUNDAMENTAL STEPS OF DIP… (CONT.)

7. Morphological Processing – Morphological processing extracts image components that are


useful in the representation and description of shape.
8. Segmentation – Segmentation procedures partition an image into its constituent parts or
objects. In general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward a successful
solution of imaging problems that require objects to be identified individually.
9. Representation and Description – Representation and description almost always follow the
output of a segmentation stage, which usually is raw pixel data that constitutes either the
boundary of a region or all the points in the region itself. Description deals with extracting
attributes that result in some quantitative information of interest or are basic for differentiating
one class of objects from another.
10. Object recognition – Recognition is the process that assigns a label, such as "apple", to an object
based on its descriptors.

20
MAIN STEPS OF DIP

• Image processing mainly includes the following steps:

1. Importing the image via image acquisition tools;
2. Analyzing and manipulating the image;
3. Output, in which the result can be an altered image or a report
based on the analysis of that image.
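These three steps can be sketched as a minimal pipeline (NumPy assumed; the synthetic image and the inversion used in step 2 are purely illustrative):

```python
import numpy as np

# 1. Import: in practice this would come from a camera or a file reader;
#    here a small synthetic image stands in for the acquired data.
image = np.array([[0, 128, 255]], dtype=np.uint8)

# 2. Analyze and manipulate: invert the intensities as a toy manipulation.
processed = 255 - image

# 3. Output: either the altered image, or a report based on the analysis.
report = {"mean_intensity": float(image.mean())}

print(processed)   # [[255 127   0]]
print(report)
```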

21
COMPONENTS
OF DIP

22
EXAMPLE OF GAMMA-RAY IMAGING
• Figure 1.6 (a) shows an image of a complete
bone scan obtained by using gamma-ray
imaging.
• Figure 1.6 (b) shows a tumor in the brain and
one in the lung.
• Figure 1.6 (c) shows the Cygnus Loop imaged
in the gamma-ray band.
• Figure 1.6 (d) shows an image of gamma
radiation from a valve in a nuclear reactor.

23
X-RAY IMAGES

• Figure 1.7 (a) : Chest X-ray.


• Figure 1.7 (b) : Aortic angiogram.
• Figure 1.7 (c) : Head.
• Figure 1.7 (d) : Circuit boards.
• Figure 1.7 (e) : Cygnus Loop imaged in the
X-ray band.

24
REMOTE SENSING
• In remote sensing, an area of the earth is scanned by a satellite or from a very high
altitude and then analyzed to obtain information about it. One particular application of
digital image processing in remote sensing is detecting infrastructure damage caused
by an earthquake.
• The area affected by an earthquake is sometimes so wide that it is not possible to
examine it with the human eye in order to estimate the damage; even where it is
possible, doing so is a very laborious and time-consuming procedure. Digital image
processing offers a solution: an image of the affected area is captured from above and
then analyzed to detect the various types of damage done by the earthquake.
• The key steps in the analysis are:
• The extraction of edges
• Analysis and enhancement of various types of edges
25
UV IMAGING

• Figure 1.8 (a) : normal corn


• Figure 1.8 (b) : smut corn
• Figure 1.8 (c) : Cygnus Loop imaged in the
ultraviolet band.
26
IMAGING IN THE VISIBLE AND INFRARED BANDS

• Figure 1.9 (a) : Taxol


• Figure 1.9 (b) : Cholesterol
• Figure 1.9 (c) : Microprocessor
• Figure 1.9 (d) : Nickel oxide thin film
• Figure 1.9 (e) : Surface of audio CD
• Figure 1.9 (f) : Organic superconductor

27
IMAGING IN THE VISIBLE AND INFRARED BANDS

28
INFRARED BANDS IMAGE

29
30
31
IMAGING IN THE MICROWAVE BAND

32
IMAGING IN THE RADIO BAND
MRI: MAGNETIC RESONANCE IMAGING

33
MACHINE/ROBOT VISION
• Apart from the many challenges a robot faces today, one of the biggest is still to
improve the robot's vision: making robots able to see things, identify them, detect
hurdles, etc. Much work has contributed to this goal, and an entire separate field,
computer vision, has grown up to work on it.
• Hurdle detection is one of the common tasks carried out through image
processing: identifying the different types of objects in an image and then
calculating the distance between the robot and the hurdles.
• Line follower robot: Many robots today work by following a line and are
therefore called line follower robots. This helps a robot move along its path and
perform its tasks, and it too has been achieved through image processing.
34
Hurdle detection Line follower robot

35
COLOR PROCESSING

• Color processing includes the processing of colored images and the different
color spaces that are used, for example the RGB, YCbCr, and HSV color models.
It also involves studying the transmission, storage, and encoding of these color
images.
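As one concrete example, the luma (Y) component used in YCbCr can be computed from RGB with a weighted sum. The BT.601 weights below are the commonly used ones; the sample pixels are illustrative:

```python
def rgb_to_luma(r, g, b):
    """Luma (Y) from RGB using the ITU-R BT.601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(rgb_to_luma(255, 255, 255))  # 255.0 for white (weights sum to 1)
print(rgb_to_luma(255, 0, 0))      # ~76.2 for pure red
```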

36
PATTERN RECOGNITION

• Pattern recognition combines image processing with various other fields,
including machine learning (a branch of artificial intelligence). In pattern
recognition, image processing is used to identify the objects in an image, and
machine learning is then used to train the system to handle changes in the
pattern. Pattern recognition is used in computer-aided diagnosis, handwriting
recognition, image recognition, etc.

37
VIDEO PROCESSING

• A video is essentially a rapid sequence of still pictures. The quality of a video
depends on the number of frames (pictures) per second and on the quality of
each frame. Video processing involves noise reduction, detail enhancement,
motion detection, frame rate conversion, aspect ratio conversion, color space
conversion, etc.
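The earlier bit-depth formula, applied per frame, shows why compression matters for video (plain Python; the frame size and rate are illustrative):

```python
# Uncompressed data rate of an illustrative video stream:
# 640 x 480 frames, 24 bits per pixel (8 per RGB channel), 30 frames/second.
rows, cols, bits_per_pixel = 480, 640, 24
fps = 30

bits_per_frame = rows * cols * bits_per_pixel
bits_per_second = bits_per_frame * fps
megabytes_per_second = bits_per_second / 8 / 1e6

print(megabytes_per_second)   # ~27.6 MB/s -- hence the need for compression
```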

38
REFERENCES

• Chapter 1, Digital Image Processing, 4e


• Internet

Thank You

39
