Lec 1
IMAGE PROCESSING
ICT4201:DIP
COURSE OBJECTIVE
• This course introduces digital image processing. It focuses on the theory and algorithms for various
operations on images including acquisition and formation, enhancement, segmentation, and
representation.
• By the end of this course, students will be able to:
• Explain how digital images are represented and manipulated in a computer, including reading from and writing to
storage, and displaying them.
• Write a program which implements fundamental image processing algorithms.
• Be conversant with the mathematical description of image processing techniques and know how to go from the
equations to code.
2
TEXT
• Digital Image Processing
• Rafael C. Gonzalez and Richard E. Woods
• Pearson
3
WHAT IS DIGITAL IMAGE PROCESSING?
• Digital Image Processing means processing digital images by means of a digital
computer. We can also say that it is the use of computer algorithms to obtain an
enhanced image or to extract some useful information from it.
• Digital image processing is the use of algorithms and mathematical models to
process and analyze digital images. The goal of digital image processing is to
enhance the quality of images, extract meaningful information from images, and
automate image-based tasks.
• Digital image processing focuses on two major tasks
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and representation for autonomous machine perception
4
ORIGIN OF IMAGE PROCESSING
• One of the earliest applications of digital images was in the newspaper industry, when pictures were
first sent by submarine cable between London and New York.
• The cable picture transmission in 1921 reduced the time required to transport a picture across the
Atlantic from more than a week to less than three hours.
• Today we can watch a live video feed or live CCTV footage from one continent to another with a
delay of only seconds, which shows how much work has since been done in this field. The field is
not limited to transmission; it also covers encoding: many formats have been developed to encode
photos and video for high or low bandwidth and stream them over the internet.
5
APPLICATIONS OF DIP
• Some of the major fields in which digital image processing is widely used are mentioned below
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
• Microscopic Imaging
• Others
6
APPLICATIONS IN THE MEDICAL FIELD
7
SOURCE OF IMAGING
• One of the simplest ways to develop a basic understanding of the extent of image processing
applications is to categorize images according to their source (e.g., X-ray, visual, infrared, and so
on).
• The principal energy source for images in use today is the electromagnetic energy spectrum.
• Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of
electron beams used in electron microscopy). Synthetic images, used for modeling and
visualization, are generated by computer.
8
SOURCE OF IMAGING… (CONT.)
• In this electromagnetic spectrum, we are only able to see the visible band. The visible spectrum
mainly includes seven different colours, commonly termed VIBGYOR: violet, indigo, blue, green,
yellow, orange, and red.
• That does not nullify the existence of the rest of the spectrum. The human eye can only see the
visible portion, in which we see everyday objects, but sensors can capture what the naked eye
cannot, for example X-rays, gamma rays, etc. Hence the analysis of those bands is also done in
digital image processing.
• Why do we need to analyse the rest of the EM spectrum too?
• The answer lies in the fact that those other bands are widely used: X-rays in the medical field,
gamma rays in nuclear medicine and astronomical observation, and so on for the remaining parts of
the EM spectrum.
9
WHAT IS A DIGITAL IMAGE?
• In image processing, the term ‘image’ is used to denote the image data that is sampled,
quantized, and readily available in a form suitable for further processing by digital
computers.
• An image may be defined as a two-dimensional function, f (x, y), where x and y are
spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is
called the intensity or gray level of the image at that point.
• When x, y, and the intensity values of f are all finite, discrete quantities, we call the
image a digital image.
10
PIXEL
A digital image is a representation of a two-dimensional image as a finite set of
digital values, called picture elements or pixels
11
IMAGE AS MATRIX
• As we know, an image is arranged in rows and columns, so it can be written in the following matrix form:

f(x, y) = | f(0,0)     f(0,1)     ...  f(0,N-1)   |
          | f(1,0)     f(1,1)     ...  f(1,N-1)   |
          | ...        ...        ...  ...        |
          | f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) |

• The right side of this equation is a digital image by definition. Every element of this
matrix is called an image element, picture element, or pixel.
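A minimal Python/NumPy sketch of this idea, assuming OpenCV is available for reading the file; "sample.png" is a placeholder filename, not part of the lecture material:

```python
import cv2

# Read an image file as a grey-scale matrix of 8-bit intensity values.
# "sample.png" is a placeholder filename.
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

M, N = img.shape                          # number of rows and columns
print("Image size:", M, "x", N)
print("Pixel f(x, y) at row 10, column 20:", img[10, 20])
print("Smallest / largest grey level:", img.min(), img.max())
```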
12
IMPORTANT TERMS
Resolution
Resolution is an important characteristic of an imaging system. It is the ability of the imaging
system to reproduce the smallest discernible detail, i.e., to render the smallest-sized object clearly and
differentiate it from the neighbouring small objects present in the image.
Bit Depth
The number of bits used to encode a pixel value is called the bit depth. The number of gray levels
that can be represented is 2 raised to the bit depth (for example, 8 bits give 2^8 = 256 levels). The
total number of bits necessary to represent the image is:
Number of rows × Number of columns × Bit depth
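A quick worked example of this storage formula as a minimal Python sketch; the 1024 × 1024 size and 8-bit depth are only illustrative:

```python
# Worked example of: Number of rows x Number of columns x Bit depth
rows, cols = 1024, 1024      # illustrative image dimensions
bit_depth = 8                # 8 bits per pixel -> 2**8 = 256 gray levels

total_bits = rows * cols * bit_depth
print("Gray levels :", 2 ** bit_depth)     # 256
print("Total bits  :", total_bits)         # 8,388,608
print("Total bytes :", total_bits // 8)    # 1,048,576 (1 MiB)
```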
13
TYPES OF IMAGES
14
TYPES OF IMAGES….(CONT.)
• Grey scale images are different from binary images in that they have many shades of
grey between black and white. These images are also called monochromatic because, as in
binary images, there is no colour component. Grey scale is the term that refers to the
range of shades between white and black, or vice versa.
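A minimal NumPy sketch of the difference: a grey-scale image holds many intensity levels, while thresholding collapses it to the two levels of a binary image. The ramp image and the threshold value are only illustrative:

```python
import numpy as np

# A small synthetic grey-scale image: a horizontal ramp of grey levels 0..255.
gray = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
print("Distinct grey levels:", len(np.unique(gray)))          # 256 shades

# Thresholding collapses the many shades into a binary (two-level) image.
binary = (gray > 127).astype(np.uint8) * 255
print("Levels after thresholding:", np.unique(binary))        # [  0 255]
```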
15
TYPES OF IMAGES….(CONT.)
Pseudo color images
Like true color images, pseudocolor images are also widely used in image
processing. True color images are called three-band images. However, in remote
sensing applications, multi-band or multi-spectral images are generally used.
These images, which are captured by satellites, contain many bands.
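A minimal sketch of pseudocolouring, assuming OpenCV is available: a single-band (grey) image is mapped to colours through a lookup table. The synthetic ramp stands in for one band of a satellite image, and the choice of colormap is only illustrative:

```python
import cv2
import numpy as np

# A single-band (grey-scale) image; this synthetic ramp stands in for one band.
band = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

# Pseudocolouring: map each grey value to a colour through a lookup table.
pseudo = cv2.applyColorMap(band, cv2.COLORMAP_JET)
print(band.shape, "->", pseudo.shape)   # (64, 256) -> (64, 256, 3)
```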
16
HOW A DIGITAL IMAGE IS FORMED
• Capturing an image with a camera is a physical process. Sunlight is used
as the source of energy.
• A sensor array is used for the acquisition of the image. When sunlight falls
upon the object, the amount of light reflected by that object is sensed by the
sensors, and a continuous voltage signal is generated in proportion to the amount of
sensed light.
• In order to create a digital image, we need to convert this data into digital form.
This involves sampling and quantization (discussed later). The result of sampling
and quantization is a two-dimensional array, or matrix, of numbers, which is
nothing but a digital image.
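A minimal NumPy sketch of the two operations, starting from an already-digitized ramp image (the sizes and the number of levels are only illustrative): sampling coarsens the spatial grid, and quantization reduces the number of grey levels:

```python
import numpy as np

# An already-digitized 8-bit image (synthetic ramp used for illustration).
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Sampling: keep every 4th row and column -> a coarser spatial grid.
sampled = img[::4, ::4]
print("Spatial size:", img.shape, "->", sampled.shape)      # (256, 256) -> (64, 64)

# Quantization: map 256 grey levels down to 8 levels.
levels = 8
step = 256 // levels
quantized = (img // step) * step
print("Grey levels:", len(np.unique(img)), "->", len(np.unique(quantized)))  # 256 -> 8
```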
17
FUNDAMENTAL STEPS OF DIP
18
FUNDAMENTAL STEPS OF DIP… (CONT.)
1. Image Acquisition – Image acquisition may involve preprocessing such as scaling. It could be as simple as being given
an image that is already in digital form.
2. Image Enhancement – Enhancement techniques bring out detail that is obscured and highlight certain
features of interest in an image, for example by changing brightness and contrast (a minimal sketch follows this list).
3. Image Restoration – Image restoration is an area that also deals with improving the appearance of an image. Image
restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic
models of image degradation; enhancement, by contrast, is subjective.
4. Colour Image Processing – Colour image processing is an area that has been gaining in importance because of the
significant increase in the use of digital images over the Internet. It includes colour modelling and processing in
a digital domain.
5. Wavelets and Multiresolution Processing – Wavelets are the foundation for representing images in various degrees of
resolution. Images are subdivided into smaller regions for data compression and for pyramidal
representation.
6. Compression – Compression techniques reduce the storage required to save an image or the bandwidth needed to
transmit it. Compression is particularly important when data is sent over the internet.
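As referenced in step 2, here is a minimal Python/NumPy sketch of a simple enhancement: a linear brightness/contrast adjustment g(x, y) = alpha·f(x, y) + beta. The alpha/beta values and the synthetic low-contrast image are only illustrative:

```python
import numpy as np

def adjust_brightness_contrast(img, alpha=1.2, beta=20):
    """Linear enhancement g(x, y) = alpha * f(x, y) + beta.

    alpha > 1 stretches contrast, beta > 0 brightens; values are illustrative.
    """
    out = alpha * img.astype(np.float32) + beta
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage on a synthetic low-contrast image (grey levels 100..150 only).
dull = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
enhanced = adjust_brightness_contrast(dull)
print("Before:", dull.min(), "-", dull.max())          # about 100 - 150
print("After :", enhanced.min(), "-", enhanced.max())  # about 140 - 200
```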
19
FUNDAMENTAL STEPS OF DIP… (CONT.)
20
MAIN STEPS OF DIP
21
COMPONENTS
OF DIP
22
EXAMPLE OF GAMMA-RAY IMAGING
• Figure 1.6 (a) shows an image of a complete
bone scan obtained by using gamma-ray
imaging.
• Figure 1.6 (b) shows a tumor in the brain and
one in the lung.
• Figure 1.6 (c) shows the Cygnus Loop imaged
in the gamma-ray band.
• Figure 1.6 (d) shows an image of gamma
radiation from a valve in a nuclear reactor.
23
X-RAY IMAGES
24
ULTRA VIOLET IMAGING
• In the field of remote sensing, an area of the earth is scanned by a satellite or from very
high ground and then analysed to obtain information about it. One particular application of
digital image processing in remote sensing is detecting infrastructure damage
caused by an earthquake.
• Assessing the damage manually takes a long time, even when the effort is focused on the most
serious cases. The area affected by an earthquake is often so wide that it is not possible to
examine it by eye to estimate the damage, and even where it is possible, the procedure is hectic
and time-consuming. Digital image processing offers a solution: an image of the affected area is
captured from above and then analysed to detect the various types of damage done by the
earthquake.
• The key steps involved in the analysis are (a minimal edge-extraction sketch follows this list):
• The extraction of edges
• Analysis and enhancement of various types of edges
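A minimal sketch of the first key step, edge extraction, assuming OpenCV is available; the filename "aerial.png" and the Canny thresholds are placeholders, not part of the lecture material:

```python
import cv2

# "aerial.png" is a placeholder for an aerial image of the affected area.
aerial = cv2.imread("aerial.png", cv2.IMREAD_GRAYSCALE)

# Edge extraction: the Canny detector finds strong intensity transitions,
# which often correspond to roads, building outlines, cracks, and so on.
edges = cv2.Canny(aerial, 100, 200)   # the two thresholds are illustrative

print("Edge pixels found:", int((edges > 0).sum()))
```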
25
UV IMAGING
27
IMAGING IN THE VISIBLE AND INFRARED BANDS
28
INFRARED BANDS IMAGE
29
30
31
IMAGING IN THE MICROWAVE BAND
32
IMAGING IN THE RADIO BAND
MRI: MAGNETIC RESONANCE IMAGING
33
MACHINE/ROBOT VISION
• Apart from the many challenges that a robot faces today, one of the biggest is
still to improve the robot's vision: making the robot able to see things,
identify them, identify hurdles, and so on. Much work has been contributed by this
field, and a whole separate field, computer vision, has been introduced to work on
it.
• Hurdle detection is one of the common tasks done through image
processing: different types of objects are identified in the image and then
the distance between the robot and the hurdles is calculated (a minimal sketch follows).
• Line follower robot: many robots work by following a line and are
therefore called line follower robots. This helps a robot stay on its path and
perform its tasks. This has also been achieved through image processing.
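A minimal hurdle-detection sketch, assuming OpenCV; the filename, the threshold value, and the assumptions that obstacles are darker than the floor and that the robot sits at the bottom centre of the frame are all illustrative, not part of the lecture material:

```python
import cv2
import numpy as np

# "camera_frame.png" is a placeholder for one frame from the robot's camera.
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# Separate dark obstacles from a lighter floor (threshold value is illustrative).
_, mask = cv2.threshold(frame, 80, 255, cv2.THRESH_BINARY_INV)

# Treat each connected blob as a potential hurdle (OpenCV 4 return signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

h, w = frame.shape
robot = np.array([w / 2.0, float(h)])    # assume the robot is at the bottom centre
for c in contours:
    m = cv2.moments(c)
    if m["m00"] == 0:
        continue
    centre = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    dist = np.linalg.norm(centre - robot)
    print("Hurdle at", centre.round(1), "pixel distance", round(dist, 1))
```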
34
[Figures: hurdle detection; line follower robot]
35
COLOR PROCESSING
36
PATTERN RECOGNITION
• Pattern recognition draws on image processing and on various other
fields, including machine learning (a branch of artificial intelligence). In
pattern recognition, image processing is used to identify the objects in an
image, and machine learning is then used to train the system on changes in the
pattern. Pattern recognition is used in computer-aided diagnosis, handwriting
recognition, image recognition, etc.
37
VIDEO PROCESSING
• A video is nothing but a rapid sequence of pictures (frames). The quality of a
video depends on the number of frames per second (the frame rate) and on the quality of
each frame being used. Video processing involves noise reduction, detail
enhancement, motion detection, frame rate conversion, aspect ratio
conversion, colour space conversion, etc. (a minimal frame-reading sketch follows).
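A minimal sketch of reading a video frame by frame, assuming OpenCV; "clip.mp4" is a placeholder filename:

```python
import cv2

# "clip.mp4" is a placeholder filename for a short video.
cap = cv2.VideoCapture("clip.mp4")

fps = cap.get(cv2.CAP_PROP_FPS)     # frames per second of the source
print("Frame rate:", fps)

n_frames = 0
while True:
    ok, frame = cap.read()          # each frame is just an image (a matrix)
    if not ok:
        break
    n_frames += 1                   # per-frame processing (noise reduction, etc.) would go here

cap.release()
print("Frames read:", n_frames)
```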
38
REFERENCES
Thank You
39