
1 Fundamentals of Digital Image

1
FUNDAMENTALS OF DIGITAL IMAGE
Unit structure:
1.0 Objectives
1.1 Introduction
1.1.1 What is a Digital Image?
1.1.2 Relation between Pixels
1.1.3 Connectivity between Pixels
1.1.4 Distance Measures
1.2 Types of Image Processing
1.3 Types of Digital Image
1.3.1 Monochrome Image (Binary Image)
1.3.2 Grey Scale Image
1.3.3 Colour Image
1.3.4 Half-toned Image
1.4 Fundamental Steps in Digital Image Processing
1.5 Components of an Image Processing System
1.6 Summary
1.7 Unit End Exercise
1.8 Further Reading

1.0 OBJECTIVES

The chapter objectives are:

 Understand the differences between computer vision and image processing.
 Understand how an image is represented mathematically.
 Know the types of digital images.
 Understand image types such as binary images, grey-scale images, colour and multi-spectral images.
 Know the basic components of an image processing system.
 Understand the model for an image analysis process.

www.rocktheit.com www.facebook.com/rocktheit

1.1 INTRODUCTION

 Image processing is a method to convert an image into digital form and perform some
operations on it, in order to get an enhanced image or to extract some useful information
from it.
 It is a type of signal processing in which the input is an image and the output may be an image or characteristics of that image.
 Image Processing systems treat images as two dimensional signals and apply signal
processing methods to them.
 It is among the most rapidly growing technologies today, with applications in many areas of business.
 Image Processing forms core research area within engineering and computer science
disciplines.
 Image processing basically includes the following three steps:
 Importing the image via an optical scanner or digital photography.
 Analysing and manipulating the image, which includes data compression and image enhancement.
 Outputting the result, which can be an altered image or a report based on image analysis.
 Purpose of Image processing:
 The purpose of image processing is divided into 5 groups.
 They are:
1. Visualization - Observe objects that are not visible.
2. Image sharpening and restoration - Create a better image.
3. Image retrieval - Search for an image of interest.
4. Measurement of pattern - Measure various objects in an image.
5. Image recognition - Distinguish the objects in an image.

1.1.1 What is a Digital Image?

 We have grown up viewing different images.


 Whatever our eyes see is an image.
 We can categorize these images as analog or digital.
 The real-time images which our eyes perceive, such as painted pictures and the world around us, are analog images.


 These images are continuous points (signals) in nature.


 The images which we see in electronic systems such as computers, mobile phones etc. are
digital images.
 These images are discrete points (signals) in nature.
 As we now know the difference between digital and analog images, let us represent a digital image as a mathematical function.
 Definition: An image may be defined as a two-dimensional function, f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or grey level of the image at that point. When x, y and the amplitude values of f are all discrete values, we call the image a digital image.
 An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows (hence two-dimensional).

Figure 1.1 - (a) Digital image (b) Pixel elements (c) Pixel intensity values

 A normal greyscale image has 8-bit "colour" depth: 2^8 = 256 greyscales.

 A "true colour" image has 24-bit colour depth: 3 channels x 8 bits per pixel, giving 256 x 256 x 256 colours, i.e. ~16 million colours.
 There are two general groups of 'images':
 Vector graphics (line art)
 Bitmaps (pixel-based, or 'images' proper).
 We will discuss grey scale images throughout this book.

 Advantages of Digital Images

 The processing of images is faster and cost effective.


 Digital images can be effectively stored and efficiently transmitted from one place to
another.


 Once an image is in digital format, its reproduction is both faster and cheaper.
 When shooting a digital image, one can immediately see whether the image is good or not.
 Drawbacks of Digital Images
 A digital file cannot be enlarged beyond a certain size without compromising quality.
 The memory required to store and process good quality images is very high.

1.1.2 Relation between Pixels

 f(x,y) is the function for a digital image.

 p and q are two pixels in the image f(x,y).
 A subset of the pixels of f(x,y) is denoted by S.
 A pixel p at coordinates (x,y) has 2 horizontal and 2 vertical neighbours: (x+1,y), (x-1,y), (x,y+1), (x,y-1).
 This set of pixels is called the 4-neighbors of p, denoted by N4(p).
 Each of these pixels is a unit distance from (x,y).
 If (x,y) is on the border of the image, then some of the neighbours of p lie outside the digital image.

(x-1,y-1)   (x,y-1)   (x+1,y-1)
(x-1,y)     (x,y)     (x+1,y)
(x-1,y+1)   (x,y+1)   (x+1,y+1)

Figure 1.2 Pixel Neighbours

 The 4 diagonal neighbors of p, denoted ND(p), are:

(x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1)
 These points, together with the 4-neighbors, are called the 8-neighbors of p: N8(p) = N4(p) ∪ ND(p).
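The neighbour sets above follow directly from the coordinate arithmetic; a minimal sketch in plain Python (function names are illustrative, not from any library):

```python
def n4(p):
    """4-neighbours of p: the 2 horizontal and 2 vertical neighbours."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """The 4 diagonal neighbours of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbours: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

def inside(p, width, height):
    """Bounds check for discarding neighbours that fall outside the image."""
    x, y = p
    return 0 <= x < width and 0 <= y < height
```

For a border pixel such as (0,0) in a 4 x 4 image, only 3 of the 8 neighbours survive the bounds check, which is exactly the border case noted above.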

1.1.3 Connectivity between Pixels


 Connectivity between pixels is important because it is used in establishing boundaries of


objects and components of regions in an image.
 Two pixels are connected if:
 They are neighbors (i.e. adjacent in some sense, e.g. N4(p), N8(p), …), and
 Their grey levels satisfy a specified criterion of similarity (e.g. equality).
 In a binary image with values 0 and 1, two pixels may be 4-neighbours, but they are said to be connected only if they have the same value.
 V is the set of grey-level values used to define adjacency (e.g. V = {1} for adjacency of pixels of value 1).
 To connect pixels we need to understand adjacency of pixels.
Adjacency:
 There are three types of adjacency:
 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
 m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels with values from V.
 Mixed adjacency is a modification of 8-adjacency and is used to eliminate the multiple-path connections that often arise when 8-adjacency is used.

Figure 1.3 Pixel Adjacency
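The three adjacency tests can be sketched in Python. The image is represented here as a plain dictionary mapping (x, y) coordinates to grey values, and all function names are illustrative; the small neighbour helpers are repeated so the sketch is self-contained:

```python
def n4(p):
    """4-neighbours of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbours of pixel p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def in_v(img, p, V):
    """True if pixel p exists in img and its value belongs to V."""
    return img.get(p) in V

def adjacent4(p, q, img, V):
    return in_v(img, p, V) and in_v(img, q, V) and q in n4(p)

def adjacent8(p, q, img, V):
    return in_v(img, p, V) and in_v(img, q, V) and q in (n4(p) | nd(p))

def adjacent_m(p, q, img, V):
    if not (in_v(img, p, V) and in_v(img, q, V)):
        return False
    if q in n4(p):
        return True
    # Diagonal neighbours qualify only when no shared 4-neighbour is in V,
    # which removes the ambiguous double paths of plain 8-adjacency.
    return q in nd(p) and not any(in_v(img, r, V) for r in n4(p) & n4(q))
```

For V = {1}, two diagonal pixels of value 1 that share a 4-neighbour of value 1 are 8-adjacent but not m-adjacent; this is precisely the multiple-path case that mixed adjacency eliminates.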

 Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in
S2.
 Path:
 A (digital) path (curve) from pixel p with coordinates (x,y) to pixel q with coordinates (s,t) is a sequence of distinct pixels (x0,y0), (x1,y1), …, (xn,yn), where (x0,y0) = (x,y), (xn,yn) = (s,t), and (xi,yi) is adjacent to (xi-1,yi-1) for 1 ≤ i ≤ n; n is the length of the path.


 If (x0,y0) = (xn,yn), i.e. if the start point and end point are the same, then the path is known as a closed path.
 4-, 8- and m-paths can be defined depending on the type of adjacency specified.

Figure 1.4 Pixel Adjacency


 S is a subset of pixels in an image f(x,y).
 If p, q belong to S, then q is connected to p in S if there is a path from p to q consisting entirely of pixels in S.
 For any pixel p in S, the set of pixels in S that are connected to p is called a connected component of S.
 If S has only one connected component, then S is called a connected set.
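The path-based definition of connectivity maps directly onto a breadth-first search. A minimal sketch using 8-adjacency over a subset S given as a set of coordinates (illustrative names, no library assumed):

```python
from collections import deque

def n8(p):
    """8-neighbours of pixel p = (x, y)."""
    x, y = p
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - {p}

def connected_component(S, p):
    """All pixels of subset S reachable from p by an 8-path inside S."""
    S = set(S)
    seen, frontier = {p}, deque([p])
    while frontier:
        cur = frontier.popleft()
        for nb in n8(cur) & S:       # only step to pixels that are in S
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return seen
```

If connected_component(S, p) returns all of S, then S is a connected set in the sense defined above.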

1.1.4 Distance Measures

 For pixels p,q,z with coordinates (x,y), (s,t), (u,v), D is a distance function or metric if:
 D(p,q) ≥ 0 (D(p,q)=0 iff p=q)
 D(p,q) = D(q,p) and
 D(p,z) ≤ D(p,q) + D(q,z)
 Euclidean distance:
De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)
 D4 distance (city-block distance) between p & q:
D4(p,q) = |x-s| + |y-t|
 The pixels having D4 distance from (x,y) less than or equal to some value r form a diamond
centered at (x,y).
 e.g. pixels with D4≤2 from p(x,y) (center point).

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

 D8 distance (chessboard distance) between p & q:


D8(p,q) = max(|x-s|,|y-t|)
 The pixels with D8 distance from (x,y) less than or equal to some value r form a square
centered at (x,y).
 e.g. pixels with D8≤2 from p(x,y) (center point):
2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2

The pixels with D8 = 1 are the 8-neighbors of p(x,y).

 D4 and D8 distances between p and q are independent of any paths that exist between the
points because these distances involve only the coordinates of the points (regardless of
whether a connected path exists between them).
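Since De, D4 and D8 depend only on the coordinates, each is a one-liner; a minimal sketch (plain Python, illustrative function names):

```python
import math

def d_e(p, q):
    """Euclidean distance De(p, q)."""
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d4(p, q):
    """City-block distance D4(p, q)."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance D8(p, q)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))
```

Note that d8 returns 1 exactly for the 8-neighbors of p, matching the diagram above.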
 However, for m-adjacency: The Dm distance between two points is defined as the shortest
m-path between the points.
 The distance between two pixels will depend on the values of the pixels along the path as
well as the values of their neighbors.
 e.g. assume p, p2, p4 = 1, and p1, p3 can have either 0 or 1:

p3  p4
p1  p2
p

If only connectivity of pixels valued 1 is allowed, and p1 and p3 are 0, the m-distance between p and p4 is 2 (the path p p2 p4). If either p1 or p3 is 1, the distance is 3; if both are 1, the distance is 4.
1.2 TYPES OF IMAGE PROCESSING

 Image processing can be classified in three types : low-level, mid-level and high-level
processing.
 Low-level processing:
 It is the first step in image processing.
 Here both the input and the output are images.


 It involves image pre-processing operations such as noise filtering, contrast enhancement, compression, and sharpening.
 Mid-level processing:
 It is an intermediate step in image processing.
 Here the input is an image, but the output consists of attributes extracted from the image.
 It involves image processing operations such as image segmentation.
 High-level processing:
 It is the step where we analyse the image to obtain the results we are looking for.
 Here the input is an image, but the output is the analysis results for that image.
 Here image attributes are recognized.

1.3 TYPES OF DIGITAL IMAGE

 Images can be categorised into four types: monochrome image (binary image), grey scale image, colour image and half-toned image.
1.3.1 Monochrome Image (Binary Image)
 It is a black and white image.
 Each pixel is represented by a single bit: 0 or 1.
 Here 0 represents black and 1 represents white.
 It is also called a bit-mapped image.
1.3.2 Grey Scale Image
 It is a grey image with 256 shades of grey (levels 0 to 255).
 Each pixel is represented by one byte (8 bits, hence 2^8 = 256 grey shades).
 Here 0 represents black and 255 represents white.
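The relation between the two types can be seen by thresholding: a grey scale image collapses to a binary one by comparing each byte against a cut-off. A minimal sketch; the threshold of 128 is an arbitrary illustrative choice:

```python
def to_binary(grey_image, threshold=128):
    """Map an 8-bit greyscale image (list of rows) to a 0/1 binary image."""
    return [[1 if value >= threshold else 0 for value in row]
            for row in grey_image]

grey = [[0, 100, 200],
        [255, 50, 130]]
binary = to_binary(grey)   # [[0, 0, 1], [1, 0, 1]]
```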
1.3.3 Colour Image
 Different colours are formed using three basic colours, viz. Red, Green and Blue.
 Each pixel stores an RGB combination, with each colour represented using 8 bits.
1.3.4 Half-Toned Image
 These images are also black and white images, but they give the illusion of a grey scale image.
 The technique used to achieve this illusion is known as half-toning.
 The image matrix is filled completely with black to get a black shade, and the density of black-filled blocks is varied to get the illusion of grey shades.
 This is mostly done because most black and white printers have only a black cartridge, so grey shades can be produced with such printers.
 Newspapers use this technique.
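Half-toning can be sketched with ordered dithering: each grey value is compared against a position-dependent threshold taken from a small Bayer matrix, so mid greys come out as alternating black and white dots. This is an illustrative sketch of the idea, not the exact process a particular printer uses:

```python
# 2x2 Bayer threshold matrix; larger matrices give more apparent grey levels.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def halftone(grey_image):
    """Map an 8-bit greyscale image (list of rows) to a 0/1 halftone."""
    out = []
    for y, row in enumerate(grey_image):
        out.append([])
        for x, value in enumerate(row):
            # Threshold depends on the pixel's position within the 2x2 tile.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 256 / 4
            out[-1].append(1 if value >= threshold else 0)
    return out
```

A uniform mid-grey block comes out as a checkerboard of dots, which the eye averages back into grey.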


1.4 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING

Digital image processing consists of a number of fundamental steps, each of which may have sub-steps. The fundamental steps are described below with a neat diagram.

Figure 1.5 Steps involved in Digital Image Processing


(i) Image Acquisition: This is the first of the fundamental steps of digital image processing. Image acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves pre-processing, such as scaling.
(ii) Image Enhancement: Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image, such as by changing brightness and contrast.
(iii) Image Restoration : Image restoration is an area that also deals with improving the
appearance of an image. However, unlike enhancement, which is subjective, image restoration
is objective, in the sense that restoration techniques tend to be based on mathematical or
probabilistic models of image degradation.
(iv) Colour Image Processing: Colour image processing is an area that has been gaining importance because of the significant increase in the use of digital images over the Internet. It may include colour modelling and processing in a digital domain.


(v) Wavelets and Multiresolution Processing: Wavelets are the foundation for representing images in various degrees of resolution. Images are subdivided successively into smaller regions for data compression and for pyramidal representation.
(vi) Compression: Compression deals with techniques for reducing the storage required to save an image or the bandwidth needed to transmit it. Compression is particularly necessary for images used on the internet.
(vii) Morphological Processing : Morphological processing deals with tools for extracting
image components that are useful in the representation and description of shape.
(viii) Segmentation : Segmentation procedures partition an image into its constituent parts or
objects. In general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward
successful solution of imaging problems that require objects to be identified individually.
(ix) Representation and Description : Representation and description almost always
follow the output of a segmentation stage, which usually is raw pixel data, constituting either
the boundary of a region or all the points in the region itself. Choosing a representation is only
part of the solution for transforming raw data into a form suitable for subsequent computer
processing. Description deals with extracting attributes that result in some quantitative
information of interest or are basic for differentiating one class of objects from another.

(x) Object Recognition: Recognition is the process that assigns a label, such as "vehicle", to an object based on its descriptors.

(xi) Knowledge Base : Knowledge may be as simple as detailing regions of an image


where the information of interest is known to be located, thus limiting the search that has to be
conducted in seeking that information. The knowledge base also can be quite complex, such
as an interrelated list of all major possible defects in a materials inspection problem or an
image database containing high-resolution satellite images of a region in connection with
change-detection applications.


Figure 1.6 Levels of image processing

1.5 COMPONENTS OF AN IMAGE PROCESSING SYSTEM

Figure 1.7 Components involved in Image Processing

Sensors

 Sensors produce an electrical output proportional to light intensity.


 With reference to sensing, two elements are required to acquire digital images.
1. The first is a physical device(sensor) that is sensitive to the energy radiated by the
object we wish to image.
2. The second, called a digitizer, is a device for converting the output of the physical
sensing device into digital form.
3. For instance, in a digital video camera, the sensors produce an electrical output
proportional to light intensity. The digitizer converts these outputs to digital data.

Specialized image processing hardware


 Usually consists of the digitizer, plus hardware that performs other primitive operations,
such as an arithmetic logic unit (ALU).
 One example of how an ALU is used is in averaging images as quickly as they are
digitized, for the purpose of noise reduction.
 This type of hardware sometimes is called a front-end subsystem.


 In other words, this unit performs functions that require fast data throughputs (e.g.,
digitizing and averaging video images at 30 frames/s) that the typical main computer
cannot handle.

The Computer
 The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer.
 In dedicated applications, sometimes specially designed computers are used to achieve a
required level of performance, but our interest here is on general-purpose image
processing systems.
 In these systems, almost any well-equipped PC-type machine is suitable for offline image
processing tasks.

Software
 Software for image processing consists of specialized modules that perform specific tasks.
 A well-designed package also includes the capability for the user to write code.

Mass storage
 Mass storage capability is a must in image processing applications.
 An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
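The one-megabyte figure follows from multiplying the pixel count by the bytes per pixel; a quick sketch (the helper name is illustrative):

```python
def storage_bytes(width, height, bits_per_pixel=8):
    """Uncompressed storage required for a width x height image."""
    return width * height * bits_per_pixel // 8

# 1024 x 1024 pixels at 8 bits each:
size = storage_bytes(1024, 1024, 8)      # 1,048,576 bytes = 1 Mbyte
colour = storage_bytes(1024, 1024, 24)   # 3 Mbytes for 24-bit colour
```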
 Digital storage for image processing applications falls into three principal categories:
1. Short term storage for use during processing.
2. On-line storage for relatively fast recall.
3. Archival storage, characterized by infrequent access.
 Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (giga, or one billion, bytes), and Tbytes (tera, or one trillion, bytes).
 One method of providing short-term storage is computer memory.
 Another is by specialized boards, called frame buffers, that store one or more images and
can be accessed rapidly, usually at video rates (e.g., at 30 complete images per second).
 Online storage generally takes the form of magnetic disks or optical-media storage.

Image displays
 Image displays in use today are mainly colour (preferably flat-screen) monitors.

Hardcopy devices
 Hardcopy devices for recording images include laser printers and inkjet units.


 But paper is the obvious medium of choice for written material.

Networking
 Networking means the exchange of information or services (e.g. through the internet) among individuals, groups, or institutions.
 Networking is almost a default function in any computer system in use today.
 Because of the large amount of data inherent in image processing applications, the key
consideration in image transmission is bandwidth.

1.6 SUMMARY

 Image Processing systems treat images as two dimensional signals and apply signal
processing methods to them.
 The purpose of image processing is divided into 5 groups: visualization, image sharpening and restoration, image retrieval, measurement of pattern, and image recognition.
 An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows (hence two-dimensional).
 There are three types of adjacency: 4-adjacency, 8-adjacency and m-adjacency (mixed adjacency).
 Euclidean distance:
De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)
 Image processing can be classified into three types: low-level, mid-level and high-level processing.
 Images can be categorised into four types: monochrome image (binary image), grey scale image, colour image and half-toned image.
 Fundamental Steps of Digital Image Processing :
Image Acquisition, Image Enhancement , Image Restoration, Colour Image Processing,
Wavelets and Multiresolution Processing, Compression, Morphological Processing,
Segmentation, Representation and Description, Object recognition, Knowledge Base.

1.7 UNIT END EXERCISE

1) What is a digital image?


2) Explain the need for image processing.
3) With a neat block diagram, explain the fundamental steps in digital image processing.
4) Explain the differences between image enhancement and restoration.
5) Explain the following terms used in image processing:
a. Adjacency and its types


b. Distance measures
c. Connectivity
d. Region
e. Edge
6) Consider the two image subsets, S1 and S2, shown in the following figure. For V={1},
determine whether these two subsets are
(a) 4-adjacent, (b) 8-adjacent, or (c) m-adjacent.

7) Consider the image segment shown.


a. Let V={7, 8} and compute the lengths of the shortest 4-, 8-, and
m-path between p and q. If a particular path does not exist between these two
points, explain why.
b. Repeat for V={5,6,7}.
(p) 7 8 7 6
    5 6 9 8
    9 7 8 6
    5 6 7 7 (q)
8) With the help of a neat block diagram, explain the components of a general purpose image
processing system.
9) What are the basic relationships between pixels?
With neat diagrams and appropriate mathematical expressions explain:
i) Neighbors
ii) Adjacency
iii) Connectivity
10) Define different types of adjacency & explain how m-adjacency is different from 8-adjacency with an example.
11) Write a note on different distance measures.
12) For pixels p, q with co-ordinates (x, y) and (s, t) respectively, find the distance metric D for the following cases:
1. Euclidean distance De(p, q)
2. City-block distance D4(p, q)
3. Chessboard distance D8(p, q)
13) Write a note on Image Operations on a Pixel Basis


1.8 FURTHER READING

1. R. C. Gonzalez, R. E. Woods, Digital Image Processing, Second Edition, Pearson Education.

2. B. Chanda, D. Dutta Majumder, Digital Image Processing and Analysis, PHI.


