
DIGITAL IMAGE PROCESSING

Submitted by: Thanmai Reddy B - 1NT17IS181


Suprabha Bhat - 1NT17IS169
T. Swechha Ramaiah - 1NT17IS176
Vishakh - 1NT17IS197

1. What is digital image processing? What are the different levels of image processing?

Digital image processing is the use of computer algorithms to perform image processing on digital images. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing.

There are three different levels of image processing:

• Low-level processing - It involves primitive operations such as image preprocessing to reduce noise, contrast enhancement, image sharpening, etc. In the low-level process, both input and output are images (a small sketch of two such operations is given after this list).

• Mid-level processing - It involves tasks such as image segmentation, description of images, object recognition, etc. In the mid-level process, inputs are generally images, but its outputs are generally image attributes (for example, edges, contours, or the identities of individual objects).

• High-level processing - It involves "making sense" of an ensemble of recognized objects; this process is normally associated with computer vision.
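The following is a minimal sketch (assuming NumPy and an 8-bit grayscale image stored as a 2-D array) of two of the low-level operations mentioned above: a 3 × 3 averaging filter for noise reduction and a simple contrast stretch. Both take an image as input and return an image as output, which is the defining property of low-level processing.

```python
import numpy as np

def mean_filter_3x3(img):
    """Low-level processing: reduce noise by replacing each pixel with the
    average of its 3x3 neighborhood (image in, image out)."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return np.clip(out / 9.0, 0, 255).astype(np.uint8)

def contrast_stretch(img):
    """Low-level processing: linearly stretch the gray levels to span 0..255."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return img.copy()
    return ((img.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# Hypothetical noisy 8-bit test image
noisy = np.clip(np.random.normal(128, 40, (64, 64)), 0, 255).astype(np.uint8)
enhanced = contrast_stretch(mean_filter_3x3(noisy))
```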

8. Explain mass storage capability in image processing applications and also its principal categories.

• Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed (a short check of this arithmetic is given after this list).

• Digital storage for image processing applications falls into three principal categories:

1. Short-term storage for use during processing.

2. On-line storage for relatively fast recall.

3. Archival storage, characterized by infrequent access.

• One method of providing short-term storage is computer memory. Another is by specialized boards, called frame buffers, that store one or more images and can be accessed rapidly.

• The on-line storage method allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts). On-line storage generally takes the form of magnetic disks and optical-media storage. The key factor characterizing on-line storage is frequent access to the stored data.

• Finally, archival storage is characterized by massive storage requirements but infrequent need for access.
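A short check of the storage figure quoted above, assuming an uncompressed image with 8 bits (one byte) per pixel:

```python
def uncompressed_size_bytes(width, height, bits_per_pixel=8):
    """Storage needed for one uncompressed image: pixels * bits per pixel / 8."""
    return width * height * bits_per_pixel // 8

size = uncompressed_size_bytes(1024, 1024, 8)
print(size, "bytes =", size / 2 ** 20, "MB")   # 1048576 bytes = 1.0 MB
```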

15. Explain any two applications of image processing.

• Face Detection - In this method, important facial features are detected and everything else is ignored. Face detection can be treated as a specific case of object-class detection. The objective of face detection is to find the specified features, such as the locations and sizes, of a known number of faces. Most face detection algorithms focus on the detection of frontal human faces; detecting faces from multiple views is the more general and difficult problem (a minimal detection sketch is given after this list).

• Remote Sensing - Remote sensing is basically the acquisition of small- or large-scale information signals from an object or phenomenon, using various real-time sensing devices that are wireless in nature or not in physical or direct contact with the object (such as aircraft, spacecraft, satellites or ships). In practice, remote sensing is the collection of different data signals, using a variety of devices, to gather information on a given object or area. The monitoring of a parolee using an ultrasound identification system, Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), X-radiation (X-ray) and space probes are all examples of remote sensing.

• Biomedical Image Enhancement & Analysis - Biomedical image enhancement is a very important issue for biomedical image diagnosis; the aim of this area is to enhance biomedical images. In addition to originally digital methods, such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), initially analog imaging modalities such as endoscopy or radiography are nowadays equipped with digital sensors. Digital images are composed of individual pixels, each of which corresponds to a discrete brightness or color value. After biomedical image enhancement and proper analysis, the images can be efficiently processed and objectively evaluated.
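The snippet below is a minimal face-detection sketch using OpenCV's pretrained frontal-face Haar cascade. It assumes the opencv-python package is installed; the file name photo.jpg is a hypothetical input image, not something taken from this document.

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detection gives the location and size (x, y, w, h) of one face
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", img)
```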

23. Briefly explain the following terms:

a) Neighbors b) Path c) Connectivity

a) Neighbors -

A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by (x+1, y), (x-1, y), (x, y+1), (x, y-1). This set of pixels, called the 4-neighbors of p, is denoted by N4 (p). Each pixel is a unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image.

The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND (p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8 (p). As before, some of the points in ND (p) and N8 (p) fall outside the image if (x, y) is on the border of the image.
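A minimal sketch (assuming pixel coordinates are passed as separate x and y values) of the three neighbor sets defined above; for border pixels, neighbors falling outside the image must be discarded, as the helper at the end illustrates.

```python
def n4(x, y):
    """4-neighbors of p: the horizontal and vertical neighbors."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbors of p."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbors of p: the 4-neighbors together with the diagonal neighbors."""
    return n4(x, y) + nd(x, y)

def inside(neighbors, rows, cols):
    """Keep only the neighbors that lie inside an image of the given size."""
    return [(i, j) for (i, j) in neighbors if 0 <= i < rows and 0 <= j < cols]

print(inside(n8(0, 0), 256, 256))   # a corner pixel keeps only 3 of its 8 neighbors
```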

b) Path -

A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), ….., (xn, yn)

where (x, y) = (x0, y0), (s, t) = (xn, yn), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n. Here, n is the length of the path.


Closed path: a path for which (x0, y0) = (xn, yn).

Example 1: Consider the image segment shown in the figure. Compute the lengths of the shortest 4-, 8- and m-paths between pixels p and q, where V = {1, 2}.

c) Connectivity -

Connectivity between pixels is a fundamental concept that simplifies the definition of numerous digital image concepts, such as regions and boundaries. To establish whether two pixels are connected, it must be determined whether they are neighbors and whether their gray levels satisfy a specified criterion of similarity (say, whether their gray levels are equal). For instance, in a binary image with values 0 and 1, two pixels may be 4-neighbors, but they are said to be connected only if they have the same value.

Let V be the set of gray-level values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a grayscale image, the idea is the same, but set V typically contains more elements. For example, for the adjacency of pixels with possible gray-level values 0 to 255, set V could be any subset of these 256 values. We consider three types of adjacency:

(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent
if q is in the set N4 (p).

(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent
if q is in the set N8 (p).

(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if

(i) q is in N4 (p), or

(ii) q is in ND (p) and the set N4 (p) ∩ N4 (q) has no pixels whose values are from V.
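The following is a minimal sketch (assuming NumPy, with pixels addressed as (row, column) tuples) of the three adjacency tests; the helper names _n4 and _nd are introduced here for illustration only.

```python
import numpy as np

def _n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def _nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def is_adjacent(p, q, img, V, kind="4"):
    """Test 4-, 8- or m-adjacency of pixels p and q, where V is the set of
    gray-level values used to define adjacency."""
    if int(img[p]) not in V or int(img[q]) not in V:
        return False
    if kind == "4":
        return q in _n4(p)
    if kind == "8":
        return q in _n4(p) | _nd(p)
    if kind == "m":
        if q in _n4(p):
            return True
        if q in _nd(p):
            rows, cols = img.shape
            common = _n4(p) & _n4(q)      # pixels that are 4-neighbors of both
            return all(int(img[r]) not in V for r in common
                       if 0 <= r[0] < rows and 0 <= r[1] < cols)
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]], dtype=np.uint8)
print(is_adjacent((0, 2), (1, 1), img, {1}, kind="8"))   # True
print(is_adjacent((0, 2), (1, 1), img, {1}, kind="m"))   # False
```

In the example, the m-adjacency test returns False because the two pixels are already joined through the 4-neighbor (0, 1); removing this ambiguity of multiple connection paths is exactly what mixed adjacency is designed for.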

30) Let p and q be two pixels at coordinates (100, 120) and (130, 160) respectively. Compute

1) Euclidean distance

2) City-block distance

3) Chessboard distance

Ans-

1) Euclidean distance: De(p, q) = ((x - s)^2 + (y - t)^2)^1/2

De(p, q) = ((100 - 130)^2 + (120 - 160)^2)^1/2

= (900 + 1600)^1/2

= (2500)^1/2

= 50

2) City-block distance: D4(p, q) = |x - s| + |y - t|

D4(p, q) = |100 - 130| + |120 - 160|

= 30 + 40

= 70

3) Chessboard distance: D8(p, q) = max(|x - s|, |y - t|)

D8(p, q) = max(|100 - 130|, |120 - 160|)

= max(30, 40)

= 40
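A short check of the three distance measures, assuming only the Python standard library:

```python
import math

def distances(p, q):
    """Euclidean (De), city-block (D4) and chessboard (D8) distances between
    pixels p = (x, y) and q = (s, t)."""
    (x, y), (s, t) = p, q
    de = math.sqrt((x - s) ** 2 + (y - t) ** 2)
    d4 = abs(x - s) + abs(y - t)
    d8 = max(abs(x - s), abs(y - t))
    return de, d4, d8

print(distances((100, 120), (130, 160)))   # (50.0, 70, 40)
```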

45) Explain in brief any point processing technique implemented in image processing.

Log Transformations:
Fig - Some basic gray-level transformation functions used for image enhancement

The general form of the log transformation shown in the figure above is

s = c log(1 + r)

where c is a constant, and it is assumed that r ≥ 0. The shape of the log curve in the figure shows that this transformation maps a narrow range of low gray-level values in the input image into a wider range of output levels. The opposite is true of higher values of input levels. We would use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation.

Any curve having the general shape of the log functions shown in the figure would accomplish this spreading/compressing of gray levels in an image. In fact, the power-law transformations discussed in the next section are much more versatile for this purpose than the log transformation. However, the log function has the important characteristic that it compresses the dynamic range of images with large variations in pixel values. A classic illustration of an application in which pixel values have a large dynamic range is the Fourier spectrum. At the moment, we are concerned only with the image characteristics of spectra. It is not unusual to encounter spectrum values that range from 0 to 10^6 or higher. While processing numbers such as these presents no problems for a computer, image display systems generally will not be able to reproduce faithfully such a wide range of intensity values. The net effect is that a significant degree of detail will be lost in the display of a typical Fourier spectrum.
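Below is a minimal sketch (assuming NumPy) of the log transformation applied to a Fourier spectrum magnitude. The constant c is chosen here so that the output fills the 0 to 255 display range, which is one common choice rather than the only one.

```python
import numpy as np

def log_transform(img):
    """Point processing: s = c * log(1 + r), with c chosen so that the output
    spans the full 0..255 range of an 8-bit display."""
    r = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())
    return (c * np.log(1.0 + r)).astype(np.uint8)

# Example: compress the large dynamic range of a Fourier spectrum for display
spectrum = np.abs(np.fft.fft2(np.random.rand(64, 64)))
display = log_transform(spectrum)   # detail in the small values becomes visible
```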

51) What is an image histogram? How do the histograms of the following images look?

a. Dark b. Bright c. Low contrast d. High contrast

An image histogram is a graphical representation of the tonal distribution in a digital image: it plots the number of pixels for each tonal (gray-level) value.

a. Dark image - the histogram components are concentrated on the low (dark) side of the gray scale.

b. Bright image - the histogram components are concentrated on the high (bright) side of the gray scale.

c. Low-contrast image - the histogram is narrow and centered around the middle of the gray scale.

d. High-contrast image - the histogram covers a broad range of the gray scale and the pixels are distributed fairly uniformly.
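A minimal sketch (assuming NumPy) of computing a gray-level histogram and checking the behaviour described in case (a) on a synthetic dark image:

```python
import numpy as np

def gray_histogram(img, levels=256):
    """Count how many pixels take each gray level 0..levels-1."""
    hist = np.zeros(levels, dtype=np.int64)
    for value in img.ravel():
        hist[value] += 1
    return hist

# Synthetic dark image: most pixels sit at the low end of the gray scale
dark = np.clip(np.random.normal(40, 10, (64, 64)), 0, 255).astype(np.uint8)
h = gray_histogram(dark)
print(h.argmax(), h[:128].sum(), h[128:].sum())   # peak and mass lie in the dark half
```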
