Q.1 What is Digital Image Processing? Explain in brief, and describe the origins of digital image processing.
Ans.
1. The human eye-brain mechanism is the best imaging system. An image is an object or visual that one sees.
2. It is a 2-dimensional function of the 3-dimensional world that surrounds us.
3. An image is a 2-D light-intensity function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude at any coordinate pair (x, y) is the intensity or gray level of the image at that point.
4. If x, y and the intensity values are all finite and discrete, the image is known as a digital image.
5. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, pixels or pels.
6. Digital image processing is the modification and enhancement of images by applying various filtering and enhancement techniques, in order to perceive better visual information and perform various analyses on the images.
7. Image processing accepts images as input and generates a modified image as output, either for human perception or because the modified image provides useful information.
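The definition above (a finite, discrete f(x, y) made of pixels) can be sketched with a toy array; the image values here are made up for illustration:

```python
import numpy as np

# A toy 4x4 grayscale digital image: f(x, y) gives the gray level
# at spatial coordinates (x, y); all values are finite and discrete.
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
], dtype=np.uint8)

# Each element is a picture element (pixel / pel) with a location and a value.
x, y = 1, 2
print(f"f({x}, {y}) = {image[x, y]}")   # intensity at one pixel
print("pixels:", image.size)            # finite number of elements
```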
Examples of fields that use digital image processing:
1. Gamma-ray imaging: Imaging based on gamma rays is used mostly in nuclear medicine and astronomical observations. In this type of imaging, images are produced from the emissions collected by gamma-ray detectors.
2. X-ray imaging: X-ray imaging is used mainly for medical diagnostics and industrial imaging. It is also used in astronomical applications.
Only a few applications are listed here; the applications of digital image processing are wide-ranging and its scope is large.
The fundamental steps in digital image processing are:
1. Image acquisition: The image is captured by a sensor (e.g. a camera) and, if the output of the sensor is not already in digital form, digitized using an analogue-to-digital converter.
2. Image enhancement: The process of manipulating an image so that the result is more suitable than the original for a specific application. The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.
3. Image restoration: Improving the appearance of an image, based on mathematical or probabilistic models of image degradation.
4. Colour image processing: Uses the colour of the image to extract features of interest in an image.
5. Wavelets: The foundation for representing images at various degrees of resolution; used for image data compression.
6. Compression: Techniques for reducing the storage required to save an image or the bandwidth required to transmit it.
7. Morphological Processing: Deals with tools for extracting image components that
are useful in the representation and description of shape. In this step, there would be
a transition from processes that output images, to processes that output image
attributes.
8. Segmentation: Segmentation partitions an image into its constituent parts or
objects.
9. Feature extraction: It consists of feature detection and feature description. Feature
detection refers to finding a feature in an image, region or boundary. Feature
description assigns quantitative attributes to the detected features.
10. Image pattern classification: The process that assigns a label to an object based on its feature descriptors. Various classification algorithms, such as correlation and Bayes classifiers, identify and predict the class label of the object.
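As a concrete instance of step 2 (image enhancement), here is a minimal sketch of min-max contrast stretching, one of many possible techniques; the function name and input values are illustrative, not from the text:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch gray levels to the full [0, 255] range
    (one simple enhancement technique; many others exist)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) / (hi - lo) * 255.0
    return out.round().astype(np.uint8)

dim = np.array([[50, 60], [70, 80]], dtype=np.uint8)  # low-contrast input
print(stretch_contrast(dim))
```

After stretching, the darkest pixel maps to 0 and the brightest to 255, which brings out detail hidden in a narrow intensity range.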
4. Image processing software: The software performs tasks with the help of specialized modules. Many software packages are available commercially.
6. Image Displays: Monitors are driven by the outputs of the image and
graphics display cards that are an integral part of a computer system. There
are variants in monitor displays.
7. Hardcopy devices: Used for recording images; they include laser printers, film cameras, heat-sensitive devices, inkjet units and digital units such as optical and CD-ROM disks.
1) Structure of the eye:
1. The first part of the visual system is the eye. Its form is nearly spherical and its diameter is approximately 20 mm. Its outer cover consists of the cornea and the sclera.
2. The cornea is a tough transparent tissue in the front part of the eye. The sclera is an opaque membrane, continuous with the cornea, which covers the remainder of the eye.
3. Directly below the sclera lies the choroid, which contains many blood vessels. At its anterior extreme lies the iris diaphragm.
4. Light enters the eye through the central opening of the iris, whose diameter varies from 2 mm to 8 mm according to the illumination conditions.
5. Behind the iris is the lens, which consists of concentric layers of fibrous cells and contains 60 to 70% water.
6. Its operation is similar to that of the man-made optical lenses. It focuses
the light on the “retina” which is the innermost membrane of the eye.
7. Retina has two kinds of photoreceptors: cones and rods. The cones are
highly sensitive to color.
8. Rods serve to view the general picture of the vision field. They are
sensitive to low levels of illumination and cannot discriminate colors.
2) Image formation in the eye:
● In the human eye, the distance between the centre of the lens and the imaging sensor (the retina) is fixed.
● The lens in the eye is flexible; its shape is controlled by muscles.
● To focus on distant objects, the muscles flatten the lens.
● To focus on close objects, the muscles allow the lens to thicken.
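Because the lens-to-retina distance is fixed, the size of the retinal image follows from similar triangles. The sketch below uses an approximate lens-to-retina distance of about 17 mm and illustrative object sizes, neither of which is stated in the text above:

```python
# Retinal image size by similar triangles: with the lens-to-retina
# distance roughly fixed (~17 mm when focused far away, an assumed value),
#   image_height / 17 mm = object_height / object_distance.

LENS_TO_RETINA_MM = 17.0   # approximate, assumed

def retinal_image_height_mm(object_height_m, distance_m):
    return object_height_m / distance_m * LENS_TO_RETINA_MM

# e.g. a 15 m tree viewed from 100 m projects an image about 2.55 mm tall
print(round(retinal_image_height_mm(15.0, 100.0), 2))
```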
3) Brightness adaptation and discrimination:
● Digital images are displayed as a discrete set of intensities.
● The eye's ability to discriminate intensities at a given adaptation level is an important consideration when displaying images.
● The range of brightnesses that can be discriminated simultaneously is small in comparison to the total adaptation range.
● For a given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level.
Q.6 What is Image Sensing and Acquisition?
Ans.
1. Images are generated by the combination of an illumination source and the reflection or absorption of energy from that source by the elements of the scene being imaged.
2. There are three principal sensor arrangements that can be used to transform incident energy into digital images.
3. Incoming energy is transformed into a voltage by the combination of input electrical power and the sensor's response; a digital quantity is obtained by digitizing that response.
4. (Diagrams of the three sensor arrangements belong here.)
5. The most common sensor is the photodiode, constructed of silicon materials, with an output voltage waveform proportional to the incident light. Using a filter in front of the sensor improves its selectivity.
6. In order to generate a 2-D image using a single sensor, there have to be
relative displacements in both the x- and y-directions between the
sensor and the area to be imaged.
7. An arrangement used in high-precision scanning mounts a film negative onto a drum whose mechanical rotation provides displacement in one dimension.
8. The single sensor is mounted on a lead screw that provides motion in
the perpendicular direction.
9. Since mechanical motion can be controlled with high precision, this
method is an inexpensive (but slow) way to obtain high-resolution
images.
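The single-sensor scheme above (one reading per mechanical position, building the image point by point) can be emulated in a few lines; the scene function and step sizes here are made up for illustration:

```python
import numpy as np

def scene(x, y):
    """Stand-in for the continuous scene brightness at (x, y)."""
    return np.sin(x) * np.cos(y)

# Emulate single-sensor acquisition: mechanical motion moves the one
# sensor to every (x, y) position, reading one sample per stop, so the
# 2-D image is built up point by point (precise but slow).
rows, cols = 4, 5
image = np.empty((rows, cols))
for i in range(rows):              # drum rotation: displacement in one direction
    for j in range(cols):          # lead screw: displacement in the other
        image[i, j] = scene(i * 0.5, j * 0.5)

print(image.shape)   # (4, 5)
```

The nested loops are what make this method slow in practice: acquisition time grows with the number of pixels, one mechanical move per sample.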
Image sampling and quantization:
I. Consider a continuous image along the line segment AB. To sample this function, we take equally spaced samples along line AB.
II. The location of each sample is marked by a vertical tick mark in the bottom part of the figure. The samples are shown as small squares superimposed on the function; the set of these discrete locations gives the sampled function.
III. In order to form a digital image, the gray-level values must also be converted (quantized) into discrete quantities.
IV. So, we divide the gray-level scale into eight discrete levels, ranging from black to white. The vertical tick marks indicate the specific value assigned to each of the eight levels.
V. The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample.
VI. The assignment is made depending on the vertical proximity of a sample to a vertical tick mark.
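The sampling-then-quantization procedure above can be sketched numerically; the continuous profile below is invented for illustration, and the eight levels (0 to 7) correspond to the black-to-white scale described in the text:

```python
import numpy as np

# Sample a continuous intensity profile along a line at equally spaced
# points (sampling), then assign each sample the nearest of 8 discrete
# gray levels 0..7 (quantization).
def profile(t):
    return 0.5 + 0.5 * np.sin(2 * np.pi * t)    # continuous values in [0, 1]

samples = profile(np.linspace(0.0, 1.0, 9))     # sampling: discrete locations
levels = np.round(samples * 7).astype(int)      # quantization: discrete gray levels
print(levels)
```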
1. The first industry to use digital images was the newspaper industry.
2. An image is an object or visual which one sees.
3. If x, y and the intensity values are all finite and discrete, then the image is known as a digital image.
4. The digital image is composed of a finite number of elements, each of which has a particular location and value.
5. Imaging based on gamma rays is used mostly in nuclear medicine and astronomical observations.
6. X-ray imaging is used mainly for medical diagnostics and industrial imaging. It is
also used for astronomical applications.
7. Ultraviolet Imaging is used for lithography, industrial inspection, microscopy, lasers,
biological imaging and astronomical observations.
8. The image is captured by a sensor like camera or any analog device.
9. Image Restoration Deals with improving the appearance of an image and it is based
on mathematical or probabilistic models of image degradation.
10. Wavelets are the foundation of representing images in various degrees of resolution.
11. Wavelets are used for image data compression and for the representation of images in smaller regions.
12. Compression deals with the various techniques used for reducing the storage required to save an image in digital form, or the bandwidth required to transmit it.
13. Morphological Processing Deals with tools for extracting image components that
are useful in the representation and description of shape.
14. Autonomous segmentation is one of the most difficult tasks in digital image processing.
15. Feature detection refers to finding a feature in an image, region or boundary.
16. There are various classification algorithms, such as correlation and Bayes classifiers, to identify and predict the class label for an object.
17. Two subsystems are required to acquire digital images: a physical device and a digitizer.
18. The physical device is one that is sensitive to the energy radiated by the object we wish to image (the sensor).
19. A digitizer is a device for converting the output of the physical sensing device into digital form.
20. The computer in an image processing system is a general-purpose computer and
can range from a PC to a supercomputer.
21. Monitors are driven by the outputs of the image and graphics display cards that are
an integral part of a computer system.
22. Transmission bandwidth has improved due to optical fibre and cloud technologies.
23. The first part of the visual system is the eye.
24. The eye's form is nearly spherical and its diameter is approximately 20 mm.
25. The eye's outer cover consists of the cornea and the sclera.
26. Retina has two kinds of photoreceptors: cones and rods.
27. Digital images are displayed as a discrete set of intensities.
28. In the human eye, the distance between the centre of the lens and the imaging
sensor is fixed
29. The quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time against a constant background illumination I, is called the Weber ratio.
30. The most common sensor is the photodiode constructed of silicon materials and
output voltage waveform proportional to light.
31. The single sensor is mounted on a lead screw that provides motion in the
perpendicular direction.
32. An image may be continuous with respect to the x and y coordinates and also in
amplitude.
33. Digitizing the coordinate values is called sampling.
34. Digitizing the amplitude values is called quantization.
35. In order to form a digital image, the gray level values must also be converted
(quantized) into discrete quantities.
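Point 29's Weber ratio is a simple quotient; the numbers below are illustrative, not measured values from the text:

```python
# Weber ratio: with background illumination I, the just-discriminable
# increment dI_c gives the ratio dI_c / I. A small ratio means good
# brightness discrimination at that adaptation level.
def weber_ratio(delta_ic, background_i):
    return delta_ic / background_i

print(weber_ratio(2.0, 100.0))   # 0.02
```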