Digital Image Processing 21CS732

MODULE 1
Digital Image Fundamentals
Image Definition: An image is a two-dimensional function f(x, y), where x and y are spatial
coordinates, and the amplitude of f at any pair of coordinates (x, y) is the intensity or gray level of the image at that point.
Digital Image: When x, y, and intensity values are finite and discrete, the image is called a digital
image. The basic elements of a digital image are called pixels.
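To make the pixel idea concrete, here is a minimal sketch (using NumPy, which the notes do not prescribe; it is assumed here purely for illustration) of a digital image as a finite grid of discrete intensity values:

    import numpy as np

    # A digital image f(x, y): spatial coordinates are discrete (row, column)
    # and intensities are quantized, here to 8-bit gray levels (0-255).
    f = np.array([[  0,  64, 128, 255],
                  [ 32,  96, 160, 224],
                  [ 16,  80, 144, 208],
                  [  8,  72, 136, 200]], dtype=np.uint8)

    print(f.shape)   # (4, 4): number of rows and columns (spatial samples)
    print(f[2, 3])   # 208: the gray level (intensity) of one pixel
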
Digital Image Processing: This field involves the manipulation of digital images using computers
and encompasses various methods to process images across the entire electromagnetic (EM)
spectrum.
Imaging Beyond Human Vision: Unlike humans, machines can process images generated from
sources across the entire EM spectrum, such as gamma rays, radio waves, ultrasound, and electron
microscopy.
Distinction Between Fields:
 Image Processing: Involves processes where both input and output are images.
 Image Analysis: Focuses on extracting meaningful attributes (e.g., edges, contours, or
objects) from an image.
 Computer Vision: Seeks to emulate human vision, including decision-making and actions
based on visual input, closely linked to artificial intelligence (AI).
Image Processing Continuum: No clear boundaries exist between image processing, image analysis,
and computer vision, but processes can be categorized into:
1. Low-level processes: Basic operations (e.g., noise reduction, contrast enhancement), where
both input and output are images.
2. Mid-level processes: Involves image segmentation, object description, and classification,
where output is attributes (e.g., edges, contours).
3. High-level processes: Involves making sense of recognized objects, associated with image
analysis and computer vision.
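As a rough illustration of the difference between these levels (a sketch only, assuming NumPy; the functions and parameters below are not from the notes), a low-level operation takes an image and returns an image, while a mid-level operation takes an image and returns attributes:

    import numpy as np

    def box_blur(img, k=3):
        # Low-level process: noise reduction with a k x k mean filter.
        # Both the input and the output are images.
        pad = k // 2
        padded = np.pad(img.astype(float), pad, mode='edge')
        out = np.zeros(img.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return (out / (k * k)).astype(img.dtype)

    def object_area(img, threshold=128):
        # Mid-level process: segment by thresholding and return an
        # attribute (the number of "object" pixels), not an image.
        return int(np.sum(img > threshold))

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    smoothed = box_blur(img)        # image in, image out (low-level)
    area = object_area(smoothed)    # image in, attribute out (mid-level)
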
Overlap Between Fields: A key overlap between image processing and analysis is the recognition of
individual regions or objects.
Applications of Digital Image Processing: Used in a wide range of areas with significant social and
economic value, including text analysis, object recognition, and more complex vision-based tasks.
History of DIP
 First Digital Image Application: Early 1920s, in the newspaper industry, images were
transmitted between London and New York via submarine cable using the Bartlane system.
Transmission time reduced from weeks to hours.
 Bartlane System: Initial digital pictures were reproduced on a telegraph printer with special
typefaces simulating halftones, later replaced by photographic reproduction from perforated
tapes.
 Improvements: Bartlane systems evolved from coding images in 5 gray levels (1921) to 15
gray levels (1929), improving tonal quality and resolution.

 Digital Image Processing: While these images were transmitted digitally, they weren’t
considered digital image processing as no computers were involved.
 Tied to Computer Development: Digital image processing evolved with the development of
computers, requiring significant storage and computational power.
 Computer Milestones: Key advances contributing to image processing include:
1. Invention of the transistor (1948),
2. High-level programming languages (1950s-60s),
3. Integrated circuits (1958),
4. Microprocessor development (1970s),
5. IBM PC (1981),
6. Miniaturization of components (LSI, VLSI, ULSI).
 Early Image Processing Tasks: The first meaningful image processing tasks began in the
1960s, with space programs like NASA’s Ranger 7 in 1964 processing images of the moon to
correct distortions.
 Expansion of Applications: From the late 1960s and early 1970s, image processing
expanded into medical imaging (CT scans), remote sensing, astronomy, industry, biology, and
more.
 Medical Imaging Milestone: The invention of Computerized Axial Tomography (CAT/CT
scans) in the 1970s revolutionized medical diagnosis.
 Other Applications: Image processing techniques now span many fields, such as X-ray
interpretation, pollution tracking, archeology, high-energy physics, electron microscopy, and
more.
 Machine Perception: Image processing is used for machine-based tasks like automatic
character recognition, machine vision in industry, fingerprint processing, satellite image
analysis, and more.
 Continued Growth: The decrease in computer costs and the rise of the Internet have expanded
digital image processing applications dramatically.
Examples of fields that use DIP
1. Gamma Rays (above ~10^5 eV)
 Application: Gamma rays are primarily used in medical imaging (like PET scans), cancer
therapy (radiotherapy), and sterilization of medical equipment.
 Dipole Interaction: Gamma rays interact mainly with atomic nuclei. Dipole interactions are
less common at these high energies, since gamma-ray photons are more likely to cause
ionization and nuclear reactions.
2. X-rays (~10^5 to 10^2 eV)
 Application: X-rays are commonly used in medical imaging (such as CT scans and X-ray
radiography), security scanning, and materials analysis.

 Dipole Interaction: In X-rays, dipole interactions happen through the photoelectric effect,
where X-ray photons eject electrons from atoms. X-rays cause dipole oscillations in atoms
that result in the emission of lower-energy photons.
3. Ultraviolet (UV) Radiation (~10^2 to 10^1 eV)
 Application: UV light is used in sterilization, water purification, and forensic analysis. It also
plays a role in the production of Vitamin D in the human body.
 Dipole Interaction: UV radiation induces electronic transitions in molecules. This can excite
electrons, leading to fluorescence or photochemical reactions, which involve dipole
interactions in atoms and molecules.
4. Visible Light (~1 eV)
 Application: Visible light is essential for vision, illumination, photography, and optical
communications.
 Dipole Interaction: In this region, dipole interactions are common in light absorption and
emission processes. Molecules and atoms absorb light when the oscillating electric field
interacts with dipole moments, causing transitions between electronic energy levels.
5. Infrared Radiation (10^-1 to 10^-2 eV)
 Application: Infrared is used in thermal imaging, night vision, remote controls, and
spectroscopy.
 Dipole Interaction: Infrared radiation excites vibrational modes in molecules with dipole
moments. This interaction is essential for infrared spectroscopy, where absorption patterns
reveal molecular structures.
6. Microwaves (10^-2 to 10^-4 eV)
 Application: Microwaves are used in communication (Wi-Fi, satellite, mobile phones), radar,
and microwave ovens.
 Dipole Interaction: Microwaves cause dipole rotation in polar molecules (like water). The
alternating electric field of microwaves makes polar molecules rotate, producing heat in
applications like microwave cooking.
7. Radio Waves (10^-4 eV and lower)
 Application: Radio waves are used in broadcasting, telecommunications, and radar systems.
 Dipole Interaction: Radio waves interact with dipoles in antennas. The oscillating electric
field in radio waves induces current in the dipole antennas, which is the basis for radio
transmission and reception.
Fundamental steps in digital image processing
These methods can broadly be categorized into two groups:
1. Methods where both input and output are images: These include image acquisition,
enhancement, restoration, color processing, wavelet processing, compression, and
morphological processing. The goal of these methods is to improve, modify, or extract useful
visual information from the images.

2. Methods where inputs may be images, but outputs are attributes extracted from the
images: This includes tasks like segmentation, representation, description, and object
recognition. These methods analyze image data to extract meaningful features or perform
specific tasks, such as identifying objects within the images.

1. Image Acquisition:
o The process of obtaining an image, typically by capturing it via a camera or sensor.
This stage may include preprocessing such as scaling.
2. Image Enhancement:
o Manipulating an image to make it more suitable for a specific application. Enhancement is
subjective, since the choice of method depends on the viewer's judgment of the result.
o Techniques are introduced with examples like enhancing X-ray or satellite images.
3. Image Restoration:
o Focuses on improving an image’s appearance based on objective mathematical or
probabilistic models of degradation.
4. Color Image Processing:
o Involves the use of color models to process and analyze color images. With the
growing use of digital images on the internet, color image processing has gained
importance.
5. Wavelets and Multiresolution Processing:
o Wavelets allow for image representation at various resolutions and are commonly
used for image compression and pyramidal representations.
6. Compression:

o Techniques to reduce the storage and bandwidth required for image transmission,
such as JPEG.
7. Morphological Processing:
o Focuses on extracting image components that are useful for shape representation and
description.
8. Segmentation:
o One of the most important and challenging tasks, segmentation involves partitioning
an image into constituent parts or objects. Successful segmentation is crucial for
subsequent recognition tasks.
9. Representation and Description:
o After segmentation, this stage involves converting raw pixel data into a form that can
be processed by a computer, focusing on boundary or region-based representation and
feature selection.
10. Object Recognition:
o Assigns labels to objects based on their attributes. This is the final step in many image
processing systems where specific objects are identified within the image.
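The sketch below strings a few of these stages together on synthetic data (a hypothetical example using NumPy only; the array sizes, threshold, and the toy recognition rule are assumptions, not part of the notes):

    import numpy as np

    rng = np.random.default_rng(0)

    # 1. Image acquisition (simulated): a bright square on a dark, noisy background.
    img = rng.normal(40, 10, (100, 100))
    img[30:70, 30:70] += 120
    img = np.clip(img, 0, 255).astype(np.uint8)

    # 2. Image enhancement: linear contrast stretching to the full 0-255 range.
    lo, hi = int(img.min()), int(img.max())
    enhanced = ((img.astype(float) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

    # 8. Segmentation: a global threshold separates the object from the background.
    mask = enhanced > 128

    # 9. Representation and description: reduce the region to simple attributes.
    area = int(mask.sum())
    ys, xs = np.nonzero(mask)
    centroid = (float(ys.mean()), float(xs.mean()))

    # 10. Object recognition (toy rule): label the region by its size.
    label = "large object" if area > 1000 else "small object"
    print(area, centroid, label)
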
Knowledge Base:
The knowledge base supports the operation of all modules, guiding the image processing tasks and
providing prior knowledge about the problem domain. This could range from simple details like
regions of interest in the image to complex interrelated data such as defect types in materials or
satellite images.
Components of an Image Processing System

A general-purpose image processing system is composed of several key components:


1. Image Sensing: This involves two parts— (1) a physical sensing device sensitive to the
energy radiated by the object, and (2) a digitizer that converts the sensor's output into digital
form.
2. Specialized Image Processing Hardware: Includes a digitizer and an Arithmetic Logic Unit
(ALU) that performs rapid arithmetic and logical operations on images, crucial for tasks like
noise reduction.
3. Computer: The central component can range from a PC to a supercomputer, with custom
computers being used for high-performance applications.
4. Software: Specialized modules perform specific image processing tasks. The software allows
users to write or integrate custom code, sometimes using general-purpose programming
languages.
5. Mass Storage: Essential for storing vast amounts of image data, storage is categorized as
short-term (during processing), online (for fast access), and archival (mass storage with
infrequent access). Various technologies such as frame buffers, magnetic disks, and optical
media are used.
6. Image Displays: Primarily color TV monitors driven by graphics cards, with options for
stereo displays in specific applications.
7. Hardcopy Devices: Devices like laser printers, film cameras, and inkjet units record
processed images. Film is preferred for the highest resolution, but paper is commonly used for
presentations.
8. Networking: A crucial function, especially for transmitting large image files. Bandwidth is
the primary concern in image transmission, particularly over the Internet, but improvements
in broadband technologies are mitigating this issue.
Elements of Visual Perception
Components of the Eye:
1. Cornea & Sclera: The cornea is a transparent, tough tissue covering the front of the eye,
while the sclera is an opaque membrane that encloses the rest of the eye.
2. Choroid: This membrane lies under the sclera and contains blood vessels, which provide
nutrients to the eye. It also reduces extraneous light and prevents backscatter within the eye.
3. Iris & Pupil: The iris, containing pigments, controls the amount of light entering the eye by
adjusting the size of the pupil (2-8 mm).
4. Lens: The lens is composed of fibrous cells and contains water, fat, and proteins. It focuses
light on the retina and can absorb harmful infrared and ultraviolet light. With aging, it may
develop clouding, leading to conditions like cataracts.
5. Retina: The innermost membrane, where light is focused, contains two types of receptors:
cones and rods.

Cones and Rods:


 Cones (6-7 million per eye) are concentrated in the central retina (fovea) and are responsible
for color vision and fine details. Each cone connects to its own nerve, supporting detailed
visual acuity, known as photopic (bright-light) vision.
 Rods (75-150 million per eye) are distributed throughout the retina, but several rods connect
to one nerve, providing a general field of view with less detail. Rods are sensitive to low light
and are responsible for scotopic (dim-light) vision.
Visual Perception:
 The density of cones is highest at the fovea, providing the most detailed vision. Rods
dominate the peripheral retina and peak at about 20° off-axis from the visual center before
tapering toward the periphery.
 The human eye has a region known as the blind spot due to the absence of receptors where
the optic nerve exits the eye.
Comparing Human and Digital Imaging:
 In the fovea, the density of cones is approximately 150,000 per mm², providing a high level of
visual detail. This is comparable to a medium-resolution charge-coupled device (CCD)
imaging sensor, which can have a similar number of elements in a 5 mm x 5 mm array.
 While human vision integrates experience and intelligence with visual data, the resolving
power of the human eye is comparable to modern imaging sensors in terms of raw capacity
for detail detection.
Image Sensing and Acquisition
In image sensing and acquisition, the process involves capturing an image by using sensors that detect
the energy from an "illumination" source, which reflects off or passes through the elements of a
"scene." These terms, illumination and scene, are used in a broad sense. The illumination can come
from electromagnetic energy sources, such as visible light, radar, infrared, or X-rays. It can even come
from unconventional sources like ultrasound or computer-generated patterns. Similarly, the scene
elements could range from everyday objects to more complex structures like molecules, rock
formations, or human tissues.
The basic concept is that illumination energy interacts with objects either by being reflected or
transmitted through them. For instance, light can reflect off a surface to create an image, or X-rays can
pass through the human body to generate medical imagery. The energy is captured and focused onto a
sensor, which converts the energy into a measurable voltage signal.
Three main types of sensor arrangements that convert illumination energy into digital images:
 Single imaging sensor: A single sensor detects the incoming energy and generates a voltage
response, which is then digitized.
 Line sensor: A linear array of sensors captures data along one line of the image at a time,
which is useful for scanning-type applications.
 Array sensor: A 2D array of sensors captures the entire scene simultaneously, ideal for real-
time or full-frame imaging.

The voltage signals from these sensors are then digitized, transforming the analog responses into
digital image data. This process forms the foundation for modern digital imaging in fields ranging
from photography to medical diagnostics and satellite imagery.
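A small sketch of that digitization step (illustrative values only, assuming NumPy; real digitizers and voltage ranges vary):

    import numpy as np

    def quantize(voltages, v_max=1.0, levels=256):
        # Map analog sensor voltages in [0, v_max] onto integer
        # gray levels 0 .. levels-1 (here, 8-bit quantization).
        v = np.clip(np.asarray(voltages, dtype=float), 0.0, v_max)
        return np.round(v / v_max * (levels - 1)).astype(np.uint16)

    # Simulated analog responses from a line of sensors (one image row).
    analog_row = np.array([0.02, 0.10, 0.35, 0.70, 0.98, 0.55])
    print(quantize(analog_row))   # e.g. [  5  26  89 178 250 140]
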
Image Acquisition Using a Single Sensor

Photodiode Sensor: A widely used sensor made from silicon that outputs a voltage waveform
proportional to the light it detects.
Filter for Selectivity: Filters, like a green filter, enhance the sensor's selectivity by emphasizing
specific wavelengths of light, resulting in a stronger response for that color.
2D Image Generation: To create 2D images using a single sensor, relative motion in both x- and y-
directions between the sensor and the area being imaged is required.
High-Precision Scanning: A common method involves mounting the object on a rotating drum for
one-dimensional motion, while the sensor moves perpendicularly, offering accurate but slow
scanning.
Laser-Based Imaging: Some systems use a laser combined with moving mirrors to scan and direct
reflected light back to the sensor, applicable with strip or array sensors for advanced imaging.
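A hedged sketch of this idea (read_sensor below is a hypothetical stand-in for the physical photodiode; NumPy is assumed): relative motion in two directions lets a single sensor fill in a full 2D image, one reading at a time.

    import numpy as np

    def read_sensor(x, y):
        # Hypothetical scene reading at position (x, y): brightness
        # falls off with distance from the scene centre.
        return np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 500.0)

    rows, cols = 100, 100
    image = np.zeros((rows, cols))
    for y in range(rows):           # slow motion, e.g. rotation of the drum
        for x in range(cols):       # sensor moving along the scan line
            image[y, x] = read_sensor(x, y)

    image_8bit = np.round(image * 255).astype(np.uint8)   # quantize the readings
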

Image Acquisition Using Sensor Strips

1. Sensor Strip Arrangement: A commonly used geometry is an in-line strip of sensors. The
strip provides imaging in one direction, while motion perpendicular to the strip
completes the second dimension, forming a 2D image.
2. Application in Flatbed Scanners: Flatbed scanners use in-line sensors to capture high-
resolution images. Devices can have 4000 or more sensors, with the strip capturing one image
line at a time, and the motion of the scanner completing the image in the perpendicular
direction.
3. Airborne Imaging: In applications such as airborne imaging, in-line sensors are mounted on
aircraft, capturing one line of an image as the plane moves over a geographical area. This
motion creates a 2D image of the terrain by responding to different electromagnetic bands.
4. Ring Configuration for 3D Imaging: In medical and industrial imaging, sensors are
arranged in a ring around the object. A rotating X-ray source illuminates the object, and the
sensors collect the X-ray energy, producing cross-sectional (slice) images.
5. CAT Imaging and Processing: This setup forms the basis of computerized axial tomography
(CAT), which requires processing algorithms to transform sensor data into cross-sectional
images. Techniques like MRI and PET use similar principles but with different illumination
sources and sensors, resulting in 3D digital volumes of stacked images.

Image Acquisition Using Sensor Arrays


2D Sensor Arrays: These arrays, commonly used in digital cameras and other sensing devices, allow
for capturing complete images without motion, unlike single or linear sensors.
CCD Sensors: Charge-Coupled Device (CCD) sensors are commonly used in 2D arrays, offering
versatility and durability for light-sensing applications like digital cameras.
Light Integration: Each sensor in the array integrates the incoming light energy over time, making it
ideal for low-noise imaging, such as in astronomy, where long exposure times are required.
Optical Lens Focus: The energy from an illumination source is focused onto the sensor array by an
optical lens, projecting the scene onto the focal plane, where the sensors capture the image.
Digital Image Creation: After the sensor array captures the light, digital circuitry processes the
outputs and converts them into a digital image, completing the process of image acquisition.
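The sketch below (an assumed NumPy example; the scene, noise level, and exposure count are all illustrative) shows why integrating the incoming energy over time helps: averaging many noisy readings per sensor element suppresses random noise before the array is read out as a digital image.

    import numpy as np

    rng = np.random.default_rng(1)
    shape = (64, 64)

    def instantaneous_irradiance():
        # Hypothetical light pattern reaching the focal plane, plus random noise.
        scene = np.full(shape, 0.2)
        scene[20:40, 20:40] = 0.8          # a bright patch projected by the lens
        return scene + rng.normal(0.0, 0.05, shape)

    n_samples = 100                         # longer integration time
    accumulated = np.zeros(shape)
    for _ in range(n_samples):
        accumulated += instantaneous_irradiance()

    signal = accumulated / n_samples        # averaging suppresses the noise
    digital = np.round(np.clip(signal, 0.0, 1.0) * 255).astype(np.uint8)
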
