MODULE 1
Digital Image Fundamentals
Image Definition: An image is a two-dimensional function f(x, y), where x and y are spatial
coordinates, and the amplitude of f at any pair of coordinates (x, y) is the intensity or gray level of
the image at that point.
Digital Image: When x, y, and intensity values are finite and discrete, the image is called a digital
image. The basic elements of a digital image are called pixels.
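The idea of a digital image as a finite grid of pixels can be sketched in Python (an illustrative toy example, not part of the syllabus): a nested list stands in for f(x, y), with x indexing rows and y indexing columns.

```python
# A digital image is a 2D grid of discrete intensity values (pixels).
# Here f[x][y] plays the role of the function f(x, y); each value is an
# 8-bit gray level in the range 0-255.
f = [
    [ 12,  40,  85],
    [ 60, 128, 200],
    [ 90, 170, 255],
]

def intensity(x, y):
    """Return the gray level of the pixel at spatial coordinates (x, y)."""
    return f[x][y]

print(intensity(1, 2))  # gray level of the pixel in row 1, column 2 -> 200
```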
Digital Image Processing: This field involves the manipulation of digital images using computers
and encompasses various methods to process images across the entire electromagnetic (EM)
spectrum.
Imaging Beyond Human Vision: Unlike humans, machines can process images generated from
sources across the entire EM spectrum, such as gamma rays, radio waves, ultrasound, and electron
microscopy.
Distinction Between Fields:
Image Processing: Involves processes where both input and output are images.
Image Analysis: Focuses on extracting meaningful attributes (e.g., edges, contours, or
objects) from an image.
Computer Vision: Seeks to emulate human vision, including decision-making and actions
based on visual input, closely linked to artificial intelligence (AI).
Image Processing Continuum: No clear boundaries exist between image processing, image analysis,
and computer vision, but processes can be categorized into:
1. Low-level processes: Basic operations (e.g., noise reduction, contrast enhancement), where
both input and output are images.
2. Mid-level processes: Involves image segmentation, object description, and classification,
where output is attributes (e.g., edges, contours).
3. High-level processes: Involves making sense of recognized objects, associated with image
analysis and computer vision.
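As a minimal sketch of a low-level process (noise reduction), the toy `mean_filter` below replaces each interior pixel with the average of its 3x3 neighbourhood; the function name and the border handling (edge pixels copied unchanged) are illustrative assumptions, not a standard API.

```python
def mean_filter(img):
    """Low-level process: 3x3 averaging for noise reduction.
    Input and output are both images; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            s = sum(img[i][j] for i in (x - 1, x, x + 1)
                              for j in (y - 1, y, y + 1))
            out[x][y] = s // 9
    return out

noisy = [
    [10, 10, 10],
    [10, 250, 10],   # a single bright "noise" pixel
    [10, 10, 10],
]
print(mean_filter(noisy)[1][1])  # noise spike flattened toward its neighbours -> 36
```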
Overlap Between Fields: A key overlap between image processing and analysis is the recognition of
individual regions or objects.
Applications of Digital Image Processing: Used in a wide range of areas with significant social and
economic value, including text analysis, object recognition, and more complex vision-based tasks.
History of DIP
First Digital Image Application: Early 1920s, in the newspaper industry, images were
transmitted between London and New York via submarine cable using the Bartlane system.
Transmission time reduced from weeks to hours.
Bartlane System: Initial digital pictures were reproduced on a telegraph printer with special
typefaces simulating halftones, later replaced by photographic reproduction from perforated
tapes.
Improvements: Bartlane systems evolved from coding images in 5 gray levels (1921) to 15
gray levels (1929), improving tonal quality and resolution.
Koustav Biswas, Dept. of CSE, DSATM
Digital Image Processing 21CS732
Not Yet Digital Image Processing: Although these images were transmitted in digital form,
this is not considered digital image processing, as no computers were involved.
Tied to Computer Development: Digital image processing evolved with the development of
computers, requiring significant storage and computational power.
Computer Milestones: Key advances contributing to image processing include:
1. Invention of the transistor (1948),
2. High-level programming languages (1950s-60s),
3. Integrated circuits (1958),
4. Microprocessor development (1970s),
5. IBM PC (1981),
6. Miniaturization of components (LSI, VLSI, ULSI).
Early Image Processing Tasks: The first meaningful image processing tasks began in the
1960s, with space programs like NASA’s Ranger 7 in 1964 processing images of the moon to
correct distortions.
Expansion of Applications: From the late 1960s and early 1970s, image processing
expanded into medical imaging (CT scans), remote sensing, astronomy, industry, biology, and
more.
Medical Imaging Milestone: The invention of Computerized Axial Tomography (CAT/CT
scans) in the 1970s revolutionized medical diagnosis.
Other Applications: Image processing techniques now span many fields, such as X-ray
interpretation, pollution tracking, archeology, high-energy physics, electron microscopy, and
more.
Machine Perception: Image processing is used for machine-based tasks like automatic
character recognition, machine vision in industry, fingerprint processing, satellite image
analysis, and more.
Continued Growth: The decrease in computer costs and the rise of the Internet have expanded
digital image processing applications dramatically.
Examples of fields that use DIP
1. Gamma Rays (10^6 to 10^5 eV)
Application: Gamma rays are primarily used in medical imaging (like PET scans), cancer
therapy (radiotherapy), and sterilization of medical equipment.
Dipole Interaction: Gamma rays interact mainly with atomic nuclei. Dipole interactions are
rare at these high energies, since gamma photons are far more likely to cause ionization and
nuclear reactions.
2. X-rays (10^5 to 10^2 eV)
Application: X-rays are commonly used in medical imaging (such as CT scans and X-ray
radiography), security scanning, and materials analysis.
Dipole Interaction: In X-rays, dipole interactions happen through the photoelectric effect,
where X-ray photons eject electrons from atoms. X-rays cause dipole oscillations in atoms
that result in the emission of lower-energy photons.
3. Ultraviolet (UV) Radiation (10^2 to 10^1 eV)
Application: UV light is used in sterilization, water purification, and forensic analysis. It also
plays a role in the production of Vitamin D in the human body.
Dipole Interaction: UV radiation induces electronic transitions in molecules. This can excite
electrons, leading to fluorescence or photochemical reactions, which involve dipole
interactions in atoms and molecules.
4. Visible Light (~1 eV)
Application: Visible light is essential for vision, illumination, photography, and optical
communications.
Dipole Interaction: In this region, dipole interactions are common in light absorption and
emission processes. Molecules and atoms absorb light when the oscillating electric field
interacts with dipole moments, causing transitions between electronic energy levels.
5. Infrared Radiation (10^-1 to 10^-2 eV)
Application: Infrared is used in thermal imaging, night vision, remote controls, and
spectroscopy.
Dipole Interaction: Infrared radiation excites vibrational modes in molecules with dipole
moments. This interaction is essential for infrared spectroscopy, where absorption patterns
reveal molecular structures.
6. Microwaves (10^-2 to 10^-4 eV)
Application: Microwaves are used in communication (Wi-Fi, satellite, mobile phones), radar,
and microwave ovens.
Dipole Interaction: Microwaves cause dipole rotation in polar molecules (like water). The
alternating electric field of microwaves makes polar molecules rotate, producing heat in
applications like microwave cooking.
7. Radio Waves (10^-4 eV and lower)
Application: Radio waves are used in broadcasting, telecommunications, and radar systems.
Dipole Interaction: Radio waves interact with dipoles in antennas. The oscillating electric
field in radio waves induces current in the dipole antennas, which is the basis for radio
transmission and reception.
Fundamental steps in digital image processing
The methods of digital image processing can broadly be categorized into two groups:
1. Methods where both input and output are images: These include image acquisition,
enhancement, restoration, color processing, wavelet processing, compression, and
morphological processing. The goal of these methods is to improve, modify, or extract useful
visual information from the images.
2. Methods where inputs may be images, but outputs are attributes extracted from the
images: This includes tasks like segmentation, representation, description, and object
recognition. These methods analyze image data to extract meaningful features or perform
specific tasks, such as identifying objects within the images.
1. Image Acquisition:
o The process of obtaining an image, typically by capturing it via a camera or sensor.
This stage may include preprocessing such as scaling.
2. Image Enhancement:
o This is about manipulating an image to make it more suitable for a specific
application. It is subjective, as the method depends on the viewer’s judgment of the
result.
o Techniques are introduced with examples like enhancing X-ray or satellite images.
3. Image Restoration:
o Focuses on improving an image’s appearance based on objective mathematical or
probabilistic models of degradation.
4. Color Image Processing:
o Involves the use of color models to process and analyze color images. With the
growing use of digital images on the internet, color image processing has gained
importance.
5. Wavelets and Multiresolution Processing:
o Wavelets allow for image representation at various resolutions and are commonly
used for image compression and pyramidal representations.
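One level of a pyramidal (multiresolution) representation can be sketched by 2x2 block averaging; the helper below is an illustrative toy for the idea of lower-resolution levels, not the wavelet transform itself.

```python
def pyramid_level(img):
    """One coarser pyramid level: average each non-overlapping 2x2 block,
    halving the resolution in both directions."""
    h, w = len(img), len(img[0])
    return [[(img[2*x][2*y] + img[2*x][2*y+1] +
              img[2*x+1][2*y] + img[2*x+1][2*y+1]) // 4
             for y in range(w // 2)]
            for x in range(h // 2)]

img = [
    [10, 20, 30, 40],
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [50, 60, 70, 80],
]
print(pyramid_level(img))  # 4x4 image reduced to 2x2 -> [[15, 35], [55, 75]]
```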
6. Compression:
o Techniques to reduce the storage and bandwidth required for image transmission,
such as JPEG.
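JPEG itself is transform-based and far more involved, but the core idea of removing redundancy can be illustrated with the simplest lossless scheme, run-length encoding (the function below is a hypothetical sketch, not part of any standard):

```python
def rle_encode(pixels):
    """Run-length encode a scanline as (value, run_length) pairs.
    Lossless, and effective when neighbouring pixels repeat."""
    runs = []
    for v in pixels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

line = [255, 255, 255, 0, 0, 255]
print(rle_encode(line))  # [(255, 3), (0, 2), (255, 1)]
```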
7. Morphological Processing:
o Focuses on extracting image components that are useful for shape representation and
description.
8. Segmentation:
o One of the most important and challenging tasks, segmentation involves partitioning
an image into constituent parts or objects. Successful segmentation is crucial for
subsequent recognition tasks.
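The simplest segmentation method is global thresholding: every pixel brighter than a threshold is labeled object (1) and the rest background (0). A toy sketch (illustrative names, not a library API):

```python
def threshold(img, t):
    """Partition an image into object (1) and background (0) by gray level."""
    return [[1 if v > t else 0 for v in row] for row in img]

img = [
    [ 20,  30, 200],
    [ 25, 210, 220],
]
print(threshold(img, 128))  # [[0, 0, 1], [0, 1, 1]]
```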
9. Representation and Description:
o After segmentation, this stage involves converting raw pixel data into a form that can
be processed by a computer, focusing on boundary or region-based representation and
feature selection.
10. Object Recognition:
o Assigns labels to objects based on their attributes. This is the final step in many image
processing systems where specific objects are identified within the image.
Knowledge Base:
The knowledge base supports the operation of all modules, guiding the image processing tasks and
providing prior knowledge about the problem domain. This could range from simple details like
regions of interest in the image to complex interrelated data such as defect types in materials or
satellite images.
Components of an Image Processing System
(Figures omitted: block diagrams of the components of a general-purpose image processing system.)
The voltage signals from these sensors are then digitized, transforming the analog responses into
digital image data. This process forms the foundation for modern digital imaging in fields ranging
from photography to medical diagnostics and satellite imagery.
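The digitization step can be sketched as uniform quantization of sampled sensor voltages into discrete gray levels (an illustrative toy model; the `quantize` helper and the 0-1 V range are assumptions):

```python
def quantize(samples, levels=256, v_max=1.0):
    """Map analog sensor voltages in [0, v_max] to discrete gray levels."""
    step = v_max / levels
    return [min(levels - 1, int(v / step)) for v in samples]

voltages = [0.0, 0.12, 0.5, 0.99]    # sampled analog sensor output
print(quantize(voltages))             # [0, 30, 128, 253]
```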
Image Acquisition Using a Single Sensor
Photodiode Sensor: A widely used sensor made from silicon that outputs a voltage waveform
proportional to the light it detects.
Filter for Selectivity: Filters, like a green filter, enhance the sensor's selectivity by emphasizing
specific wavelengths of light, resulting in a stronger response for that color.
2D Image Generation: To create 2D images using a single sensor, relative motion in both x- and y-
directions between the sensor and the area being imaged is required.
High-Precision Scanning: A common method involves mounting the object on a rotating drum for
one-dimensional motion, while the sensor moves perpendicularly, offering accurate but slow
scanning.
Laser-Based Imaging: Some systems use a laser combined with moving mirrors to scan and direct
reflected light back to the sensor, applicable with strip or array sensors for advanced imaging.
1. Sensor Strip Arrangement: A commonly used geometry is an in-line strip of sensors. The
strip provides imaging in one direction, while motion perpendicular to the strip completes
the second dimension, forming a 2D image.
2. Application in Flatbed Scanners: Flatbed scanners use in-line sensors to capture high-
resolution images. Devices can have 4000 or more sensors, with the strip capturing one image
line at a time, and the motion of the scanner completing the image in the perpendicular
direction.
3. Airborne Imaging: In applications such as airborne imaging, in-line sensors are mounted on
aircraft, capturing one line of an image as the plane moves over a geographical area. This
motion creates a 2D image of the terrain by responding to different electromagnetic bands.
4. Ring Configuration for 3D Imaging: In medical and industrial imaging, sensors are
arranged in a ring around the object. A rotating X-ray source illuminates the object, and the
sensors collect the X-ray energy passing through it, producing cross-sectional (slice) images.
5. CAT Imaging and Processing: This setup forms the basis of computerized axial tomography
(CAT), which requires processing algorithms to transform sensor data into cross-sectional
images. Techniques like MRI and PET use similar principles but with different illumination
sources and sensors, resulting in 3D digital volumes of stacked images.
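The line-by-line acquisition described in items 1-3 can be sketched as repeatedly reading a 1D sensor strip while the platform steps perpendicular to it; `read_line` below is a hypothetical sensor model, not a real device API.

```python
def scan(read_line, n_lines):
    """Build a 2D image from a 1D sensor strip: one line per motion step."""
    return [read_line(step) for step in range(n_lines)]

# Hypothetical sensor model: brightness varies with position and step.
image = scan(lambda step: [step * 10 + y for y in range(4)], 3)
print(image)  # [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]
```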