Digital Image Processing Unit 1 ppt
CS3EA14
Swati Vaidya
Unit I
Imaging, Digital Image Processing, Fundamental Steps in Image Processing, Components of Image Processing System,
Elements of Visual Perception, Structure of Human Eye, Image Sensing and Acquisition, Image Sampling and
Quantization.
Imaging
• Definition:
• Imaging is the process of capturing visual information using devices such as
cameras, scanners, and sensors.
• Types of Imaging:
• Optical Imaging: Capturing images using light (e.g., cameras).
• Thermal Imaging: Capturing images based on heat emission (e.g., thermal
cameras).
• X-ray Imaging: Using X-rays to capture images of internal structures (e.g.,
medical imaging).
• Applications:
• Medical Imaging, Remote Sensing, Industrial Inspection, etc.
What is an image?
2. BLACK AND WHITE IMAGE – An image that consists of only black and white
pixels is called a black and white image.
3. 8-bit COLOR FORMAT – This is the most common image format. It has 256
different shades and is commonly known as a grayscale image. In this
format, 0 stands for black, 255 stands for white, and 127 stands for gray.
An M×N digital image can be written as the matrix

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
            ...          ...               ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Every element of
this matrix is called an image element, picture element, or pixel.
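The matrix view above can be sketched with NumPy; the 3×3 pixel values below are made-up illustration data, not from the slides:

```python
import numpy as np

# A digital image is a matrix of pixels; in 8-bit grayscale,
# 0 is black, 255 is white, and 127 is mid-gray.
img = np.array([
    [0,   127, 255],
    [255, 0,   127],
    [127, 255, 0],
], dtype=np.uint8)

print(img.shape)   # (3, 3) -> 3 rows, 3 columns of pixels
print(img[0, 2])   # 255 -> the top-right pixel is white
```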
Digital Image Processing
• Definition:
• The field of digital image processing refers to processing digital images by means of
a digital computer.
2. Image enhancement: It is among the simplest and most appealing areas of digital
image processing. The idea is to bring out details that are obscured, or simply
to highlight certain features of interest in an image. Image enhancement is a very
subjective area of image processing. It involves improving the visual quality of an
image, such as increasing contrast, reducing noise, and removing artifacts.
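As an illustration of enhancement by increasing contrast, here is a minimal contrast-stretching sketch in Python; the tiny 2×2 image and the linear-stretch formula are assumptions for demonstration, not a method prescribed by the slides:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch pixel intensities to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    scaled = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return scaled.astype(np.uint8)

# A low-contrast image whose values span only 100..150
dull = np.array([[100, 125], [150, 110]], dtype=np.uint8)
print(stretch_contrast(dull))   # values now span the full 0..255 range
```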
3. Image restoration: It deals with improving the appearance of an image. It is
an objective approach, in the sense that restoration techniques tend to be based
on mathematical or probabilistic models of image degradation. Enhancement, on
the other hand, is based on human subjective preferences regarding what
constitutes a “good” enhancement result. Restoration involves removing degradation
from an image, such as blurring, noise, and distortion.
4. Color image processing: This area has been gaining importance because
of the widespread use of digital images over the internet. Color image processing
deals primarily with color models and their implementation in image processing
applications.
5. Wavelets and Multiresolution Processing: These are the foundation for
representing images in various degrees of resolution.
6. Compression: It deals with techniques for reducing the storage required to save
an image, or the bandwidth required to transmit it over a network. It has two
major approaches: a) Lossless Compression, b) Lossy Compression.
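A toy example of the lossless approach: run-length encoding, one of the simplest lossless schemes. The helper names and the sample pixel row are illustrative, not from the slides:

```python
def rle_encode(pixels):
    """Run-length encode a row of pixel values (lossless)."""
    if not pixels:
        return []
    runs = []
    value, count = pixels[0], 1
    for p in pixels[1:]:
        if p == value:
            count += 1
        else:
            runs.append((value, count))
            value, count = p, 1
    runs.append((value, count))
    return runs

def rle_decode(runs):
    """Reverse of rle_encode: exact reconstruction, so no information is lost."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 255]
encoded = rle_encode(row)
print(encoded)                       # [(255, 3), (0, 2), (255, 1)]
assert rle_decode(encoded) == row    # lossless round trip
```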
7. Morphological processing: It deals with tools for extracting image
components that are useful in the representation and description of the shape and
boundary of objects. It is used mainly in automated inspection applications.
8. Image segmentation: This involves dividing an image into regions or segments, each of
which corresponds to a specific object or feature in the image.
9. Feature Extraction: It follows the output of the segmentation step, which is raw pixel
data constituting either the boundary of a region or the points inside the region itself. Feature
extraction involves representing an image in a way that can be analyzed and manipulated by a
computer, and describing the features of an image in a compact and meaningful way. It
consists of feature detection and feature description. Feature detection refers to finding the
features in an image, region, or boundary; feature description assigns quantitative attributes to
the detected features.
10. Image pattern classification: This is the process that assigns a label (e.g., “vehicle”) to an
object based on its feature descriptors. It is the last step of image processing and typically
relies on artificial-intelligence software.
11. Knowledge base: Knowledge about a problem domain is coded into an image processing
system in the form of a knowledge base. This knowledge may be as simple as detailing
regions of an image where the information of interest is known to be located, thus limiting
the search that has to be conducted in seeking that information. The knowledge base can
also be quite complex, such as an interrelated list of all major possible defects in a materials
inspection problem, or an image database containing high-resolution satellite images of a
region in connection with a change-detection application. The knowledge base also controls
the interaction between processing modules.
Components of a general-purpose image
processing system-
1. Image Sensors: With reference to sensing, two elements are required to acquire a digital
image. The first is a physical device that is sensitive to the energy radiated by the object we
wish to image. The second, called a digitizer, is a device for converting the output of the
physical sensing device into digital form. Image sensors sense the intensity, amplitude,
coordinates, and other features of the image and pass the result to the image processing
hardware. This stage includes the problem domain.
7. Hardcopy Device: Hardcopy devices for recording images include laser printers, film
cameras, heat sensitive devices, ink-jet units, and digital units, such as optical and CD-ROM
disks. Film provides the highest possible resolution, but paper is the obvious medium of choice
for written material. For presentations, images are displayed on film transparencies or in a
digital medium if image projection equipment is used. Image displays in use today are mainly
color TV monitors.
8. Networking and cloud: Networking is almost a default function in any computer system in
use today. Because of the large amount of data inherent in image processing applications, the
key consideration in image transmission is bandwidth. The network connects all of the above
elements of the image processing system.
Elements of Visual Perception
• The field of digital image processing is built on the foundation of mathematical and
probabilistic formulation, but human intuition and analysis play the main role to
make the selection between various techniques, and the choice or selection is
basically made on subjective, visual judgements.
• In human visual perception, the eyes act as the sensor or camera, neurons act as the
connecting cable and the brain acts as the processor.
• The basic elements of visual perceptions are:
• Structure of Eye
• Image Formation in the Eye
• Brightness Adaptation and Discrimination
STRUCTURE OF THE HUMAN EYE
Elements of Visual Perception
•Human Visual System:
• Eye: Captures light and converts it into neural signals.
• Optic Nerve: Transmits visual information from the
eye to the brain.
• Brain: Processes and interprets visual signals.
•Visual Perception:
• The process by which the brain interprets and makes
sense of visual information.
•Factors Affecting Visual Perception:
• Light, color, contrast, and spatial resolution.
Structure of Human Eye
•Anatomy:
• Cornea: Transparent front layer that refracts light.
• Pupil: Opening that regulates the amount of light
entering the eye.
• Lens: Focuses light onto the retina.
• Retina: Contains photoreceptor cells (rods and cones)
that detect light.
• Optic Nerve: Transmits visual information to the brain.
•Function:
• The eye captures light and converts it into electrical
signals that are interpreted by the brain to form images.
Structure of the Human Eye
The human eye is almost a sphere, about 20 mm in diameter, and is made up of three main
layers:
1. Outer Layer:
-Cornea: A clear, tough tissue that covers the front of the eye.
-Sclera: An opaque, white membrane that covers the rest of the eye.
The retina also has a "blind spot" where the optic nerve leaves the eye, as there are no light
receptors in that area. Cones are densest in the center of the fovea, while rods are more
densely packed a bit further out, before their density decreases toward the edges of the retina.
Scotopic vision refers to the eye's ability to see in low-light conditions, primarily using the
rods in the retina. Rods are highly sensitive to light but do not detect color, which is why
objects appear colorless in dim lighting.
Photopic vision refers to the eye's ability to see in bright-light conditions, primarily using the
cones in the retina. Cones are responsible for detecting color and fine details, making them
crucial for sharp and color vision during the day or in well-lit environments.
Image Formation in the Eye:
In a regular camera, the lens has a fixed focal length, meaning it doesn’t change. To focus
on objects at different distances, the camera adjusts by moving the lens closer to or
further from the film or sensor.
In the human eye, it's the opposite. The distance between the lens and the retina (the part
that senses the image) stays the same. To focus on objects at different distances, the eye
changes the shape of the lens. This is done by muscles in the ciliary body, which make
the lens flatter for distant objects and thicker for nearby ones.
The distance from the center of the eye's lens to the retina is about 17 mm. The focal
length of the lens varies from about 14 mm to 17 mm. When the eye is relaxed and
looking at something far away (more than 3 meters), the focal length is around 17 mm.
An image is formed when the lens of the eye focuses the outside world onto a light-sensitive
membrane at the back of the eye, called the retina. The lens focuses light on the
photoreceptive cells of the retina, which detect the photons of light and respond by producing
neural impulses.
If a person looks at a 15-meter-high tree from 100 meters away, then by similar triangles
15/100 = h/17, so the tree’s image on the retina has a height h of about 2.55 mm. This image
is focused mainly on the fovea, the part of the retina responsible for sharp vision. Light
receptors in the retina convert this image into electrical signals, which the brain then
interprets.
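The similar-triangles calculation above can be checked with a few lines of Python (the function name is an illustrative assumption):

```python
def retinal_image_height(object_height_m, distance_m, lens_to_retina_mm=17.0):
    """Similar triangles: image height / lens-to-retina distance
    equals object height / viewing distance."""
    return object_height_m / distance_m * lens_to_retina_mm

# A 15 m tree viewed from 100 m, with ~17 mm lens-to-retina distance
h = retinal_image_height(15, 100)
print(round(h, 2))   # 2.55 (mm)
```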
Brightness Adaptation and Discrimination:
Digital images are displayed as a discrete set of intensities. The eye’s ability to
discriminate between different intensity levels is an important consideration
in presenting image processing results.
The range of light intensity levels to which the human visual system can adapt is enormous,
on the order of 10^10, from the scotopic threshold to the glare limit. In photopic vision alone,
the range is about 10^6.
Image sensing and Acquisition
• The types of images in which we are interested are generated by the combination of an “illumination”
source and the reflection or absorption of energy from that source by the elements of the “scene” being
imaged.
• We enclose illumination and scene in quotes to emphasize the fact that they are considerably more general
than the familiar situation in which a visible light source illuminates a common everyday 3-D (three-
dimensional) scene.
• For example,
• The illumination may originate from a source of electromagnetic energy such as radar, infrared, or X-ray
energy.
• But, as noted earlier, it could originate from less traditional sources, such as ultrasound or even a
computer-generated illumination pattern. Similarly, the scene elements could be familiar objects, but they
can just as easily be molecules, buried rock formations, or a human brain.
• We could even image a source, such as acquiring images of the sun. Depending on the nature of the
source, illumination energy is reflected from, or transmitted through, objects. An example in the first
category is light reflected from a planar surface. An example in the second category is when X-rays pass
through a patient's body for the purpose of generating a diagnostic X-ray film.
In some applications, the reflected or
transmitted energy is focused onto a photo
converter (e.g., a phosphor screen), which
converts the energy into visible light. Electron
microscopy and some applications of gamma
imaging use this approach.
In this arrangement, the response of each sensor is proportional to the integral of the light
energy projected onto the surface of the sensor. Noise reduction is achieved by letting the
sensor integrate the input light signal over minutes or even hours.
Advantage: Since the sensor array is 2-D, a complete image can be obtained by focusing the
energy pattern onto the surface of the array.
Sampling vs. Quantization:
• Sampling is the digitization of coordinate values; quantization is the digitization of
amplitude values.
• In sampling, the x-axis (time) is discretized while the y-axis (amplitude) remains
continuous; in quantization, the y-axis (amplitude) is discretized.
• Sampling is done prior to the quantization process; quantization is done after the
sampling process.
• Sampling determines the spatial resolution of the digitized image; quantization
determines the number of gray levels in the digitized image.
• Sampling reduces a continuous curve to a series of “tent poles” over time;
quantization reduces it to a series of stair steps.
• In sampling, a single amplitude value is selected in each time interval to represent it;
in quantization, the sampled amplitude values are rounded off to a defined set of
possible levels.
Image Sampling and Quantization
•Sampling: Converting continuous images into discrete
pixel values.
•Quantization: Assigning a finite number of levels to the
sampled values.
•Resolution: The level of detail an image holds.
•Bit Depth: The number of bits used to represent each
pixel.
Image Sampling and Quantization
•Image Sampling:
• The process of converting a continuous image into a discrete form
by measuring its intensity at regular intervals (pixels).
• Spatial Resolution: Determined by the number of pixels in the
image.
•Image Quantization:
• The process of mapping the continuous range of pixel values to a
finite range of discrete levels.
• Bit Depth: Number of bits used to represent each pixel (e.g., 8-bit,
16-bit).
•Example:
• Sampling: Higher sampling rate increases image resolution.
• Quantization: More levels of quantization increase the accuracy of
intensity representation.
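The two operations can be sketched together in NumPy; the step sizes, the 2-bit depth, and the synthetic 16×16 ramp image are illustrative assumptions:

```python
import numpy as np

def sample(img, step):
    """Sampling: keep every step-th pixel in each direction (coarser spatial grid)."""
    return img[::step, ::step]

def quantize(img, bits):
    """Quantization: map each 8-bit pixel to one of 2**bits gray levels."""
    step = 256 // (2 ** bits)
    return (img // step) * step      # snap each pixel to its level's base value

# Synthetic 16x16 ramp image with all 256 gray values
img = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)

small = sample(img, 4)               # 16x16 -> 4x4: lower spatial resolution
coarse = quantize(img, 2)            # 8-bit -> 2-bit: only 4 gray levels remain
print(small.shape)                   # (4, 4)
print(len(np.unique(coarse)))        # 4
```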
Example of Image Sampling and Quantization
•Sampled Image:
• Show the same image after sampling (e.g., pixel grid).
•Quantized Image:
• Show the same image after quantization (e.g., reduced bit
depth).
Thank You……………………