Digital Image Processing

CS3EA14
Swati Vaidya

Unit I
Imaging, Digital Image Processing, Fundamental Steps in Image Processing, Components of Image Processing System,
Elements of Visual Perception, Structure of Human Eye, Image Sensing and Acquisition, Image Sampling and
Quantization.
Imaging
• Definition:
• Imaging is the process of capturing visual information using devices such as
cameras, scanners, and sensors.
• Types of Imaging:
• Optical Imaging: Capturing images using light (e.g., cameras).
• Thermal Imaging: Capturing images based on heat emission (e.g., thermal
cameras).
• X-ray Imaging: Using X-rays to capture images of internal structures (e.g.,
medical imaging).
• Applications:
• Medical Imaging, Remote Sensing, Industrial Inspection, etc.
What is an image?

“An image is defined as a two-dimensional function, F(x, y), where x and y are
spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is
called the intensity or gray level of the image at that point. When x, y, and the
intensity values of F are all finite, we call it a digital image.”

• In other words, an image can be defined as a two-dimensional array arranged in rows and columns.
• A digital image is composed of a finite number of elements, each of which
has a particular value at a particular location.
• These elements are referred to as picture elements, image elements, and pixels.
• Pixel is the term most widely used to denote the elements of a digital image.
Types of an image:
1. BINARY IMAGE – A binary image, as its name suggests, contains only two
pixel values, i.e., 0 and 1, where 0 refers to black and 1 refers to white. This
image is also known as a monochrome image.

2. BLACK AND WHITE IMAGE – An image which consists of only black
and white colour is called a black and white image.

3. 8-bit COLOR FORMAT – It is the best-known image format. It has 256
different shades and is commonly known as a grayscale image. In
this format, 0 stands for black, 255 stands for white, and 127 stands for
gray.

4. 16-bit COLOR FORMAT – It is a colour image format. It has 65,536 different
colours in it. It is also known as the high colour format. In this format the
distribution of colour is not the same as in a grayscale image.
A 16-bit format is typically divided into three channels, Red,
Green and Blue, the familiar RGB format.
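
The counts quoted above follow directly from the bit depth (2^bits distinct values); as a quick illustrative check, not part of the original slides, in Python:

```python
# Number of distinct values representable at each bit depth mentioned above
print(2 ** 1)    # 2      -> binary image (0 and 1)
print(2 ** 8)    # 256    -> 8-bit grayscale (0..255)
print(2 ** 16)   # 65536  -> 16-bit "high color"
```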
Image as a Matrix:
Since images are arranged in rows and columns, an image can be represented in the
following matrix form:

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
              ...          ...        ...    ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is, by definition, a digital image. Every element of
this matrix is called an image element, picture element, or pixel.
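
As a minimal sketch of this idea (using NumPy purely as an illustrative assumption; the slides themselves mention MATLAB later), a digital image is simply an M x N array whose entries are the pixel intensities:

```python
import numpy as np

# A small 3 x 4 grayscale image: each row/column index pair addresses one pixel,
# and each entry is an intensity (gray level) value.
f = np.array([[ 12,  50,  98, 255],
              [  0,  64, 128, 192],
              [ 30,  30,  30,  30]], dtype=np.uint8)

print(f.shape)    # (3, 4)  -> M rows, N columns
print(f[1, 2])    # 128     -> intensity at row 1, column 2
```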
Digital Image Processing
• Definition:
• The field of digital image processing refers to processing digital images by means of a
digital computer.

• Image processing mainly includes the following steps:

• 1. Importing the image via image acquisition tools;
2. Analysing and manipulating the image;
3. Output, in which the result can be an altered image or a report based on the analysis of that
image.
• Goals:
• Enhance image quality, extract meaningful information, and facilitate image
interpretation.
• Applications:
• Medical Diagnosis, Satellite Image Analysis, Object Recognition, etc.
• Digital Image Processing means processing a digital image by means of a
digital computer. We can also say that it is the use of computer algorithms
to obtain an enhanced image or to extract useful information.

• Digital image processing is the use of algorithms and mathematical models
to process and analyze digital images. The goal of digital image processing
is to enhance the quality of images, extract meaningful information from
images, and automate image-based tasks.
The Origins of Digital Image Processing
• One of the earliest applications of digital images was in the newspaper
industry, when pictures were first sent by submarine cable between
London and New York.
• Introduction of the Bartlane cable picture transmission system in the early
1920s reduced the time required to transport a picture across the Atlantic
from more than a week to less than three hours.
Fundamental steps in digital image processing
The basic steps involved in digital image processing are:

1. Image acquisition: It could be as simple as being given an image that is already in
digital form. Generally the image acquisition stage involves preprocessing such as scaling.
This involves capturing an image using a digital camera or scanner, or importing an
existing image into a computer.

2. Image enhancement: It is among the simplest and most appealing areas of digital
image processing. The idea behind this is to bring out details that are obscured, or simply
to highlight certain features of interest in an image. Image enhancement is a very subjective
area of image processing. This involves improving the visual quality of an image, such
as increasing contrast, reducing noise, and removing artifacts.
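
As one concrete and deliberately simple illustration of the contrast adjustment mentioned above, the sketch below linearly stretches a low-contrast grayscale array to the full 0-255 range; NumPy is assumed, and this is only one of many enhancement techniques:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch intensities so the darkest pixel maps to 0 and the brightest to 255."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

# A low-contrast image whose values only span 100..140
dull = np.array([[100, 110], [130, 140]], dtype=np.uint8)
print(stretch_contrast(dull))          # values now span the full 0..255 range
```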
3. Image restoration: It deals with improving the appearance of an image. It is
an objective approach, in the sense that restoration techniques tend to be based
on mathematical or probabilistic models of image processing. Enhancement, on
the other hand, is based on human subjective preferences regarding what
constitutes a “good” enhancement result. This involves removing degradation
from an image, such as blurring, noise, and distortion.
4. Color image processing: It is an area that has been gaining importance because
of the use of digital images over the internet. Color image processing deals with
basically color models and their implementation in image processing
applications.
5. Wavelets and Multiresolution Processing: These are the foundation for
representing images in various degrees of resolution.
6. Compression: It deals with techniques for reducing the storage required to save
an image, or the bandwidth required to transmit it over a network. It has two
major approaches: a) lossless compression and b) lossy compression.
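
To make the lossless idea concrete, here is a toy run-length encoder (an illustrative sketch, not one of the standard image codecs): runs of identical pixel values are replaced by (value, count) pairs and can be decoded back exactly, so no information is lost:

```python
def rle_encode(pixels):
    """Run-length encode a 1-D sequence of pixel values into (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)
        else:
            encoded.append((p, 1))
    return encoded

def rle_decode(encoded):
    """Exactly recover the original sequence, hence 'lossless'."""
    out = []
    for value, count in encoded:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
code = rle_encode(row)
print(code)                         # [(0, 4), (255, 2), (0, 3)]
assert rle_decode(code) == row      # no information lost
```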
7. Morphological processing: It deals with tools for extracting image
components that are useful in the representation and description of the shape and
boundary of objects. It is mainly used in automated inspection applications.
8. Image segmentation: This involves dividing an image into regions or segments, each of
which corresponds to a specific object or feature in the image.
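
A minimal sketch of one common segmentation technique, global thresholding, is shown below (NumPy assumed; practical systems use far more sophisticated methods): pixels brighter than a chosen threshold are assigned to one region and the rest to the background:

```python
import numpy as np

def threshold_segment(img, t=128):
    """Return a binary mask: 1 where the pixel belongs to the bright region, 0 elsewhere."""
    return (img > t).astype(np.uint8)

img = np.array([[ 10,  20, 200],
                [ 15, 220, 210],
                [  5,  12, 180]], dtype=np.uint8)
print(threshold_segment(img))
# [[0 0 1]
#  [0 1 1]
#  [0 0 1]]
```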

9. Feature Extraction: It generally follows the output of the segmentation step, that is, raw pixel
data constituting either the boundary of a region or the points in the region itself. This involves
representing an image in a way that can be analyzed and manipulated by a computer, and
describing the features of an image in a compact and meaningful way. Feature extraction
consists of feature detection and feature description. Feature detection refers to finding the
features in an image, region, or boundary. Feature description assigns quantitative attributes to
the detected features.
10. Image pattern classification: It is the process that assigns a label (e.g., “vehicle”) to an
object based on its feature descriptors. It is the last step of image processing and typically
makes use of artificial intelligence techniques in software.
11. Knowledge base: Knowledge about a problem domain is coded into an image processing
system in the form of a knowledge base. This knowledge may be as simple as detailing
regions of an image where the information of interest is known to be located, thus
limiting the search that has to be conducted in seeking that information. The knowledge base
can also be quite complex, such as an interrelated list of all major possible defects in a materials
inspection problem, or an image database containing high-resolution satellite images of a
region in connection with a change detection application. The knowledge base also controls the
interaction between modules.
Components of a general-purpose image processing system
1. Image Sensors: With reference to sensing, two elements are required to acquire a digital
image. The first is a physical device that is sensitive to the energy radiated by the object we
wish to image. The second, called a digitizer, is a device for converting the output of the
physical sensing device into digital form. The image sensor senses the intensity, amplitude,
coordinates, and other features of the image and passes the result to the image processing
hardware. It includes the problem domain.

2. Specialized Image Processing Hardware: Specialized image processing hardware
usually consists of the digitizer and hardware that performs other primitive operations,
such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in
parallel on entire images. Image processing hardware is the dedicated hardware that is used
to process the instructions obtained from the image sensors. It passes the result to the
general-purpose computer.

3. Computer: It is a general-purpose computer and can range from a PC to a
supercomputer depending on the application. The computer used in an image processing
system is the same kind of general-purpose computer that we use in our daily life.
4. Image Processing Software: It consists of specialized modules that perform specific tasks. A
well-designed package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules. More sophisticated software packages allow the integration of these
modules. Image processing software is the software that includes all the mechanisms and
algorithms that are used in an image processing system. A well-known example is the MATLAB
Image Processing Toolbox.
5. Mass Storage: This capability is a must in image processing applications. An image of size
1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one
megabyte of storage space if the image is not compressed.
Storage in image processing applications falls into three principal categories:
i) Short-term storage for use during processing
ii) On-line storage for relatively fast retrieval
iii) Archival storage, characterized by infrequent access, such as magnetic tapes and disks
Mass storage stores the pixels of the images during processing.
Storage is measured in bytes (eight bits), Kbytes (10^3 bytes), Mbytes (10^6 bytes), Gbytes (10^9
bytes), and Tbytes (10^12 bytes).
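
As a quick check of the storage figure quoted above (a small illustrative calculation, not part of the original slides):

```python
# Uncompressed size of a 1024 x 1024 image with 8 bits (1 byte) per pixel
width, height, bytes_per_pixel = 1024, 1024, 1
size_bytes = width * height * bytes_per_pixel
print(size_bytes)            # 1048576 bytes
print(size_bytes / 10**6)    # ~1.05 using the 10^6-bytes convention, i.e. about one megabyte
```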
6. Image Display: Image displays in use today are mainly color TV monitors. These monitors
are driven by the outputs of image and graphics display cards that are an integral part of the
computer system. It includes the monitor or display screen that displays the processed images.

7. Hardcopy Device: Hardcopy devices for recording images include laser printers, film
cameras, heat-sensitive devices, ink-jet units, and digital units, such as optical and CD-ROM
disks. Film provides the highest possible resolution, but paper is the obvious medium of choice
for written material. For presentations, images are displayed on film transparencies or in a
digital medium if image projection equipment is used.

8. Networking and cloud: It is almost a default function in any computer system in use today.
Because of the large amount of data inherent in image processing applications, the key
consideration in image transmission is bandwidth. The network connects all of the above
elements of the image processing system.
Elements of Visual Perception
• The field of digital image processing is built on the foundation of mathematical and
probabilistic formulations, but human intuition and analysis play the main role in
choosing between the various techniques, and the choice is basically made on
subjective, visual judgements.

• In human visual perception, the eyes act as the sensor or camera, neurons act as the
connecting cable and the brain acts as the processor.
• The basic elements of visual perceptions are:
• Structure of Eye
• Image Formation in the Eye
• Brightness Adaptation and Discrimination
STRUCTURE OF THE HUMAN EYE
Elements of Visual Perception
•Human Visual System:
• Eye: Captures light and converts it into neural signals.
• Optic Nerve: Transmits visual information from the
eye to the brain.
• Brain: Processes and interprets visual signals.
•Visual Perception:
• The process by which the brain interprets and makes
sense of visual information.
•Factors Affecting Visual Perception:
• Light, color, contrast, and spatial resolution.
Structure of Human Eye
•Anatomy:
• Cornea: Transparent front layer that refracts light.
• Pupil: Opening that regulates the amount of light
entering the eye.
• Lens: Focuses light onto the retina.
• Retina: Contains photoreceptor cells (rods and cones)
that detect light.
• Optic Nerve: Transmits visual information to the brain.
•Function:
• The eye captures light and converts it into electrical
signals that are interpreted by the brain to form images.
Structure of the Human Eye
The human eye is almost a sphere, about 20 mm in diameter, and is made up of three main
layers:

1. Outer Layer:
-Cornea: A clear, tough tissue that covers the front of the eye.
-Sclera: An opaque, white membrane that covers the rest of the eye.

2. Middle Layer (Choroid):


- The choroid is a layer with blood vessels that nourish the eye. It’s heavily pigmented to
reduce stray light.
- The choroid extends into the ciliary body and iris at the front of the eye. The iris controls the
amount of light entering the eye through the pupil, which can vary in size from 2 to 8 mm.

3. Inner Layer (Retina):


- The retina lines the inside of the back of the eye and is where images are focused.
- It contains cones (about 6-7 million) that are concentrated in the center area called the fovea
and are responsible for seeing colors and fine details in bright light.
- Rods (about 75-150 million) are spread across the retina and help us see in dim light, though
they do not detect color. Rods provide a general view of the scene but with less detail.
The lens of the eye, located behind the iris, helps focus light onto the retina. It is mostly made
of water and protein and has a slight yellow tint that deepens with age. The lens can develop
cataracts, which cloud the lens and impair vision.

The retina also has a "blind spot" where the optic nerve leaves the eye, as there are no light
receptors in that area. Cones are densest in the center of the fovea, while rods are more
densely packed a bit further out, before their density decreases toward the edges of the retina.

Scotopic vision refers to the eye's ability to see in low-light conditions, primarily using the
rods in the retina. Rods are highly sensitive to light but do not detect color, which is why
objects appear colorless in dim lighting.

Photopic vision refers to the eye's ability to see in bright-light conditions, primarily using the
cones in the retina. Cones are responsible for detecting color and fine details, making them
crucial for sharp and color vision during the day or in well-lit environments.
Image Formation in the Eye:

In a regular camera, the lens has a fixed focal length, meaning it doesn’t change. To focus
on objects at different distances, the camera adjusts by moving the lens closer to or
further from the film or sensor.

In the human eye, it's the opposite. The distance between the lens and the retina (the part
that senses the image) stays the same. To focus on objects at different distances, the eye
changes the shape of the lens. This is done by muscles in the ciliary body, which make
the lens flatter for distant objects and thicker for nearby ones.

The distance from the center of the eye's lens to the retina is about 17 mm. The focal
length of the lens varies from about 14 mm to 17 mm. When the eye is relaxed and
looking at something far away (more than 3 meters), the focal length is around 17 mm.
The image is formed when the lens of the eye focuses an image of the outside world onto the
light-sensitive membrane at the back of the eye, called the retina. The lens of the eye focuses light on the
photoreceptive cells of the retina, which detect the photons of light and respond by producing
neural impulses.
If a person looks at a 15-meter-high tree from 100 meters away, the tree’s image on the retina
will have a height of 2.5 mm. This image is focused mainly on the fovea, the part of the retina
responsible for sharp vision. Light receptors in the retina convert this image into electrical
signals, which the brain then interprets.
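
The 2.5 mm figure follows from similar triangles, using the 17 mm lens-to-retina distance given earlier; a small sketch of the arithmetic:

```python
# Similar triangles: image_height / lens_to_retina = object_height / object_distance
object_height_m   = 15.0     # tree height
object_distance_m = 100.0    # viewing distance
lens_to_retina_mm = 17.0

image_height_mm = lens_to_retina_mm * (object_height_m / object_distance_m)
print(image_height_mm)       # 2.55 mm, consistent with the ~2.5 mm quoted above
```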
Brightness Adaptation and Discrimination:
Digital images are displayed as a discrete set of intensities. The eye's ability to
discriminate between black and white at different intensity levels is an important consideration
in presenting image processing results.

The range of light intensity levels to which the human visual system can adapt is of the order
of 10^10, from the scotopic threshold to the glare limit. In photopic vision, the range is about
10^6.
Image sensing and Acquisition
• The types of images in which we are interested are generated by the combination of an “illumination”
source and the reflection or absorption of energy from that source by the elements of the “scene” being
imaged.
• We enclose illumination and scene in quotes to emphasize the fact that they are considerably more general
than the familiar situation in which a visible light source illuminates a common everyday 3-D (three-
dimensional) scene.
• For example,
• The illumination may originate from a source of electromagnetic energy such as radar, infrared, or X-ray
energy.
• But, as noted earlier, it could originate from less traditional sources, such as ultrasound or even a
computer-generated illumination pattern. Similarly, the scene elements could be familiar objects, but they
can just as easily be molecules, buried rock formations, or a human brain.
• We could even image a source, such as acquiring images of the sun. Depending on the nature of the
source, illumination energy is reflected from, or transmitted through, objects. An example in the first
category is light reflected from a planar surface. An example in the second category is when X-rays pass
through a patient's body for the purpose of generating a diagnostic X-ray film.
In some applications, the reflected or
transmitted energy is focused onto a photo
converter (e.g., a phosphor screen), which
converts the energy into visible light. Electron
microscopy and some applications of gamma
imaging use this approach.

The idea is simple: incoming energy is
transformed into a voltage by the combination
of input electrical power and sensor material
that is responsive to the particular type of
energy being detected.

The output voltage waveform is the response
of the sensor(s), and a digital quantity is
obtained from each sensor by digitizing its
response.
Image Acquisition using a single sensor:
An example of a single sensor is a photodiode. To obtain a two-dimensional image
using a single sensor, there must be relative motion in both the x and y directions.

Rotation provides motion in one direction.


Linear motion provides motion in the perpendicular direction.
Image Acquisition using a line sensor (sensor strips):
The sensor strip provides imaging in one direction.
Motion perpendicular to the strip provides imaging in the other direction.
Image Acquisition using an array sensor:
In this, individual sensors are arranged in the form of a 2-D array. This type of arrangement is found in
digital cameras. e.g. CCD array

In this, the response of each sensor is proportional to the integral of the light energy projected onto the
surface of the sensor. Noise reduction is achieved by letting the sensor integrate the input light signal over
minutes or even hours.

Advantage: Since sensor array is 2D, a complete image can be obtained by focusing the energy pattern onto
the surface of the array.

Since the sensor array is coincident with the focal plane, it
produces an output proportional to the integral of the light
received at each sensor.

Digital and analog circuitry sweep these outputs and
convert them to a video signal, which is then digitized
by another section of the imaging system. The output
is a digital image.
Image Sampling and Quantization

• To create a digital image, we need to convert
the continuous sensed data into digital form.
• This involves two processes: sampling and
quantization.
• A continuous image, f(x, y), that we want to
convert to digital form.
• An image may be continuous with respect
to the x- and y-coordinates, and also in
amplitude.
• To convert it to digital form, we have to
sample the function in both coordinates and
in amplitude.
• Digitizing the coordinate values is
called sampling.
• Digitizing the amplitude values is called
quantization.
Difference between Image Sampling and Quantization
To create a digital image, we need to convert the continuous sensed data into digital form.
This involves two processes:
Sampling: Digitizing the co-ordinate value is called sampling.
Quantization: Digitizing the amplitude value is called quantization.
To convert a continuous image f(x, y) into digital form, we have to sample the function in both
co-ordinates and amplitude.
UNIFORM SAMPLING:
Digitizing the coordinate values with equal spacing is called uniform sampling.
NON-UNIFORM SAMPLING:
Digitizing the coordinate values with non-uniform spacing is called non-uniform sampling.
There are two types:
Fine sampling: it is required in the neighborhood of sharp gray-level transitions.
Coarse sampling: it is utilized in relatively smooth regions.
E.g.: consider a simple image consisting of a face superimposed on a uniform background. Clearly
the background carries little detailed information and can be represented by coarse sampling.
The face contains considerably more detail and is represented by fine sampling.
Difference between Image Sampling and Quantization:

• Sampling digitizes the coordinate values; quantization digitizes the amplitude values.
• In sampling, the x-axis (time) is discretized; in quantization, the x-axis remains continuous.
• In sampling, the y-axis (amplitude) remains continuous; in quantization, the y-axis is discretized.
• Sampling is done prior to the quantization process; quantization is done after the sampling process.
• Sampling determines the spatial resolution of the digitized image; quantization determines the number of grey levels in the digitized image.
• Sampling reduces a continuous curve to a series of "tent poles" over time; quantization reduces a continuous curve to a continuous series of stair steps.
• In sampling, a single amplitude value is selected from the different values of the time interval to represent it; in quantization, values representing the time intervals are rounded off to create a defined set of possible amplitude values.
Image Sampling and Quantization
•Sampling: Converting continuous images into discrete
pixel values.
•Quantization: Assigning a finite number of levels to the
sampled values.
•Resolution: The level of detail an image holds.
•Bit Depth: The number of bits used to represent each
pixel.
Image Sampling and Quantization
•Image Sampling:
• The process of converting a continuous image into a discrete form
by measuring its intensity at regular intervals (pixels).
• Spatial Resolution: Determined by the number of pixels in the
image.
•Image Quantization:
• The process of mapping the continuous range of pixel values to a
finite range of discrete levels.
• Bit Depth: Number of bits used to represent each pixel (e.g., 8-bit,
16-bit).
•Example:
• Sampling: Higher sampling rate increases image resolution.
• Quantization: More levels of quantization increase the accuracy of
intensity representation.
Example of Image Sampling and Quantization

•Original Continuous Image:


• Show an example of a continuous image (e.g., a grayscale
gradient).

•Sampled Image:
• Show the same image after sampling (e.g., pixel grid).

•Quantized Image:
• Show the same image after quantization (e.g., reduced bit
depth).
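
A minimal NumPy sketch of the example described on this slide (the gradient image, the sampling factor, and the number of quantization levels are all illustrative choices):

```python
import numpy as np

# "Continuous" image approximated by a finely sampled horizontal grayscale gradient
x = np.linspace(0, 255, 256)
gradient = np.tile(x, (256, 1))            # 256 x 256 image, values 0..255

# Sampling: keep every 8th pixel in each direction (lower spatial resolution)
sampled = gradient[::8, ::8]               # now 32 x 32

# Quantization: reduce 256 intensity levels to 4 levels (lower bit depth)
levels = 4
quantized = np.floor(sampled / 256 * levels) * (255 / (levels - 1))
quantized = quantized.astype(np.uint8)

print(sampled.shape)                       # (32, 32)
print(np.unique(quantized))                # the 4 remaining gray levels: [0 85 170 255]
```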
Thank You……………………
