
Department of Computer Science

Digital image fundamentals

Chapter Two
Chapter 2: Digital image fundamentals

Topic Coverage
Basic concept of image
Digital image representation
Digital image acquisition process
Image sampling and quantization
Representation of different image types
Mathematical tools used in digital image processing
Basic concept of image
The basic concept of an image refers to a visual representation or depiction of an object, scene, or
phenomenon.
In digital signal processing and computer vision, an image is typically a two-dimensional array of
pixels, each pixel representing a tiny unit of information.
An image is defined as a two-dimensional function, f(x, y), where x and y are spatial
coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray
level of the image at that point.
When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a
digital image.
The field of digital image processing refers to processing digital images by means of a digital
computer.
Cont...
A pixel (picture element) is the smallest unit in an image. It contains information about color,
intensity, or other visual attributes.

Resolution refers to the number of pixels in an image. Higher resolution images contain more
pixels and can represent finer details.

In grayscale images, each pixel's brightness is represented by a single value on a scale from 0 to
255 (for 8-bit images).

A value of 0 represents black (no brightness), while 255 represents white (maximum brightness).
Values in between represent varying shades of gray.

In color images represented in RGB (Red, Green, Blue) format, brightness can be measured by
combining the intensity values of the three color channels.
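
As a rough illustration, here is a minimal NumPy sketch of estimating per-pixel brightness from the three RGB channels. The array values and the BT.601 luma weights are one common convention chosen for the example, not something prescribed by the slides; a plain average of the channels is another simple option.

```python
import numpy as np

# A tiny 2x2 RGB image (8-bit values per channel); values are illustrative.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# One common brightness estimate: a weighted sum of R, G, B
# (ITU-R BT.601 luma weights).
weights = np.array([0.299, 0.587, 0.114])
gray = (rgb @ weights).round().astype(np.uint8)

print(gray)
# [[ 76 150]
#  [ 29 255]]
```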
Cont...
When you change the resolution of an image, you are specifying how many pixels occupy each inch of
the image.
For example, an image with a resolution of 600 ppi contains 600 pixels within each inch of
the image.
600 pixels is a lot to fit into a single inch, which is why 600 ppi images look very crisp and
detailed.
Now, compare that to an image with 72 ppi, which has far fewer pixels per inch.

As you've probably guessed, it won't look nearly as sharp as the 600 ppi image.

Resolution rule of thumb: when scanning or photographing, always try to capture the image at
the highest resolution and quality available.
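
A quick worked calculation (the pixel dimensions are made up) showing how ppi relates pixel count to physical print size: dividing the pixel dimensions by the resolution gives the size in inches.

```python
# Print size follows directly from pixel dimensions and resolution (ppi).
# The numbers below are illustrative.
width_px, height_px = 3000, 2400
ppi = 600

print(width_px / ppi, "x", height_px / ppi, "inches")   # 5.0 x 4.0 inches

# The same pixels spread over 72 ppi cover a much larger area, so each inch
# holds far less detail:
print(width_px / 72, "x", height_px / 72, "inches")      # ~41.7 x ~33.3 inches
```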
Cont...
Digital image representation
Digital images are represented using a discrete set of values to capture the visual information.
The most common representation is through a grid of picture elements, known as pixels.

Each pixel corresponds to a tiny unit of the image and contains information about its color,
intensity, or other visual attributes.

An image is divided into a grid of pixels. The arrangement of pixels forms the spatial structure of
the image.
Cont...
Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N
columns.
The values of the coordinates (x, y) now become discrete quantities.

If we consider integer values for these discrete coordinates, the image becomes a digital
representation. Thus, the values of the coordinates at the origin are (x, y) = (0, 0).
The next coordinate values along the first row of the image are (x, y) = (0, 1).
The following figure shows the coordinate convention used.

Figure: General representation of an image's pixels
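
A minimal sketch, assuming NumPy and illustrative gray levels, of how a digital image with M rows and N columns can be stored and indexed following the coordinate convention above:

```python
import numpy as np

# A digital image f(x, y) with M = 3 rows and N = 4 columns,
# stored as a 2-D array of 8-bit gray levels (values are illustrative).
f = np.array([[ 10,  20,  30,  40],
              [ 50,  60,  70,  80],
              [ 90, 100, 110, 120]], dtype=np.uint8)

M, N = f.shape            # (3, 4)
print(f[0, 0])            # intensity at the origin (x, y) = (0, 0) -> 10
print(f[0, 1])            # next sample along the first row, (0, 1) -> 20
```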
Cont...
Figure: What do these numbers represent?
Cont...
Figure: Image representation
Cont...
Digital image acquisition process
Digital image acquisition is the process of capturing visual information from the real world and
converting it into a digital format that can be stored and processed by a computer.
The process involves various steps and technologies, and it is fundamental to fields such as
photography, computer vision, medical imaging, and remote sensing.
1. Sensing:
The process begins with a sensor that captures light or other forms of electromagnetic radiation
from the scene. The type of sensor depends on the application:
Photographic Sensors: Used in digital cameras, these sensors capture light and convert it into
electrical signals.
Infrared Sensors: Used in applications like night vision or remote sensing, these sensors capture
infrared radiation.
Medical Imaging Sensors: X-ray, CT, MRI, and ultrasound sensors capture different types of
radiation or sound waves for medical imaging.
Cont...
2. Optics:
Lenses and optics are often used to focus and direct light onto the sensor.
The quality of the optics can impact the clarity and quality of the captured image.
3. Analog to Digital Conversion (ADC):
The analog signals produced by the sensor need to be converted into a digital format for
processing by a computer.
This is done through an Analog-to-Digital Converter (ADC).
4. Sampling:
The continuous analog signal is sampled at discrete points to represent the image digitally.
The resolution of the digital image is determined by the number of samples taken.
Cont...
5. Quantization:
The amplitude values of each sample are quantized, meaning they are assigned digital values.
Higher bit depth allows for a greater range of values, providing more color or intensity
levels.
6. Image Storage:
The digital image is stored in a file format, such as JPEG, PNG, TIFF, or RAW. The choice of
format depends on factors like compression requirements, quality, and the intended use of the
image.
7. Color Representation:
For color images, color information is represented using color models such as RGB (Red,
Green, Blue) or CMYK (Cyan, Magenta, Yellow, Black). Each pixel is assigned values for the
color channels.
Cont...
8. Metadata:
Additional information, such as camera settings, date, and geolocation, may be embedded in the
image file as metadata.
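
As a small illustration of the storage, color representation, and metadata steps above, here is a hedged sketch using the Pillow library (an assumption; the slides do not prescribe any library, and "photo.jpg" is a placeholder path) to open a stored image and inspect it:

```python
from PIL import Image  # Pillow; one common library for reading stored images

# "photo.jpg" is a placeholder path for an image produced by the
# acquisition pipeline described above.
img = Image.open("photo.jpg")

print(img.size)    # (width, height) in pixels -- set by sampling
print(img.mode)    # e.g. "RGB" or "L" -- the color representation
print(img.format)  # e.g. "JPEG" -- the storage format

# Embedded metadata (camera settings, date, geolocation), if present:
exif = img.getexif()
for tag_id, value in exif.items():
    print(tag_id, value)
```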

Figure: Image acquisition
Cont...
Image sampling and quantization

The output of most sensors is a continuous voltage waveform whose amplitude and spatial
behavior are related to the physical phenomenon being sensed.
To create a digital image, we need to convert the continuous sensed data into digital form.
This involves two processes: sampling and quantization.
The basic idea behind sampling and quantization is illustrated in the following figure.
Cont...
An image may be continuous with respect to the x and y- coordinates, and also in amplitude.
To convert it to digital form, we have to sample the function in both coordinates and in amplitude.
1. Digitizing the coordinate values is called sampling.
2. Digitizing the amplitude(brightness) values is called quantization.

Image sampling is the process of converting a continuous-tone image into a discrete set of
samples or pixels.
In a continuous-tone image, the intensity varies smoothly across the scene, but for digital
representation this needs to be discretized.
During quantization, some information may be lost, leading to quantization errors.
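
A minimal sketch of the two processes on a made-up one-dimensional continuous signal (the sine-based "scene", the sample counts, and the bit depths are all illustrative): sampling evaluates the signal at discrete positions, and quantization rounds each amplitude to one of 2**bits discrete levels, which is where quantization error comes from.

```python
import numpy as np

def acquire(n_samples: int, bits: int) -> np.ndarray:
    """Sample a continuous 1-D 'scene' and quantize it to 2**bits levels."""
    # Sampling: evaluate the continuous signal at discrete positions.
    x = np.linspace(0.0, 1.0, n_samples)
    continuous = 0.5 + 0.5 * np.sin(2 * np.pi * x)     # amplitudes in [0, 1]

    # Quantization: map each amplitude to the nearest of 2**bits levels.
    levels = 2 ** bits
    digital = np.round(continuous * (levels - 1)).astype(np.uint8)
    return digital

coarse = acquire(n_samples=8,  bits=2)   # few samples, only 4 gray levels
fine   = acquire(n_samples=64, bits=8)   # more samples, 256 gray levels
print(coarse)   # visibly coarse: only the values 0..3 at 8 positions
```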
Cont...
Representation of different image types
Images can come in various types and formats, each with its own characteristics and use cases.
Some common image representation types:
1. Raster Images (Bitmap Images): Composed of a grid of pixels, where each pixel contains
information about its color and brightness.
Examples include JPEG, PNG, GIF, and TIFF.
2. Vector Images: Created using mathematical equations to define shapes, lines, and colors,
allowing for infinite scalability without loss of quality.
Examples include SVG, AI, and EPS.
3. RAW Images: Uncompressed and unprocessed files captured directly from digital cameras,
containing all data captured by the camera's sensor. Examples include proprietary formats like
CR2 (Canon) and NEF (Nikon).
Cont...
4. DICOM Images: Used in medical imaging to store X-rays, MRIs, and CT scans, containing
image data along with metadata such as patient information and imaging parameters.
5. Depth Maps: Grayscale images representing the depth or distance of objects in a scene,
commonly used in computer graphics for 3D rendering and depth-based effects.
6. Thumbnail Images: Small, low-resolution versions of larger images used for quick previews
or thumbnails in applications and websites.
7. Mosaic Images: Composed of smaller images (tiles) arranged to form a larger image, often
used for artistic or decorative purposes.
8. Icon Images: Small, simplified images used to represent applications, files, or functions,
commonly seen in computer interfaces and websites.
Cont...
9. Binary Images:
Representation: Binary images have only two possible pixel values (0 or 1), denoting
background and object.
Characteristics: Simplest form of image representation.
Examples: Used in thresholding operations.
Use Cases: Simple object detection.
A binary image is referred to as a 1 bit/pixel image because it takes only 1 binary digit to
represent each pixel.
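
A minimal sketch of producing a binary (1 bit/pixel) image by thresholding a grayscale array, as mentioned above; the pixel values and the threshold of 128 are arbitrary choices for the example.

```python
import numpy as np

# Grayscale input (values illustrative), 0 = black, 255 = white.
gray = np.array([[ 12, 200,  45],
                 [180,  90, 240],
                 [ 30, 220,  60]], dtype=np.uint8)

threshold = 128                                # arbitrary cut-off
binary = (gray > threshold).astype(np.uint8)   # object = 1, background = 0

print(binary)
# [[0 1 0]
#  [1 0 1]
#  [0 1 0]]
```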
Mathematical tools used in digital image processing
Mathematical tools play a crucial role in digital image processing, providing the foundation for
various operations and algorithms.
Linear Algebra:
Matrix Operations: Images are often represented as matrices, and linear algebra operations such
as matrix addition, subtraction, multiplication, and inversion are fundamental.
Calculus:
Derivatives and Gradients: Used in edge detection and feature extraction.
Integration: Applied in areas such as image smoothing and reconstruction.
Statistics:
Mean and Standard Deviation: Basic statistical measures used in image analysis.
Histogram Analysis: Understanding pixel intensity distributions in an image.
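
A small sketch (with illustrative values) of the statistical measures named above: the mean and standard deviation of pixel intensities, and a histogram counting how many pixels fall into each intensity bin.

```python
import numpy as np

img = np.array([[ 10,  10, 200],
                [200,  50,  50],
                [ 10, 200,  50]], dtype=np.uint8)   # illustrative values

print(img.mean())   # average intensity (~86.7)
print(img.std())    # spread of intensities around the mean

# Histogram: pixel counts per intensity bin (4 bins over the 8-bit range).
counts, bin_edges = np.histogram(img, bins=4, range=(0, 256))
print(counts)       # [6 0 0 3] -- most pixels are dark
```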
Cont...
Geometry:
Transformation Matrices: Translation, rotation, scaling, and shearing operations are expressed
using matrices.
Projective Geometry: Used in perspective transformations.
Signal Processing:
Fourier Transform: Used for frequency domain analysis, filtering, and compression.
Wavelet Transform: Useful for multi-resolution analysis and compression.
Partial Differential Equations (PDEs):
Heat Equation: Used in image smoothing and diffusion processes.
Laplace Equation: Applied in edge detection and image sharpening.
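
Picking up the transformation-matrix entry above, a minimal sketch of how rotation and translation can be written as 3x3 matrices in homogeneous coordinates and combined by matrix multiplication; the angle, offsets, and point are made up. When warping an image, the same matrices are applied to every pixel coordinate.

```python
import numpy as np

# Rotate a point 90 degrees about the origin, then translate it by (tx, ty),
# using 3x3 homogeneous-coordinate matrices.
theta = np.pi / 2
rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
tx, ty = 5, 2
translate = np.array([[1, 0, tx],
                      [0, 1, ty],
                      [0, 0,  1]])

point = np.array([1, 0, 1])          # (x, y) = (1, 0) in homogeneous form
print(translate @ rotate @ point)    # -> approximately [5, 3, 1]
```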
Cont...
Probability Theory:
Bayesian Methods: Used in image segmentation and classification.
Markov Random Fields (MRFs): Modeling spatial dependencies in images.
Numerical Methods:
Interpolation and Approximation: Used for resizing and reconstruction.
Optimization Techniques: Applied in image registration and parameter estimation.
Graph Theory:
Graph-based Segmentation: Representing pixels as nodes and edges as relationships.
Connected Component Analysis: Identifying connected regions in an image.
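
Picking up the interpolation entry above, a minimal sketch of nearest-neighbour interpolation, the simplest resizing scheme; the input array and target size are illustrative.

```python
import numpy as np

def resize_nearest(img: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Resize by nearest-neighbour interpolation: each output pixel copies
    the closest input pixel."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output col
    return img[rows[:, None], cols]

small = np.array([[0, 255],
                  [255, 0]], dtype=np.uint8)
print(resize_nearest(small, 4, 4))   # each original pixel becomes a 2x2 block
```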
Cont...
Group Theory:
Symmetry Operations: Studied in the context of pattern recognition and shape analysis.
Morphological Operations:
Dilation and Erosion: Used in morphological image processing for shape analysis.
Information Theory:
Entropy: Measures uncertainty and is used in image compression algorithms.
Machine Learning:
Classification Algorithms: Applied in image recognition tasks.
Clustering Algorithms: Used in segmentation.
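
Picking up the dilation and erosion entry above, a minimal NumPy sketch of binary morphology with a 3x3 square structuring element (a simple hand-rolled version written for clarity, not a library routine):

```python
import numpy as np

def dilate(img: np.ndarray) -> np.ndarray:
    """Binary dilation with a 3x3 square structuring element."""
    padded = np.pad(img, 1, mode="constant", constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].max()   # any 1 in the window
    return out

def erode(img: np.ndarray) -> np.ndarray:
    """Binary erosion with a 3x3 square structuring element."""
    padded = np.pad(img, 1, mode="constant", constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].min()   # all 1s required
    return out

shape = np.zeros((5, 5), dtype=np.uint8)
shape[2, 2] = 1                      # a single foreground pixel
print(dilate(shape))                 # the pixel grows into a 3x3 block
print(erode(dilate(shape)))          # erosion shrinks it back to the centre
```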
Cont...
Differential Equations:
Diffusion Equations: Applied in image smoothing and denoising.
Algebraic Geometry:
Algebraic Curves and Surfaces: Studied in the context of object recognition and shape analysis.
Question?
End!
Quiz (5%) + 1.5
1. What are the definitions of CV and IP, and how do they differ? (2 pts)
2. Explain the concepts of sampling and quantization. (2 pts)
3. Define raster images and binary images. (1 pt)
