
Computer Vision & Image Processing

Outline
Unit 1: Introduction to computer vision and image processing (4 hrs)
Unit 2: Digital image fundamentals (4 hrs)
Unit 3: Spatial Domain Image Processing (4 hrs)
Unit 4: Frequency Domain Image Processing (4 hrs)
Unit 5: Image Restoration and Reconstruction (4 hrs)
Unit 6: Image Compression (4 hrs)
Unit 7: Color Image Processing (4 hrs)
Unit 8: Object Recognition (4 hrs)

Course description

• Methods in image processing (IP) and computer vision (CV), with an emphasis on state-of-the-art techniques

Required textbook
• Computer Vision: Algorithms and Applications by Richard Szeliski

Instructor Contact
• Name: Endeshaw Admassie
• Office Location: Building 43, Room 114
• Office hours: by appointment
• Email: [email protected]

Evaluation and grading
• Assignment (15%)
• Midterm (20%)
• Lab (15%) (subject to change)
• Final (50%)

Tools and environment
• MATLAB
• Python (OpenCV)

CV Motto
• ‘If we want machines to think, we need to teach them to see.’

Chapter 1: What is CV?
• Computer vision is a field of artificial intelligence (AI) and computer science that focuses on enabling computers to gain a visual understanding of the world.
• A field of study that seeks to develop techniques to help computers “see” and understand the content of digital images such as photographs and videos.
• Also called:
  • Image understanding
  • Image analysis
  • Machine vision
• The phenomenon that makes machines such as computers or mobile phones see their surroundings is known as computer vision.

You guys can see, right?
• Computers “want” to see as well
• Well, CV is all about developing algorithms to allow computers to “see”:
  • Make computers understand images and videos
  • Automate tasks that the human visual system can do
  • Enable computers to gain high-level understanding from digital images or videos
• The goal of computer vision is to write computer programs that can interpret images.
• Vision is NOT image processing:
  • Seeing is not the same as measuring the properties of the image.

How do machines see?

• Represent colors with numbers
• Image segmentation
• Finding corners
• Finding textures
• Make a guess
• Finally, see the bigger picture

Why study CV & IP?
• CV primarily deals with the interpretation and understanding of visual data, such as images and videos, by enabling computers to extract meaningful information from them. IP, on the other hand, focuses on manipulating and enhancing images to improve their quality, extract specific information, or prepare them for further analysis.
• Images, videos, and movies are everywhere
• And you know that an image is worth 1000 words!
• Images and movies have become ubiquitous in both production and consumption
• Apps that manipulate images and movies are becoming core tools for extracting information from imagery:
  • Surveillance
  • Building 3D representations
  • Motion capture

Contd.
• The physics of imaging
• Camera
  • What a camera does
  • How to tell where the camera was
• Light
  • Understanding the effects of light in images is crucial for photographers, computer vision algorithms, and image processing techniques
  • How to measure light
  • What light does at surfaces
  • How the brightness values we see in cameras are determined
• Color
  • The underlying mechanisms of color
  • How to describe and measure it
• Texture, edges
• Geometry

Alas! We’re awash in images!

Application areas of CV and IP:
Space exploration

Medical
• MRI stands for magnetic resonance imaging, a kind of scan that can produce detailed pictures of parts of the body, including the brain.
• It is an imaging technology that produces detailed three-dimensional anatomical images.
  • Disease detection
  • Diagnosis
  • Treatment monitoring
• Autonomous vehicles, surveillance & security
• AR
• Robotics
• Biometrics & security
• HCI
• Environmental monitoring…

Vision-based biometrics

“How the Afghan Girl was Identified by Her Iris Patterns”

Optical character recognition (OCR)
Technology to convert scanned documents to text
• If you have a scanner, it probably came with OCR software

License plate readers

Application: Panoramic Mosaics
(Evening Sky in Debre Tabor)

Panoramic Mosaics

Applications: Special Effects

Application: military

• Shoot this, not that
• Detect enemy soldiers and vehicles
• Missile guidance
• Battlefield awareness (send the missile to an area based on locally acquired image data, boom!)

Applications

• Augmented Reality (AR), fingerprint recognition, iris scanning, forensics
• 3D reconstruction
• Face recognition, face finding
• Character recognition, deblurring
• Medical imaging
• Autonomous vehicles, unmanned aerial vehicles
• Robotics
• Automated surveillance, movie post-processing

Applications: Image restoration (inpainting)

Very multidisciplinary

• Computer graphics
• HCI

IMAGE

Digital images
• Electronic snapshots taken of a scene or scanned from documents, such as photographs, manuscripts, printed texts, and artworks.
• Pixel resolution: the resolution of a digital image refers to the number of pixels it contains.
• Color representation: digital images can represent colors using different color models.
• Image formats: digital images are typically stored in specific file formats.
• Image compression: images can be compressed, lossily or losslessly, to reduce file size.
• Image editing: digital images can be edited and manipulated using image-editing software.
• Metadata: digital images can contain metadata, additional information about the image such as camera settings, date and time of capture, geolocation data, and author information.

Geometric primitives
• Points
• Lines
• Rays
• Planes
• Circles
• Polygons
• Shapes

Lighting
• Images cannot exist without light. To produce an image, the scene must be illuminated with one or more light sources.
• Two kinds of light source:
  • Point light source: originates at a single location in space (e.g., a small light bulb), potentially at infinity (e.g., the sun)
  • Area light source: a simple area light source, such as a fluorescent ceiling light fixture with a diffuser, can be modeled as a finite rectangular area emitting light equally in all directions

Chapter 2: Digital image files
• A) Additive colors: red, green, and blue can be mixed to produce cyan, magenta, yellow, and white.
• B) Subtractive colors: cyan, magenta, and yellow can be mixed to produce red, green, blue, and black.

Primary colors
• Colors that cannot be obtained by mixing any other colors in any proportions
• RGB
• The basic colors of light
• All other colors are made by mixing primary colors in suitable proportions
• The building blocks of all other colors in the spectrum

Composite colors (secondary)
• Colors produced by mixing any two primary colors of light
• Cyan, magenta, and yellow (CMY)

Complementary colors
• Two colors which give white light when mixed together
• Example
• Red and cyan are complementary colors

Color

Adding various RGB color
components together

Color models: RGB and CMYK
• RGB color model:
  • Applies to computers, televisions, and electronics
  • An additive model
  • Colors are created through light waves that are added together in particular combinations
• CMYK color model:
  • Applies to painting and printing
  • A subtractive model
  • Colors are created through absorbing wavelengths of visible light

Pixels

Pixel
• Picture Element
• The smallest unit of a digital image or graphic that can be displayed and represented on a digital display device
• Pixels are combined to form a complete image, video, text, or anything else visible on a computer display
• The smallest single component of a digital image
• Pixels store color information for the image; color values are vectors

“Picture element” at location (x, y) with value or color c

Resolution
• The number of pixels in an image: the pixel count
• Example: a monitor resolution of 1280 × 1024 means there are 1280 pixels from one side to the other and 1024 pixels from top to bottom.
  • 1280 × 1024 = 1,310,720 pixels ≈ 1.31 megapixels (mega = million)
• Resolution also refers to the amount of detail in an image.

Measurement of resolution
• Digital cameras: pixels
• Scanners: ppi (pixels per inch)
• Printers: dpi (dots per inch)

Image file formats
• JPEG (Joint Photographic Experts Group; also .jpg)
  • Images that have been compressed to store a lot of information in a small file
  • Most digital cameras store photos in JPEG format (to store more photos)
  • Lossy compression (some details are lost)
• GIF (Graphics Interchange Format)
  • Lossless compression
  • Good for the web, not for printing, as it has a limited color range
  • Good for animation, not for photography
• PNG (Portable Network Graphics)
  • For web images; lossless compression
• TIFF (Tagged Image File Format)
  • Very large file sizes; typically uncompressed

Chapter 2: Digital Image
• Deals with developing a digital system that performs operations on a digital image

What is an image?
• Defined by the mathematical function f(x, y): a continuous function of two variables
• x and y are the horizontal and vertical coordinates
• The value of f(x, y) at any point gives the pixel value at that point of the image
• An image is a 2D array of numbers ranging between 0 and 255 (sometimes higher)
• 1 byte = 8 bits (hence 2^8 − 1 = 255)
• An image is an array, or matrix, of pixels arranged in rows and columns

Pixel representation: grayscale & color
• In a grayscale image, each pixel has a value between 0 and 255, where 0 corresponds to “black” and 255 corresponds to “white”.
• The values between 0 and 255 are varying shades of gray; values closer to 0 are darker and values closer to 255 are lighter.

Cont…
• Color pixels are normally represented in the RGB color space: one value for the red component, one for green, and one for blue. Other color spaces exist as well.
• Each of the 3 colors is represented by an integer in the range 0 to 255, which indicates how “much” of the color there is.
  • White: (255, 255, 255) (fill up each of the RGB buckets completely)
  • Black: (0, 0, 0) (black is the absence of color)
  • Pure red: (255, 0, 0)
  • Yellow: (255, 255, 0)
• An image is represented as a grid of pixels.
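A minimal NumPy sketch of this representation (the tiny 2×2 image and its values are illustrative):

  import numpy as np

  # A 2x2 RGB image as a grid of pixels (values 0-255, dtype uint8).
  img = np.array([
      [[255, 255, 255], [255, 0, 0]],   # white, pure red
      [[0, 0, 0], [255, 255, 0]],       # black, yellow
  ], dtype=np.uint8)

  print(img.shape)  # (2, 2, 3): rows, columns, RGB channels
  print(img[0, 1])  # [255 0 0] -- the red pixel at row 0, column 1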

Images as matrices

A brief review of matrices and vectors
• Matrices
• Vectors

Matrices
• An m × n (read “m by n”) matrix, denoted by A, is a rectangular array of entries or elements (numbers, or symbols representing numbers), typically enclosed by square brackets
• m is the number of horizontal rows
• n is the number of vertical columns in the array

Matrix operations: addition and subtraction
• Add or subtract the corresponding entries

Matrix multiplication
• We multiply and add the elements as follows: work across the 1st row of the first matrix, multiplying down the 1st column of the second matrix, element by element, and add the resulting products. The answer goes in position a11 (top left) of the answer matrix.
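A quick NumPy check of the row-by-column rule (the matrices are illustrative):

  import numpy as np

  A = np.array([[1, 2],
                [3, 4]])
  B = np.array([[5, 6],
                [7, 8]])

  # Entry (0, 0): 1st row of A times 1st column of B = 1*5 + 2*7 = 19
  print(A @ B)  # [[19 22]
                #  [43 50]]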

Example: fill the empty entries.

Transpose of a matrix
• The transpose of an m × n matrix A, denoted Aᵀ, is the n × m matrix obtained by interchanging the rows and columns of A:
  • write the rows of A as the columns of Aᵀ
  • write the columns of A as the rows of Aᵀ

Determinant of a matrix
• The determinant of a matrix A is denoted det(A), det A, or |A|.
• 2 × 2 matrix: det [[a, b], [c, d]] = ad − bc
• 3 × 3 matrix: det [[a, b, c], [d, e, f], [g, h, i]] = a(ei − fh) − b(di − fg) + c(dh − eg)

Pattern

Inverse of a matrix (reading assignment)

Inverse of a matrix
• The inverse of a general n×n matrix A can be found using A⁻¹ = (1 / det(A)) · adj(A), defined only when det(A) ≠ 0.
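These operations map directly onto NumPy; a small illustrative sketch:

  import numpy as np

  A = np.array([[2.0, 1.0],
                [5.0, 3.0]])

  print(A.T)                # transpose: rows become columns
  print(np.linalg.det(A))   # determinant: 2*3 - 1*5 = 1.0
  A_inv = np.linalg.inv(A)  # inverse, defined only when det(A) != 0
  print(A @ A_inv)          # identity matrix (up to rounding)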

A color image
• A color image can be represented as a percentage of the RGB “intensities” combined. It can also be represented as a set of HSL values.
• What is HSL?
  • Hue: represents the type of color, or its position on the color wheel. It is measured in degrees, from 0 to 360: 0 degrees corresponds to red, 120 degrees to green, and 240 degrees to blue; the values in between represent intermediate colors.
  • Saturation: the intensity or purity of a color; it determines how much gray is mixed with the hue. Saturation is a percentage, with 0% being completely gray (unsaturated) and 100% being fully saturated (vibrant).
  • Lightness: represents the brightness or darkness of a color.
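Python's standard colorsys module illustrates the mapping; note it uses the HLS ordering and 0-1 ranges rather than degrees and percentages:

  import colorsys

  # colorsys expects RGB in the 0-1 range and returns (hue, lightness, saturation).
  h, l, s = colorsys.rgb_to_hls(255 / 255, 0 / 255, 0 / 255)  # pure red
  print(h * 360, s * 100, l * 100)  # 0.0 degrees hue, 100% saturation, 50% lightness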

HSL: a human way of understanding colors
• Hue is the actual color.
• Hue is the color of a point, as found along the spectrum.

Saturation
• Indicates the amount of grey in a color
• Refers to the strength of a color
• The intensity of a color (a hue)

Luminance(brightness)
• Refers to how much white (or black) is mixed in the color
• Refers to the reflective brightness of colors.
• A measure of how bright or dark a hue is.

Chapter 3: Operations on Images
• Image enhancement
• Image segmentation
• Edge detection
• Image classification
• Image stitching
• Image recognition

Image enhancement
• The process of adjusting digital images so that the results are more suitable for display or further processing.
• Image enhancement refers to highlighting certain information in an image, as well as weakening or removing unnecessary information according to specific needs: for example, eliminating noise, revealing blurred details, and adjusting levels to highlight features of an image.
• Applications of image enhancement include:
  • Noise removal
  • Smoothing, sharpening, and brightening an image
  • Deblurring
  • Color correction
  • Filtering, and correcting poor contrast and unbalanced colors
• In short: the procedure of improving the quality and information content of the original data before processing.

Methods of image enhancement
Image enhancement techniques can be divided into two broad categories: spatial domain and frequency domain.
A. Histogram equalization
• What is an image histogram?
  • A histogram is a graphical representation of the intensity distribution of an image.
  • It shows how many times each intensity value occurs in the image.
  • A graphical representation of the tonal distribution in an image.
  • It plots the number of pixels in the image (vertical axis) against brightness or tonal value (horizontal axis).

Histogram

Histogram of a grayscale image

Cont.…
• A method in image processing for contrast adjustment using the image’s histogram.
• The process of adjusting intensity values in an image.
• The process of making the histogram (approximately) flat.
• A method that improves the contrast in an image by stretching out the intensity range.
• Problem is:
• Noise in dark regions can be amplified and become more visible.
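A minimal OpenCV sketch of histogram computation and equalization (the filenames are placeholders):

  import cv2

  gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

  hist = cv2.calcHist([gray], [0], None, [256], [0, 256])  # intensity histogram
  equalized = cv2.equalizeHist(gray)                       # stretch/flatten the histogram

  cv2.imwrite('equalized.png', equalized)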

After equalization

Image Segmentation
• Image segmentation is the process of dividing an image into multiple meaningful and semantically coherent regions or segments. The goal is to partition the image into regions that correspond to objects, boundaries, or other significant structures. It is a fundamental task in computer vision and plays a crucial role in applications such as object recognition, scene understanding, medical imaging, and autonomous driving.
• There are different approaches to image segmentation, including:
  • Thresholding: a simple and commonly used technique where a threshold value is applied to the image to separate regions based on intensity or color. Pixels above the threshold are assigned to one segment, while pixels below the threshold belong to another. It is effective when there is a clear distinction between foreground and background based on intensity or color.

• Cont…
  • Edge-based segmentation: edge-based techniques aim to detect and localize boundaries or edges between different regions in an image. Methods such as the Canny edge detector, Sobel operator, or Laplacian of Gaussian (LoG) can be used to identify edges; once the edges are detected, further processing can be applied to segment the image based on them.
  • Region-based segmentation: region-based methods group pixels or regions together based on criteria such as color similarity, texture, or pixel connectivity. Examples include the popular watershed algorithm, which treats intensity values as a topographic surface and uses flooding to separate regions, and the mean-shift algorithm, which iteratively shifts pixels to local modes of density in feature space.

• Cont.…
  • Clustering-based segmentation: clustering techniques such as k-means or Gaussian mixture models can be used to group pixels based on feature similarity. Each cluster represents a distinct segment in the image. Clustering can be performed in various feature spaces, including color space, texture space, or a combination of several.
  • Deep-learning-based segmentation: with the advent of deep learning, convolutional neural networks (CNNs) have demonstrated impressive performance in image segmentation tasks. Fully Convolutional Networks (FCNs) and U-Net architectures are widely used for pixel-wise semantic segmentation. These models learn end-to-end mappings from input images to segmentation maps by leveraging large labeled datasets.
  • Image segmentation can be challenging, especially with complex scenes, occlusions, or variations in lighting and viewpoint. Often, a combination of segmentation techniques or the integration of multiple algorithms is used to achieve accurate and robust results. Evaluation metrics such as Intersection over Union (IoU) or the Dice coefficient are commonly used to assess the quality of segmentation results.

B. Edge detection
• An edge is the boundary between an object and the background; it also indicates the boundary between overlapping objects.
• Edge detection is the process of locating the edge pixels; edge enhancement is the process of increasing the contrast between the edges and the background so that the edges become more visible.
• If the edges in an image can be identified accurately:
  • All the objects can be located
  • Basic properties such as area, perimeter, and shape can be measured

Contd.
• Edge detection refers to the process of identifying and locating sharp discontinuities in an image. The discontinuities are abrupt changes in pixel intensity which characterize boundaries of objects in a scene.
• In an image, an edge is a curve that follows a path of rapid change in image intensity. Edges are often associated with the boundaries of objects in a scene; edge detection is used to identify them.
• Qualitatively, edges occur at boundaries between regions of different color, intensity, or texture.

Edge detection
Edges characterize boundaries and are therefore of fundamental importance in image processing.
Edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next.

Some Types of edges
• Step
• Ramp
• Spike
• Roof

Edge models based on their intensity profiles
• Step edges: involve a transition between two intensity levels occurring ideally over the distance of 1 pixel
• Ramp edges: a gradual transition between intensity levels, spread over several pixels
• Roof edges: determined by the thickness and sharpness of the line

Ramp vs step vs Roof edges

Stages in edge detection
• Filtering
  • To remove noise (undesirable effects and irregularities)
• Differentiation
  • Highlight the locations in the image where intensity changes are significant
• Detection
  • Localize points where the intensity changes are significant
  • Detect peaks among the edge points

Common edge detection algorithms
• Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision. Common edge detection algorithms include:
  • Sobel
  • Canny
  • Prewitt
  • Roberts
  • Fuzzy-logic methods
  • Laplacian: uses second-order derivatives and is therefore sensitive to noise. To reduce this sensitivity, Gaussian smoothing is performed on the image before applying the method.

Canny method
• In 1986, John Canny defined a set of goals for an edge detector and described an optimal method for achieving them.
• Canny specified three issues that an edge detector must address. In plain English, these are:
  • Error rate: the edge detector should respond only to edges, and should find all of them; no edges should be missed.
  • Localization: the distance between the edge pixels found by the detector and the actual edge should be as small as possible.
  • Response: the edge detector should not identify multiple edge pixels where only a single edge exists.

Sobel and Canny methods
[Figures: image segmentation using the Sobel method vs. the Canny method]

In Matlab
• BW = edge(I, method) detects edges in image I using the edge-detection algorithm specified by method.
• method can be 'sobel' or 'canny', among others.

Canny's edge detector criteria
It is a multi-stage algorithm used to detect a wide range of edges:
• Convert the image to grayscale
• Reduce noise, since derivative-based edge detection is sensitive to noise
• Calculate the gradient, which helps identify edge intensity and direction
• Non-maximum suppression, to thin the edges of the image
• Double threshold, to identify the strong, weak, and irrelevant pixels in the image
• Hysteresis edge tracking
The criteria:
• Good detection
  • There should be a low probability of failing to mark edge points, and a low probability of falsely marking non-edge points
• Good localization
  • The points marked as edges by the operator should be as close as possible to the center of the true edge
• Only one response to a single edge
  • When two nearby operators respond to the same edge, one of them must be considered a false edge
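For comparison with the MATLAB edge call above, a minimal OpenCV sketch (filename and thresholds are illustrative):

  import cv2

  img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
  blurred = cv2.GaussianBlur(img, (5, 5), 1.4)  # noise reduction before differentiation
  edges = cv2.Canny(blurred, 50, 150)           # low/high hysteresis thresholds
  cv2.imwrite('edges.png', edges)
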
Edge tracing
• The process of following the edges, usually collecting the edge pixels into a list.

Image Noise
• Random speckles on a smooth surface that can seriously affect the quality of the image.
• The level of noise usually increases with the length of exposure, the physical temperature, and the sensitivity setting of the camera.
• Aberrant pixels: noise means that pixels within the picture present different intensity values rather than the correct pixel values.

Image classification
• A fundamental task in vision recognition that aims to understand and categorize an image as a whole under a specific label. Unlike object detection, which involves classification and localization of multiple objects within an image, image classification typically pertains to single-object images.
• The process of image classification typically involves the following steps:
  • Dataset preparation: collecting and preparing a labeled dataset for training and evaluation. This dataset consists of a set of images along with their corresponding class labels.
  • Feature extraction: extracting meaningful features from the images to represent their visual content. Features can be extracted using various techniques, such as handcrafted feature descriptors (e.g., Histogram of Oriented Gradients, Scale-Invariant Feature Transform) or learned features from deep convolutional neural networks (CNNs) through transfer learning.

Cont.…
• Model training: training a classification model using the labeled training dataset. Commonly used models include support vector machines (SVMs), decision trees, random forests, and deep learning models like CNNs. During training, the model learns to map the extracted features to the corresponding class labels.
• Model evaluation: evaluating the trained model on a separate labeled test dataset to measure its performance. Common evaluation metrics include accuracy, precision, recall, and F1 score. The evaluation helps assess the model's ability to generalize and make accurate predictions on unseen images.
• Model deployment and prediction: once the model is trained and evaluated, it can be deployed to make predictions on new, unseen images. The trained model takes an input image, applies the feature extraction process, and then uses the learned mapping to predict the class label of the image.
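A toy scikit-learn sketch of this pipeline on raw pixel features (the data, shapes, and class count are placeholders):

  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.svm import SVC
  from sklearn.metrics import accuracy_score

  # Placeholder data: 200 images of 32x32 pixels, flattened into feature vectors.
  X = np.random.rand(200, 32 * 32)
  y = np.random.randint(0, 3, size=200)  # 3 hypothetical classes

  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

  model = SVC(kernel='linear')  # train the classifier
  model.fit(X_train, y_train)

  print(accuracy_score(y_test, model.predict(X_test)))  # evaluate on unseen data
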
Cont…
• With recent advancements in deep learning, convolutional neural networks (CNNs) have become the dominant approach for image classification. Deep learning models have achieved remarkable performance on various image classification benchmarks, often surpassing traditional methods.
• Training a high-performing image classification model requires a large and diverse labeled dataset, appropriate feature extraction techniques, and careful selection of the model architecture. Regularization techniques like dropout and data augmentation can also help improve generalization and performance.

Image stitching
• The image stitching process typically involves these steps:
  • Image acquisition: capture a series of overlapping images of the scene using a camera or other imaging device. Sufficient overlap between images is important to allow accurate alignment and blending during stitching.
  • Feature extraction: extract distinctive features from each input image. These can be keypoints, corners, or other salient points that can be reliably detected across images. Common feature detection algorithms include the Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF).

• Cont…
  • Feature matching: match corresponding features between pairs of images. Various techniques can be used, such as nearest-neighbor matching, RANSAC (Random Sample Consensus), or other robust estimation methods.
  • Image alignment: estimate the transformations (e.g., translations, rotations, scale changes) needed to align the images properly. This step applies geometric transformations to the images based on the feature correspondences. Techniques like homography estimation or affine transformations are commonly used.
  • Image blending: blend the aligned images together to create a seamless transition between the overlapping regions. This step removes visible seams or artifacts caused by misalignments or inconsistencies in lighting and exposure. Techniques such as feathering, gradient-based blending, or multi-band blending can be used.
  • Final image adjustment: perform any necessary adjustments to the stitched image, such as color correction, exposure balancing, or perspective correction, to achieve a visually pleasing and coherent result.
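A hedged OpenCV sketch of the matching and alignment steps (filenames are placeholders; SIFT requires a reasonably recent OpenCV build):

  import cv2
  import numpy as np

  img1 = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)
  img2 = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

  sift = cv2.SIFT_create()  # feature extraction
  kp1, des1 = sift.detectAndCompute(img1, None)
  kp2, des2 = sift.detectAndCompute(img2, None)

  matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)  # feature matching
  good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test

  src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
  dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
  H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # alignment via RANSAC

OpenCV also ships a high-level cv2.Stitcher that wraps all of these stages.
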
Chapter 4: Image noise
• The result of errors in the image acquisition process: pixel values that do not reflect the true intensities of the real scene.
• Random variation of brightness or color information in images, usually an aspect of electronic noise.
• Can be produced by the sensor and circuitry of a scanner or digital camera.
• An undesirable by-product of image capture that obscures the desired information.

Noise can occur
• During electronic transmission of image data
• During gathering data
• During scanning (damage to the film)
• When taking low-light photos or indoor dark scenes

Difficult to predict accurately?
• Noise cannot be predicted accurately because of its random nature, and it cannot even be measured accurately from a noisy image, since the noise's contribution to the grey levels can't be distinguished from the pixel data.
• However, noise can sometimes be characterized by its effect on the image, and is usually expressed as a probability distribution with a specific mean and standard deviation.

Types of noise (general)
• Shot noise
  • Randomness due to photons in the scene you are photographing, which are discrete and random.
  • Light emits and reflects off everything you can see, but not in a fixed pattern; graininess is the result.
• Digital noise
  • Randomness caused by your camera sensor and internal electronics, which introduce imperfections to an image.

Types: based on image acquisition and transmission
• Salt-and-pepper noise (impulse noise / spike noise / random noise / independent noise)
  • Randomly scattered white or black pixels over the image
  • An impulse type of noise: intensity spikes
  • Often due to errors in data transmission
  • Occurs because of sharp and sudden changes in the image signal
  • Caused by malfunctioning pixel elements in the camera sensor, faulty memory locations, or timing errors in the digitization process
To add salt-and-pepper noise to an image in MATLAB, use imnoise.
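A NumPy/OpenCV sketch of the same idea (the noise fractions and filename are illustrative):

  import cv2
  import numpy as np

  img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

  # Corrupt roughly 2% of pixels with pepper (0) and 2% with salt (255).
  noisy = img.copy()
  coords = np.random.rand(*img.shape)
  noisy[coords < 0.02] = 0
  noisy[coords > 0.98] = 255

  denoised = cv2.medianBlur(noisy, 3)  # median filtering suppresses impulse noise well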

Salt-and-pepper noise example

Contd.
• Poisson noise (shot noise)
  • The noise caused when the number of photons sensed by the sensor is not sufficient to provide detectable statistical information.
  • Randomness due to photons in the scene you are photographing, which are discrete and random.
  • Light emits and reflects off everything you can see, but not in a fixed pattern; graininess is the result.

Noise
• Signal-independent noise
  • A random set of grey levels, statistically independent of the image data, added to the pixels in the image to give the resulting noisy image.
  • This kind of noise occurs when an image is transmitted electronically from one place to another.
  • If A is a perfect image and N is the noise that occurs during transmission, then the final image B is: B = A + N

Contd.
• Signal-dependent noise
  • The level of the noise at each point in the image is a function of the grey level there.

Ways to remove or reduce noise in an image (in lab)
• Linear filtering
• Median filtering
• Adaptive filtering

Video noise (reading assignment)

Blurring and Deblurring

Deblurring
• Images can be distorted by blur, such as motion blur or blur resulting from an out-of-focus lens.

Image Deblurring
• The blurring, or degradation, of an image can be caused by many factors:
  • Movement during the image capture process, by the camera or, when long exposure times are used, by the subject
  • Out-of-focus optics, use of a wide-angle lens, atmospheric turbulence, or a short exposure time, which reduces the number of photons captured
  • Scattered light distortion in confocal microscopy

Cont…
• A blurred or degraded image can be approximately described by the equation G = H·F + N, where:
  • G = the blurred image
  • H = the distortion operator, also called the PSF (point spread function); the PSF describes the degree to which an optical system blurs (spreads) a point of light
  • F = the original, true image
  • N = additive noise, introduced during image acquisition, that corrupts the image

Degradation Model

Assuming a linear system, we can model a blurred image by:

g(x, y) = f(x, y) * h(x, y) + n(x, y)

where * denotes convolution and h(x, y) is the PSF.

Deblurring
• The process of removing blurring effects from images, caused for example by defocus aberration or motion blur.
• Methods (covered in lab):
  • Wiener filter
  • Regularized filter
  • Blind deconvolution algorithm
Example (MATLAB), simulating motion blur with a PSF:
originalRGB = imread('peppers.png');
imshow(originalRGB)
h = fspecial('motion', 50, 45);          % motion-blur PSF: length 50, angle 45 degrees
filteredRGB = imfilter(originalRGB, h);  % convolve the image with the PSF
figure, imshow(filteredRGB)
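MATLAB's deconvwnr implements the Wiener filter; a minimal NumPy sketch of the same idea, assuming a constant noise-to-signal ratio K:

  import numpy as np

  def wiener_deblur(g, psf, K=0.01):
      # Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G
      H = np.fft.fft2(psf, s=g.shape)  # transfer function of the blur
      G = np.fft.fft2(g)
      F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
      return np.real(np.fft.ifft2(F_hat))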

Zooming
• Enlarging a picture such that the details in the image become more visible and clear

Methods of zooming (common ones)
• Pixel replication
• Zero order hold method
• Zooming K times

Pixel replication (nearest-neighbor interpolation)
• Just replicate the neighboring pixels.
• In this method we create new pixels from the already given pixels. Each pixel is replicated n times row-wise and column-wise, and you get a zoomed image.
• Example: if you have an image of 2 rows and 2 columns and you want to zoom it twice (2 times) using pixel replication, here is how it can be done.

Row-wise zooming
• When we zoom row-wise, we simply copy each row's pixels to the adjacent new cells.
• Each pixel is replicated twice in the rows.

Column-wise zooming

Example
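A NumPy sketch of replication for the 2×2 case described above:

  import numpy as np

  img = np.array([[1, 2],
                  [3, 4]])

  # Replicate each pixel twice along rows, then twice along columns.
  zoomed = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
  print(zoomed)
  # [[1 1 2 2]
  #  [1 1 2 2]
  #  [3 3 4 4]
  #  [3 3 4 4]]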

Advantage and disadvantage
• Advantage: very simple; you just have to copy the pixels and nothing else.
• Disadvantage: the image gets zoomed, but the output is very blurry. As the zooming factor increases, the image gets more and more blurred, eventually resulting in a fully blurred image.

Zero-order hold method
• Pick two adjacent elements from a row, add them, divide the result by two, and place the result between those two elements. We do this first row-wise and then column-wise.

K-times zooming
• First, take two adjacent pixels, as in zooming twice. Then subtract the smaller from the greater one; call this output D.
• Divide D by the zooming factor K. Add the result (D/K) to the smaller value and put it between those two values.
• Add D/K again to the value you just placed, and place the new value next to the previous one. Continue until you have placed k−1 values.
• Repeat the same step for all the rows and then the columns, and you get a zoomed image.
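A 1-D sketch of the idea, stepping each inserted value from one pixel toward its neighbor (a simplification of the description above):

  def k_times_zoom_row(row, k):
      # Insert k-1 linearly stepped values between each pair of adjacent pixels.
      out = [row[0]]
      for a, b in zip(row, row[1:]):
          step = (b - a) / k  # D / K, signed so the values step toward b
          out.extend(a + step * i for i in range(1, k))
          out.append(b)
      return out

  print(k_times_zoom_row([15, 30, 15], 3))
  # [15, 20.0, 25.0, 30, 25.0, 20.0, 15]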

Example: 3-times zooming

Transformation of an image
• Translation
• Rotation
• Scaling
• Affine transform
• Image pyramids

Matrix-Vector Product (Revisited)

A transformation that does nothing

Translation
• A translation moves a vector a certain distance in a certain direction.

Translate a vector by (X,Y,Z)

Example: translate v = (10, 10, 10, 1) by 10 units in the X direction
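A NumPy sketch using a 4×4 homogeneous translation matrix:

  import numpy as np

  v = np.array([10, 10, 10, 1])  # homogeneous coordinates (x, y, z, 1)

  T = np.array([[1, 0, 0, 10],   # translate 10 units along X
                [0, 1, 0,  0],
                [0, 0, 1,  0],
                [0, 0, 0,  1]])

  print(T @ v)  # [20 10 10  1]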

Scaling
• A scale transformation scales each of a vector's components by a (possibly different) scalar. It is commonly used to shrink or stretch a vector.

To scale a vector by 2

Reminder: Identity matrix

To scale a given vector by (Sx, Sy, Sz)
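A matching NumPy sketch (the scale factors are illustrative):

  import numpy as np

  v = np.array([1, 2, 3, 1])  # homogeneous coordinates

  S = np.diag([2, 2, 2, 1])   # Sx = Sy = Sz = 2
  print(S @ v)                # [2 4 6 1]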

Rotation of an image for an angle θ
Segmentation
• Partitioning images into segments.
• Image segmentation is the task of finding groups of pixels that “go together”.
• It involves dividing a visual input into segments to simplify image analysis. Segments represent objects or parts of objects, and comprise sets of pixels, or “super-pixels”.
• Divide an image into parts that have a strong correlation with objects or areas of the real world contained in the image.
• Used to understand what is in a given image at the pixel level.

• Regions (compact sets) represent spatial closeness naturally and are thus important building steps towards segmentation.
• Objects in a 2D image very often correspond to distinguishable regions.
• The object is everything that is of interest in the image (from the particular application's point of view); the rest of the image is background.
• Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects).
• The process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.

• The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application.

Segmentation
• A clustering of pixels of a similar category

Complete vs. partial segmentation
• Complete segmentation
  • Results in a set of disjoint regions corresponding uniquely to objects in the input image.
  • A complete segmentation of an image R is a finite set of regions R1, …, RS with Ri ∩ Rj = Ø for i ≠ j.
• Partial segmentation
  • The image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, or texture.

How does image segmentation work?
• By dividing or partitioning the image into various parts called segments.
• An image is a collection or set of different pixels; image segmentation groups together the pixels that have similar attributes.

Two types of image segmentation: semantic and instance

Types contd.
• Semantic segmentation
  • Classifies the pixels of an image into meaningful classes.
  • Detects objects within the input image, isolates them from the background, and groups them by class.
• Instance segmentation
  • Identifies the class of each individual object in the image.
  • Detects each individual object within a cluster of similar objects, drawing the boundary of each one.

Approaches
• What relationship must a given pixel have with respect to its neighbors and other pixels in the image in order for it to be assigned to one region or another?
• Basic approaches:
  1. Edge/boundary methods (discontinuity detection): based on detecting edges as a means of identifying the boundary between regions. This approach looks for sharp differences between groups of pixels; the algorithm searches for discontinuity.
  2. Region-based methods (similarity detection): assign pixels to a given region based on their degree of mutual similarity. They rely on detecting similar pixels in an image, based on a threshold, region growing, region spreading, and region merging.

Contd.
• Discontinuity: isolated points, lines, and edges of the image.
• Similarity: thresholding, region growing, region splitting and merging.

Basic properties/qualities used in image segmentation
• Color
  • The simplest and most obvious way of discriminating between objects and background
  • Example: segmentation based on greyscale
• Texture (patterns of grey)
• Motion
• Depth

Challenges in segmentation
• Noisy boundaries
  • Grouping pixels that belong to a category may not be accurate due to the fuzzy edges of an object. As a result, objects from different categories get clustered together.
• Cluttered scenes
  • With several objects in the image frame, it becomes harder to classify pixels correctly. With more clutter, the chance of false-positive classification also increases.

Basic applications of image segmentation
• Content-based image retrieval
• Remote sensing (e.g., looking for ships in the water)
• Medical imaging (e.g., blood vessels are roughly parallel)
• Object detection and recognition tasks
• Automatic traffic control systems and video surveillance, etc.

Image segmentation technique: region-based segmentation
• One simple way to segment different objects is to use their pixel values. An important point to note: the pixel values will be different for the objects and the image's background if there is a sharp contrast between them.
• In this case, we can set a threshold value. The pixel values falling below or above that threshold can be classified accordingly (as object or background). This technique is known as threshold segmentation.

Local vs Global threshold
• If we want to divide the image into two regions (object and background), we define a single threshold value. This is known as a global threshold.
• If we have multiple objects along with the background, we must define multiple thresholds. These are collectively known as local thresholds.

• Thresholding is the transformation of an input image f to an output (segmented) binary image g:
  g(i, j) = 1 for f(i, j) > T,
  g(i, j) = 0 for f(i, j) <= T,
  where T is the threshold, g(i, j) = 1 for image elements of objects, and g(i, j) = 0 for image elements of the background (or vice versa).

Algorithm: basic thresholding
  Search all pixels f(i, j) of the image f. A pixel g(i, j) of the segmented image is an object pixel if f(i, j) > T, and a background pixel otherwise.
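A direct NumPy sketch of this transformation (the array and T are illustrative):

  import numpy as np

  def basic_threshold(f, T):
      # Binary image: 1 where f > T (object), 0 otherwise (background).
      return (f > T).astype(np.uint8)

  f = np.array([[10, 200],
                [90, 140]])
  print(basic_threshold(f, 128))
  # [[0 1]
  #  [0 1]]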

Threshold methods
• Simple thresholding
• Otsu’s Binarization
• Adaptive thresholding

Simple thresholding (basic)
• This technique replaces each pixel in an image with either black or white.
• If the intensity of a pixel I(i, j) at position (i, j) is less than the threshold T, we replace it with black; if it is more, we replace it with white. This is a binary approach to thresholding.

Otsu’s Binarization
• Take an image whose histogram has two peaks (a bimodal image), one for the background and one for the foreground. According to Otsu's binarization, we can approximately take a value in the middle of those peaks as the threshold. Simply put, it automatically calculates a threshold value from the image histogram for a bimodal image.
• Otsu's binarization is widely used in document scanning, removing unwanted colors from a document, pattern recognition, etc.

Adaptive threshold
• What if an image has different background and foreground lighting conditions in different areas? We need an adaptive approach that can change the threshold for various components of the image.
• The algorithm divides the image into smaller portions and calculates a threshold for each portion of the image.
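An OpenCV sketch of both automatic methods (filename and block size are illustrative):

  import cv2

  gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

  # Otsu's method picks the threshold automatically from the histogram.
  t, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

  # Adaptive thresholding computes a local threshold per 11x11 neighborhood.
  adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)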

Other techniques [reading assignment]
• Edge detection segmentation
• Makes use of discontinuous local features of an image to detect edges and
hence define a boundary of the object.
• Segmentation based on clustering
• Divides the pixels of the image into homogeneous clusters.

Image classification
• The task of assigning a label to an image from a predefined set of categories.
• Identifying which class an input image belongs to.
• A method to classify images into their respective categories.

Example: categories = {cat, dog, panda}

More formally
• Given an input image of W × H pixels with three channels (red, green, and blue), our goal is to take the W × H × 3 = N pixel values and figure out how to correctly classify the contents of the image.

Terminologies in classification
• Dataset: collection of images
• Data point: each image in the dataset
• Training set: a subset to train a model
• Testing set: a subset to test the trained model
• data set used to evaluate the model developed from a training set

Example

Principles and steps
• Gather your dataset
  • Each category should be approximately uniform (the same number of examples per category)
  • Avoid class imbalance
• Split your dataset into a training set and a testing set
  • A training set is used by the classifier to “learn” what each category looks like by making predictions on the input data and correcting itself when predictions are wrong. After the classifier has been trained, we can evaluate its performance on a testing set.
• Make sure that your test set meets the following conditions:
  • It is large enough to yield statistically meaningful results
  • It is representative of the dataset as a whole; in other words, don't pick a test set with different characteristics than the training set
• Never train on test data
  • It is extremely important that the training set and testing set are independent of each other and do not overlap!

Common split sizes for training and testing sets

Challenges: factors of variation
• Viewpoint variation
  • An object can be oriented/rotated in multiple dimensions with respect to how it is photographed and captured
• Scale variation: small, large, zoomed, etc.
• Deformation: objects can be elastic and stretchable; they twist, bend, contort, and shrink
• Occlusion
  • Large parts of the object we want to classify may be hidden from view in the image
• Illumination: differences in lighting during capture
• Background clutter: the background closely resembles parts of the object
• Intra-class variation: think of the variations of chairs (at home, at school, etc.)

Concepts related to classification
• Machine learning
• Deep learning
• Neural networks (NN)
• Convolutional Neural Networks (CNN)
• Artificial Neural Network

Read about them

Machine learning (off-topic)
• Teaching computers/machines how to learn from data to make decisions or predictions.
• A field of study that gives computers the ability to learn without being explicitly programmed.
• Imbuing machines with knowledge without hard-coding it.
• Three types:
  • Supervised: task-driven (e.g., predict the next value)
  • Unsupervised: data-driven (e.g., identify clusters)
  • Reinforcement: learn from mistakes

Types (contd.)
• Supervised learning
  • Labeled data are used to train the algorithms.
  • Algorithms are trained using marked data, where the input and output are known.
• Unsupervised learning (self-taught learning)
  • Unlabeled data are used to train the algorithm.

Related Concepts you need to know
• Image classification
• Video noise
• NNs
• AI
• Text classification
• Deep learning
• Data science
• Machine learning
• Cognitive vision
• AR
• Image captioning
• Video segmentation
• Face recognition

Chapter 6: Image data compression
• Image compression is a useful way to compress or reduce the size of images to help manage storage space and accelerate data transmission. In simple words, image compression means converting an image file so that it takes up less space than the original.
• It reduces the file size in bytes without hurting image quality, so that images can be stored in a given amount of disk or memory space. Various image compression algorithms are available that compress an image to the smallest size possible while retaining quality, using lossy and lossless compression methods.
• An encoder performs the process of compressing an image. When the image size is reduced, some inessential technical data, such as colors or pixels, are removed to decrease the file size. The main idea is to cut out the “less critical” image data belonging to an image to make it small enough without compromising its visual aspects.
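A quick OpenCV illustration of the lossy/lossless distinction (filenames and quality settings are placeholders):

  import cv2

  img = cv2.imread('input.png')

  # Lossy: JPEG at quality 50 discards detail to shrink the file.
  cv2.imwrite('compressed.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, 50])

  # Lossless: PNG keeps every pixel value exact.
  cv2.imwrite('compressed.png', img, [cv2.IMWRITE_PNG_COMPRESSION, 9])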

Types of Image compression
• Lossy vs. lossless:
  • Lossy compression
  • Lossless compression

Data redundancy
