U5 6 Introduction To Image Processing Computer Vision
Image Processing & Computer Vision
M Yogi Reddy
Assistant Professor
CSE Department
School Of Technology
GITAM (Deemed to be) University
Hyderabad
PAGE 1
Agenda
PAGE 2
PAGE 3
Introduction to Image processing
Image processing is a method of performing operations on an image in order to obtain an enhanced image or to extract useful information from it.
It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image.
Image processing basically includes the following three steps:
1. Importing the image via image acquisition tools;
2. Analyzing and manipulating the image;
3. Output in which result can be altered image or report
that is based on image analysis.
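These three steps can be sketched in a few lines of Python. This is only an illustrative sketch: it assumes the OpenCV library (cv2) is installed and that a file named input.jpg exists in the working directory.

```python
import cv2

# 1. Importing the image (image acquisition)
image = cv2.imread("input.jpg")

# 2. Analyzing and manipulating the image (here: convert to gray, then smooth)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Output: an altered image written to disk (a report could be produced instead)
cv2.imwrite("output.jpg", smoothed)
```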
PAGE 4
What is image acquisition?
Image acquisition is the process of retrieving an image from a source, usually a hardware device such as a camera or scanner, so that it can be passed on for further processing.
PAGE 5
PAGE 6
Computer Vision
Computer vision aims to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs.
It is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images.
PAGE 7
Computer Vision
PAGE 8
PAGE 9
An image is defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the value of f at any point (x, y) is called the intensity or gray level of the image at that point.
PAGE 10
The given figure is an example of a digital image that you are now viewing on your computer screen.
In reality, this image is nothing but a two-dimensional array of numbers ranging between 0 and 255.
There are two types of methods used for image processing, namely:
1. Analogue image processing
2. Digital image processing
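To illustrate the "two-dimensional array of numbers" view, the sketch below loads an image in grayscale and inspects its pixel values. It assumes NumPy and Pillow are installed and that a file photo.png exists (the filename is illustrative).

```python
import numpy as np
from PIL import Image

img = Image.open("photo.png").convert("L")   # "L" = 8-bit grayscale
arr = np.array(img)

print(arr.shape)            # (rows, columns)
print(arr.dtype)            # uint8 -> values lie in 0..255
print(arr.min(), arr.max())
print(arr[0, 0])            # intensity of the top-left pixel
```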
PAGE 11
Analog image processing is applied to analog signals, and it processes only two-dimensional signals. The images are manipulated by electrical signals. Examples of analog images are television images, photographs, paintings, and medical images.
PAGE 12
Types of images
PAGE 13
Gray-scale images
Each pixel value of a gray-scale image normally ranges from 0 (black) to 255 (white).
PAGE 14
Color images
In color images, each pixel has a particular color; that color is described by the amount of red, green, and blue in it.
Each of these components has a range of 0 to 255.
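A small sketch of this idea, again assuming Pillow, NumPy, and an illustrative file photo.png, reads a color image and inspects the red, green, and blue components of one pixel.

```python
import numpy as np
from PIL import Image

rgb = np.array(Image.open("photo.png").convert("RGB"))  # shape: (rows, cols, 3)
r, g, b = rgb[10, 20]      # channel values of the pixel at row 10, column 20
print(r, g, b)             # each component lies in the range 0..255
```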
PAGE 15
Image Noise
Noise represents unwanted information in an image.
Image noise is a random variation of brightness or color information in a captured image.
It is a degradation of the image signal caused by external sources.
A noisy image can be modeled as
A(x, y) = B(x, y) + H(x, y)
where,
A(x, y) = function of the noisy image
H(x, y) = function of the image noise
B(x, y) = function of the original image
PAGE 16
Sources of Image Noise
PAGE 17
Types of image noise
The following slides cover photon (shot) noise, Gaussian noise, salt-and-pepper (impulse) noise, and speckle noise.
PAGE 18
Photon/shot/Poisson Noise
This is a type of noise connected to the uncertainty associated with
the measurement of light.
Photon noise arises when the number of photons sensed by the camera's sensor is not sufficient to provide meaningful information about the scene.
This noise occurs mostly in poor or low-lighting conditions.
PAGE 19
Thermal/ Johnson-Nyquist/Gaussian Noise
Gaussian noise is evenly distributed over the signal.
This means that each pixel in the noisy image is the sum of the true pixel value and a random, Gaussian-distributed noise value.
A random Gaussian function is added to the image function to generate this noise.
The noise is independent of the pixel intensity at each point.
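A minimal sketch of adding such noise, assuming NumPy and a grayscale uint8 image already loaded into an array; the array name img and the sigma value are illustrative choices.

```python
import numpy as np

def add_gaussian_noise(img, sigma=20.0):
    noise = np.random.normal(0.0, sigma, img.shape)   # independent of pixel intensity
    noisy = img.astype(np.float64) + noise            # true value + Gaussian noise value
    return np.clip(noisy, 0, 255).astype(np.uint8)
```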
PAGE 20
Impulse/salt and Pepper Noise
Salt noise: salt noise is added to an image by setting random pixels to the bright value (255) all over the image.
Pepper noise: pepper noise is added to an image by setting random pixels to the dark value (0) all over the image.
Salt-and-pepper noise: salt-and-pepper noise is added to an image by adding both random bright (255) and random dark (0) pixels all over the image.
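A sketch of salt-and-pepper noise under the same assumptions (NumPy, a grayscale uint8 array img); the noise amount is an illustrative parameter.

```python
import numpy as np

def add_salt_and_pepper(img, amount=0.05):
    noisy = img.copy()
    mask = np.random.rand(*img.shape)
    noisy[mask < amount / 2] = 255        # salt: random bright pixels (255)
    noisy[mask > 1 - amount / 2] = 0      # pepper: random dark pixels (0)
    return noisy
```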
PAGE 21
Speckle Noise
Speckle noise can be generated by multiplying random values with the pixel values of an image.
P = I + n * I, where P is the speckle-noise image, I is the input image, and n is uniform noise with mean 0 and variance v.
Speckle is a granular noise that inherently exists in an image and degrades its
quality.
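A sketch of the P = I + n * I model, assuming NumPy and a grayscale uint8 array img; the variance value is illustrative.

```python
import numpy as np

def add_speckle_noise(img, var=0.05):
    I = img.astype(np.float64) / 255.0
    half_width = np.sqrt(3 * var)                        # uniform noise n with mean 0, variance var
    n = np.random.uniform(-half_width, half_width, I.shape)
    P = I + n * I                                        # multiplicative (speckle) degradation
    return np.clip(P * 255, 0, 255).astype(np.uint8)
```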
PAGE 22
Removal of Noise from Images
Image de-noising is a very important task in image processing for the analysis of images.
One goal in image restoration is to remove the noise from the image in
such a way that the original image is discernible.
Image de-noising is often used in photography or publishing, where an image has been degraded but needs to be improved before it can be printed.
There are two types of noise removal approaches
1. Linear Filtering
2. Non Linear Filtering
PAGE 23
Linear Filtering:
Linear filters are used to remove certain types of noise.
These filters remove noise by convolving the original image with a mask. However, linear filters also tend to blur sharp edges and destroy lines and other fine details of the image.
Non-Linear Filtering:
A non-linear filter is a filter whose output is not a linear function of its inputs.
The general idea in non-linear image filtering is that, instead of using the spatial mask in a convolution process, the mask is used to obtain the neighboring pixel values, and an ordering mechanism then produces the output pixel.
PAGE 24
Different types of linear and non-linear filters:
Mean Filter
Median Filter
Adaptive Filter
Gaussian Filter
Wiener Filter
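A short sketch contrasting a linear (mean) filter with a non-linear (median) filter, assuming SciPy is installed; a random array stands in for a real noisy image in this example.

```python
import numpy as np
from scipy import ndimage

# A random array stands in for a noisy grayscale image in this sketch.
noisy = np.random.randint(0, 256, (100, 100)).astype(np.uint8)

mean_filtered = ndimage.uniform_filter(noisy, size=3)     # linear: 3x3 averaging mask (convolution)
gauss_filtered = ndimage.gaussian_filter(noisy, sigma=1)  # linear: Gaussian-weighted mask
median_filtered = ndimage.median_filter(noisy, size=3)    # non-linear: orders the 3x3 neighborhood
```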
PAGE 25
Image Enhancement
• Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen details, or brighten an image.
Two common enhancement techniques are histogram equalization and noise removal.
PAGE 26
Image enhancement algorithms include de-blurring, filtering, and
contrast methods
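A minimal sketch of one contrast method, histogram equalization, assuming OpenCV is installed and a grayscale image file dark.png exists (the filename is illustrative).

```python
import cv2

gray = cv2.imread("dark.png", cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(gray)        # spreads intensities over the full 0..255 range
cv2.imwrite("equalized.png", equalized)
```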
PAGE 27
Color Enhancement
PAGE 28
Color enhancement can be done by adjusting the red/green/blue values for a pixel as follows (see the sketch after this list):
1. Find the maximum of the three input values.
2. Scale the RGB values downward by raising each value's fraction of this maximum to a power.
3. Multiply the result by the maximum of the three input values.
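A per-pixel sketch of these three steps, assuming NumPy and an RGB image array rgb (uint8); the exponent is an illustrative choice.

```python
import numpy as np

def enhance_color(rgb, power=0.8):
    rgb = rgb.astype(np.float64)
    max_val = rgb.max(axis=2, keepdims=True)   # 1. maximum of R, G, B for each pixel
    max_val[max_val == 0] = 1.0                # avoid division by zero for black pixels
    fraction = (rgb / max_val) ** power        # 2. power of each value's fraction of the maximum
    enhanced = fraction * max_val              # 3. multiply the result by the per-pixel maximum
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```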
PAGE 30
Some of the practical applications of image segmentation are:
Object detection
Pedestrian detection
Face detection
Brake light detection
Locate objects in satellite images (roads, forests, crops, etc.)
Recognition Tasks
Face recognition
Fingerprint recognition
Iris recognition
Traffic control systems
Video surveillance
Video object co-segmentation and action localization
PAGE 31
Edge Detection
Edge detection is the process of finding the set of pixels that represent the boundary of disjoint
regions in an image.
Edge detection is important for image segmentation, since many image processing algorithms must first identify the objects in an image and then process them.
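A brief sketch of edge detection using OpenCV's Canny detector (one common choice among several), assuming OpenCV is installed and a file scene.jpg exists; the two thresholds are illustrative.

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)         # pixels on strong-gradient boundaries become white
cv2.imwrite("edges.png", edges)
```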
PAGE 32
PAGE 33
Optical Character Recognition
PAGE 34
WHAT IS OCR TECHNOLOGY?
OCR technology deals with the problem of recognizing all kinds of different
characters.
Both handwritten and printed characters can be recognized and converted into a
machine-readable, digital data format.
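A minimal sketch of OCR in Python using pytesseract, a wrapper around the open-source Tesseract engine; it assumes both are installed and that document.png (an illustrative filename) contains printed text.

```python
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("document.png"))
print(text)     # the recognized characters as a machine-readable string
```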
PAGE 35
Three basic steps of optical character recognition (OCR)
PAGE 36
Step 1: Image Pre-Processing in OCR
OCR software often pre-processes images to improve the chances of successful recognition.
The aim of image pre-processing is to improve the actual image data: unwanted distortions are suppressed and specific image features are enhanced.
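A sketch of typical pre-processing steps (grayscale conversion, small-distortion removal, binarization), assuming OpenCV is installed and an illustrative scanned page scan.png exists.

```python
import cv2

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
denoised = cv2.medianBlur(gray, 3)                      # suppress small distortions
_, binary = cv2.threshold(denoised, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # enhance text/background contrast
cv2.imwrite("preprocessed.png", binary)
```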
PAGE 38
Feature Detection
In computer vision and image processing, feature detection includes methods for computing abstractions of image information and making local decisions at every image point about whether an image feature of a given type is present at that point.
The resulting features will be subsets of the image domain, often in the form of
isolated points, continuous curves or connected regions.
Definition of Feature:
a feature is typically defined as an "interesting" part of an image, and features are
used as a starting point for many computer vision algorithms.
PAGE 39
Main Component of Feature Detection
Description: The local appearance around each feature point is described in some
way that is (ideally) invariant under changes in illumination, translation, scale,
and in-plane rotation. We typically end up with a descriptor vector for each
feature point.
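A sketch of detection plus description using ORB in OpenCV (one of several possible detectors), assuming OpenCV is installed and a file scene.jpg exists.

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()                                  # detector plus binary descriptor
keypoints, descriptors = orb.detectAndCompute(gray, None)

print(len(keypoints))            # number of detected feature points
if descriptors is not None:
    print(descriptors.shape)     # one descriptor vector per feature point
```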
PAGE 40
Types of Image Features:
1. Edges
Edges are points where there is a boundary (or an edge) between two image regions.
In general, an edge can be of almost arbitrary shape, and may include junctions. In
practice, edges are usually defined as sets of points in the image which have a strong
gradient magnitude.
Locally, edges have a one-dimensional structure.
PAGE 42
Recognition
Image recognition is the ability of a system or software to identify objects, people,
places, and actions in images.
It uses machine vision technologies with artificial intelligence and trained algorithms
to recognize images through a camera system.
While human and animal brains recognize objects with ease, computers have difficulty with the task. Software for image recognition requires deep machine learning; performance is currently best with convolutional neural network (CNN) models.
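A minimal, purely illustrative sketch of a convolutional neural network for image recognition in PyTorch; the layer sizes, the 32x32 RGB input, and the 10 classes are assumptions made for the example, not part of any particular system.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                   # extract visual features
        return self.classifier(x.flatten(1))   # map features to class scores

scores = TinyCNN()(torch.randn(1, 3, 32, 32))  # one dummy image -> 10 class scores
```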
PAGE 43
Google, Facebook, Microsoft, and Apple are among the many companies
that are investing significant resources and research into image
recognition and related applications.
PAGE 44
Save Trees
And
Save Power
PAGE 45