
DIGITAL SOCIETY SCHOOL

Emma Beauxis-Aussalet

18-07-2019

INTRODUCTION
TO COMPUTER
VISION
INTRODUCTION TO CV

WHAT IS
COMPUTER VISION?

3
HAVE YOU EVER USED IT?

Most probably…

Source: https://cs.brown.edu/courses/csci1430/lectures/2019Spring_01_Introduction.pdf
DEFINITION

“Computer vision is concerned with
the automatic extraction, analysis & understanding of useful information
from a single image or a sequence of images.
It involves the development of a theoretical and algorithmic basis
to achieve automatic visual understanding.”

Attributed to The British Machine Vision Association and Society for Pattern Recognition

5
MULTIDISCIPLINARY

[Diagram: computer vision at the crossroads of related fields — psychology, machine learning, optics, computational photography, neuroscience, robotics, human understanding, and computer representations — fed by devices such as cameras, scanners, and sensors.]
6
SUB-DOMAINS

‣ Image (pre-)processing deals with the low-level features of images.


‣ Feature detection provides refined representations of images.
‣ Segmentation partitions an image into its constituent parts.
‣ 3D reconstruction creates 3D models of objects from 2D images.
‣ Object recognition labels what appears in images.
‣ Motion analysis deals with moving objects in videos.
7
INTRODUCTION TO CV

IMAGE
(PRE-)PROCESSING

8
PIXEL (picture element)

9
PIXEL (picture element)

255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255

255 255 255 255 255 255 255 255 255 255 0 0 0 0 0 0 0 0 255 255 255 255 255 255 255 255 255 255 255 255

255 255 255 255 255 255 0 0 0 0 180 180 180 180 180 180 180 180 0 0 0 0 255 255 255 255 255 255 255 255

255 255 255 255 0 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 0 0 255 255 255 255 255 255

255 255 255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 0 255 255 255 255 255

255 255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 0 255 255 255 255

255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 0 255 255 255

255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 0 255 255 255

0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 0 255 255

0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 0 255 255

0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 0 255 255

0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 90 0 255 255

0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 90 0 255 255

255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 90 0 255 255 255

255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 90 90 0 255 255 255

255 255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 180 90 90 90 90 0 255 255 255 255

255 255 255 0 180 180 180 180 180 180 180 180 180 180 180 180 180 90 90 90 90 90 90 90 90 0 255 255 255 255

255 255 255 255 0 0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0 255 255 255 255

255 255 255 255 255 255 0 0 0 0 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0 255 255 255 255

255 255 255 255 255 255 255 255 255 255 0 0 0 0 0 0 0 0 0 0 0 90 90 90 90 90 0 255 255 255

255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 0 0 90 90 90 90 0 0 255

255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 0 0 90 90 90 90 0

255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 0 0 0 0 255

255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255

10
PIXEL (picture element)

3 colors…

12
PIXEL (picture element)

3 colors… …3 channels

Source: Reinhard Klette, Concise Computer Vision, Springer

13
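To make this concrete, here is a minimal sketch (assuming Python with NumPy and OpenCV installed; the filename is hypothetical) showing that an image is simply an array of pixel values, with one 2D array per color channel:

import cv2

# Hypothetical input file; OpenCV loads images as NumPy arrays of 0-255 values.
img = cv2.imread("example.jpg")                 # shape: (height, width, 3)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # shape: (height, width)

print(gray[0, 0])           # value of a single pixel, e.g. 255 for white
b, g, r = cv2.split(img)    # the 3 channels (OpenCV stores them in BGR order)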
(PRE-)PROCESSING

Image pre-processing is the direct manipulation of pixel values.
A variety of operations are possible:

‣ BRIGHTNESS, CONTRAST

‣ HISTOGRAM EQUALIZATION

‣ COLOR NORMALIZATION

‣ FILTERING

14
Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf
BRIGHTNESS, CONTRAST

Add X to all pixel values… to increase brightness.

Multiply all pixel values by X… to increase contrast.

Source: http://mccormickml.com/2013/05/09/hog-person-detector-tutorial/ 15
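As a sketch of these two operations (assuming OpenCV; cv2.convertScaleAbs computes pixel * alpha + beta and clips the result to the 0-255 range; the filename is hypothetical):

import cv2

img = cv2.imread("example.jpg")  # hypothetical input image

# Increase brightness: add a constant X to all pixel values.
brighter = cv2.convertScaleAbs(img, alpha=1.0, beta=50)

# Increase contrast: multiply all pixel values by a constant X.
more_contrast = cv2.convertScaleAbs(img, alpha=1.5, beta=0)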
BRIGHTNESS, CONTRAST
[Histogram: #pixels per pixel value (0 to 255); shifting the distribution toward higher values means more brightness, toward lower values less brightness.]

Source: Reinhard Klette, Concise Computer Vision, Springer 16


HISTOGRAM EQUALIZATION

Source: Reinhard Klette, Concise Computer Vision, Springer 17


HISTOGRAM EQUALIZATION

Source: https://en.wikipedia.org/wiki/Histogram_equalization 18
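A minimal sketch with OpenCV (assuming a grayscale input; filename hypothetical):

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
equalized = cv2.equalizeHist(gray)  # spreads the pixel values over the full 0-255 range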
COLOR NORMALIZATION

Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf 19
FILTERING

Linear filters recompute each pixel as a weighted combination of its neighbours.

Source: https://cs.brown.edu/courses/csci1430/lectures/2019Spring_01_Introduction.pdf
20
FILTERING

21
FILTERING

180 90 0

180 90 0

180 90 0

0 x 180 + 0 x 90 + 0 x 0
+ 0 x 180 + 0 x 90 + 1 x 0
+ 0 x 180 + 0 x 90 + 0 x 0
=0

22
FILTERING

180 90 0

180 0 0

180 90 0

0 x 180 + 0 x 90 + 0 x 0
+ 0 x 180 + 0 x 90 + 1 x 0
+ 0 x 180 + 0 x 90 + 0 x 0
=0

23
FILTERING

90 0 255

90 0 255

90 0 255

0 x 90 + 0 x 0 + 0 x 255
+ 0 x 90 + 0 x 0 + 1 x 255
+ 0 x 90 + 0 x 0 + 0 x 255
= 255

24
FILTERING

90 0 255

90 255 255

90 0 255

0 x 90 + 0 x 0 + 0 x 255
+ 0 x 90 + 0 x 0 + 1 x 255
+ 0 x 90 + 0 x 0 + 0 x 255
= 255

25
FILTERING

180 90 0 255

180 0 255 255

180 90 0 255

26
FILTERING

What filter would leave the image unchanged?

27
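The same computations can be sketched with OpenCV's cv2.filter2D (assuming NumPy and OpenCV; filename hypothetical). The first kernel reproduces the worked example above, where each output pixel takes the value of its right neighbour; the second is the identity kernel, which answers the question: it leaves the image unchanged.

import cv2
import numpy as np

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Kernel from the worked example: the output pixel copies its right neighbour.
shift_left = np.array([[0, 0, 0],
                       [0, 0, 1],
                       [0, 0, 0]], dtype=np.float32)

# Identity kernel: the output pixel keeps its own value.
identity = np.array([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]], dtype=np.float32)

shifted = cv2.filter2D(gray, -1, shift_left)
unchanged = cv2.filter2D(gray, -1, identity)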
FILTERING

Source: https://cs.brown.edu/courses/csci1430/lectures/2019Spring_01_Introduction.pdf
28
FILTERING

Source: https://cs.brown.edu/courses/csci1430/lectures/2019Spring_01_Introduction.pdf
29
FILTERING

Source: https://en.wikipedia.org/wiki/Kernel_%28image_processing%29
30
INTRODUCTION TO CV

FEATURE DETECTION
INTEREST POINTS

Interest points include edges, corners, blobs, patches, ridges, textures.

They are primarily used to match images.

Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf 32
EDGE

Edge detection identifies points where brightness changes sharply (formally called discontinuities).

Source: https://en.wikipedia.org/wiki/Edge_detection
33
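One widely used edge detector is the Canny detector; a minimal sketch with OpenCV (the two thresholds on gradient magnitude are illustrative, not tuned values; filename hypothetical):

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(gray, 100, 200)  # binary image: white where brightness changes sharply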
EDGE
[Examples: some edges are easy to detect; others are more ambiguous.]
34
DOG (DIFFERENCE OF GAUSSIAN)

180 90 0

180 90 0

180 90 0

35
DOG (DIFFERENCE OF GAUSSIAN)

180 180 0 255 255

180 180 90 0 255

180 180 90 0 255

180 180 90 0 255

180 90 90 0 255

36
DOG (DIFFERENCE OF GAUSSIAN)

Applied to the 5×5 patch above, the DoG value at the centre pixel is the 3×3 Gaussian minus the 5×5 Gaussian:

1/16 * ( 1 * 180 + 2 * 90 + 1 * 0
+ 2 * 180 + 4 * 90 + 2 * 0
+ 1 * 180 + 2 * 90 + 1 * 0 ) = 90

-

1/256 * ( 1 * 180 + 4 * 180 + 6 * 0 + 4 * 255 + 1 * 255
+ 4 * 180 + 16 * 180 + 24 * 90 + 16 * 0 + 4 * 255
+ 6 * 180 + 24 * 180 + 36 * 90 + 24 * 0 + 6 * 255
+ 4 * 180 + 16 * 180 + 24 * 90 + 16 * 0 + 4 * 255
+ 1 * 180 + 4 * 90 + 6 * 90 + 4 * 0 + 1 * 255 ) ≈ 106.4

= 90 - 106.4 ≈ -16.4

37
DOG (DIFFERENCE OF GAUSSIAN)

Source: https://en.wikipedia.org/wiki/Difference_of_Gaussians
38
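A Difference of Gaussians can be sketched as the difference of two Gaussian-blurred copies of the image (assuming OpenCV; the sigma values are illustrative; filename hypothetical):

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE).astype("float32")

blur_fine = cv2.GaussianBlur(gray, (0, 0), sigmaX=1.0)    # small-scale blur
blur_coarse = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)  # larger-scale blur
dog = blur_fine - blur_coarse   # strong responses near edges and blobs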
HOG (HISTOGRAM OF ORIENTED GRADIENTS)

Compute the gradient vector of each pixel…

Source: http://mccormickml.com/2013/05/07/gradient-vectors/ 39
HOG (HISTOGRAM OF ORIENTED GRADIENTS)

Compute the gradient vector of each pixel…


and the gradient histograms of patches.

Source: https://www.learnopencv.com/histogram-of-oriented-gradients/ 40
HOG (HISTOGRAM OF ORIENTED GRADIENTS)

Compute the gradient vector of each pixel…


and the gradient histograms of patches.

Source: Reinhard Klette, Concise Computer Vision, Springer 44
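A minimal sketch of the idea (assuming NumPy and OpenCV; real HOG implementations add cell and block normalization on top of this): compute a gradient vector per pixel, then an orientation histogram per patch, weighted by gradient magnitude.

import cv2
import numpy as np

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE).astype("float32")  # hypothetical input

# Per-pixel gradient vectors (x and y derivatives via Sobel filters).
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
magnitude = np.sqrt(gx ** 2 + gy ** 2)
orientation = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation, 0-180 degrees

# Orientation histogram of one 8x8 patch (9 bins), weighted by gradient magnitude.
hist, _ = np.histogram(orientation[:8, :8], bins=9, range=(0, 180),
                       weights=magnitude[:8, :8])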


SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

Scale-invariant means that SIFT descriptions of interest points do not change with:

‣ Scale

‣ Rotation

‣ Illumination

‣ Viewpoint (affine distortions)

45
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/

46
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

The algorithm is quite elaborate and combines different techniques.

Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
47
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

First it uses DoG with different scales and blurs…

Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
48
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

Then it finds maxima & minima in DoG results across scale…

Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
49
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

Then it finds maxima & minima in DoG results across scale…

…and selects the most interesting keypoints.

Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
50
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)
Find the bin with the max #pixels,
then select all bins with #pixels > 80% of that max.

Then it computes a HOG…
…and identifies interest points for each peak orientation.
Peak orientations are used to compare rotated images.
Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
51
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

Finally, HOGs are encoded around the keypoints.

+ Subtract key orientation (for orientation invariance)

+ Normalize gradient magnitude (for illumination invariance)

Source: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
52
SIFT (SCALE-INVARIANT FEATURE TRANSFORM)

Source: Tony Lindeberg. Image Matching Using Generalized Scale-Space Interest Points. https://people.kth.se/~tony/papers/Lin15-JMIV.pdf 53
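In practice the whole SIFT pipeline is available off the shelf; a minimal sketch with OpenCV (SIFT ships with recent OpenCV releases; in older versions it lived in the contrib package; filename hypothetical):

import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)  # one 128-dimensional descriptor per keypoint
print(len(keypoints), descriptors.shape)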
INTRODUCTION TO CV

SEGMENTATION
WHAT IS SEGMENTATION?

Segmentation is finding consistent regions in an image.

Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf 55
SEGMENTATION TECHNIQUES

Segmentation techniques use statistics on pixel distribution within regions.

Source: https://www.slideshare.net/lalitxp/image-texture-analysis?next_slideshow=1 56
WITHOUT CLASSIFICATION

Many techniques are based on comparing adjacent regions.

Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf 57
UNSUPERVISED CLASSIFICATION
Clustering & neural networks* can be used to group similar patches.
* e.g., self-organizing maps (SOM) or constraint satisfaction neural networks (CSNN)

Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf 58
UNSUPERVISED CLASSIFICATION
Clustering & neural networks* can be used to group similar patches.
* e.g., self-organizing maps (SOM) or constraint satisfaction neural networks (CSNN)

Source: https://www.slideshare.net/lalitxp/image-texture-analysis?next_slideshow=1 60
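As a simple illustration of the clustering route (a sketch assuming NumPy and OpenCV; the number of clusters is arbitrary; filename hypothetical), pixels can be grouped by color with k-means and the cluster index used as a segment label:

import cv2
import numpy as np

img = cv2.imread("example.jpg")                  # hypothetical input
pixels = img.reshape(-1, 3).astype(np.float32)   # one row of 3 color values per pixel

k = 4
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)

segments = labels.reshape(img.shape[:2])         # cluster index per pixel
segmented = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)  # recolored image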
SUPERVISED CLASSIFICATION

Many classification techniques can be applied to labelled data.

Classification
Model

Source: http://people.ee.ethz.ch/~cattin/MIA-ETH/pdf/MIA-08-SupervisedSegmentation.pdf 61
PROBABILISTIC SEGMENTATION

Also called soft segmentation, it assigns each pixel a probability of belonging to a segment.

Good for hair and fuzzy patches.

Algorithms can be based on K-means clustering
or Gaussian Mixture Models…

Source: https://www.slideshare.net/lalitxp/image-texture-analysis?next_slideshow=1 62
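A sketch of the Gaussian Mixture route (assuming OpenCV and scikit-learn; the number of components is arbitrary; filename hypothetical): each pixel gets a probability of belonging to each segment instead of a hard label.

import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

img = cv2.imread("example.jpg")                  # hypothetical input
pixels = img.reshape(-1, 3).astype(np.float64)

gmm = GaussianMixture(n_components=3).fit(pixels)
probs = gmm.predict_proba(pixels)                # shape: (num_pixels, 3)
soft_masks = probs.reshape(img.shape[0], img.shape[1], 3)  # one probability map per segment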
RESOURCES
Title of Resource:
Resource Type: Website
Description: A definition and discussion of the principles of grouping.
Title of Resource:
Resource Type: Video (4:26)
Description: A Khan Academy lecture on bottom-up versus top-down processing.
Title of Resource: Thresholding
Resource Type: Website
Description: Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images.
Title of Resource: Otsu Thresholding
Resource Type: Website
Description: Converting a greyscale image to monochrome is a common image processing task. Otsu's method, named after its inventor Nobuyuki Otsu, is one of many binarization algorithms. This page describes how the
algorithm works and provides a Java implementation, which can be easily ported to other languages.
Title of Resource:
Resource Type: PDF
Description: This paper reviews the main approaches of partitioning an image into regions by using gray values in order to reach a correct interpretation of the image.
Title of Resource: Cluster Analysis
Resource Type: Website
Description: The definition of cluster analysis.
Title of Resource:
Resource Type: Video (1:21)
Description: The video shows my K-Means Clustering algorithm running on an image, iterating from K=1 to K=80 clusters, with the last 3 frames being the original image.
Title of Resource:
Resource Type: Video (2:58)
Description: This video accompanies the article "Semantic Soft Segmentation" by Yağız Aksoy, Tae-Hyun Oh, Sylvain Paris, Marc Pollefeys and Wojciech Matusik. The article and additional resources are available on the
project webpage: https://yaksoy.github.io/sss/.
Title of Resource:
Resource Type: Website
Description: How graph cuts are applied in the field of computer vision.
Title of Resource:
Resource Type: Video (2:11)
Description: This video is part of the Udacity course "Introduction to Computer Vision"

63
INTRODUCTION TO CV

OBJECT RECOGNITION
CLASSIFICATION TASKS

65
TYPICAL PIPELINE

From segmentation or object detection (classifying segments or pixels as background or not).

Source: https://www.learnopencv.com/image-recognition-and-object-detection-part1/ 66
TYPICAL PIPELINE

This pipeline is also used for segmentation or object detection… but these can use unsupervised classification.
Source: https://www.learnopencv.com/image-recognition-and-object-detection-part1/ 67
DIFFICULTIES

‣ Occlusions can hide key parts of objects.
‣ Viewpoints may not show all key parts of objects.
‣ Illumination can greatly change object appearances (e.g., shadows).
‣ Shape-shifting can be expected of many objects (e.g., flexible bodies).
‣ Intra-class variations can be considerable (e.g., chairs can be very different from one another).
68
TECHNIQUES

‣ Neural Networks & Deep Learning are the most common techniques.
‣ Support Vector Machines (SVM) learn boundaries between classes
in the chosen feature space (see the sketch below).
‣ Boosting combines weak (extremely simple) classifiers.
‣ Bag-of-Words learns typical parts of objects and their spatial distribution.
‣ Active Appearance Models specify typical key points, their spatial
distribution, and their variability.

Source: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf
70
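For instance, a minimal sketch of the SVM route mentioned above (assuming scikit-image and scikit-learn are available; the images and labels here are random placeholders, not real data): extract HOG features, then learn class boundaries in that feature space.

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(image):
    # HOG descriptor of a grayscale image (9 orientation bins, 8x8-pixel cells).
    return hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Placeholder training set: 20 random 64x64 "images" with made-up class labels.
train_images = [np.random.rand(64, 64) for _ in range(20)]
train_labels = [0] * 10 + [1] * 10

features = [hog_features(im) for im in train_images]
clf = SVC(kernel="linear").fit(features, train_labels)

prediction = clf.predict([hog_features(np.random.rand(64, 64))])  # classify a new image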
RESOURCES

Title of Resource:
Resource Type: Video
Description: Find meaning in visual data on Watson Studio! Analyze images for scenes, objects, faces, and other custom content. Take advantage of pretrained models, or create your
own custom classifier. Develop smart applications that analyze the visual content of images or video frames to understand what is happening in a scene. Then, deploy your model for
use in applications.
Title of Resource:
Resource Type: Website Article
Description: Compares and contrasts Computer Vision and Visual Recognition to clearly explain their differences.
Title of Resource: Hough Transform
Resource Type: Website
Description: Defines a Hough Transform.
Title of Resource:
Resource Type: Video
Description: This video explains how the Hough Transform works to detect lines in images. First, apply an edge detection algorithm to the input image, and then compute the Hough
Transform to find the combination of Rho and Theta values in which there are more occurrences of lines.
Title of Resource:
Resource Type: Journal Article (PDF)
Description: This paper introduces a new scene-centric database called Places with over 7 million labeled pictures of scenes. It then proposes new methods to compare the density
and diversity of image datasets and shows that Places is as dense as other scene datasets and has more diversity.
Title of Resource:
Resource Type: Website
Description: Defines k-nearest neighbors algorithm.

71
DIGITAL SOCIETY SCHOOL

Emma Beauxis-Aussalet

18-07-2019

QUESTIONS /
DISCUSSIONS
YOU
TRANSFORM
SOCIETY
BY DESIGN
DIGITAL SOCIETY SCHOOL
WHERE CHANGE TAKES
SHAPE
