CSE 185 Introduction To Computer Vision: Local Invariant Features

This document discusses local invariant features for computer vision. It covers interest points, descriptors, and correspondence across views. Key topics include:
- Local invariant features allow image content to be transformed into feature coordinates that are invariant to transformations like translation, rotation, and scale.
- Good interest points for feature extraction should be repeatable, salient, efficient to compute, and local. Corner detection algorithms aim to find points that change significantly under small shifts.
- The Harris corner detector improves on the Moravec detector by considering shifts at all angles, using a continuous window function, and analyzing the full quadratic form of intensity changes rather than just the minimum.

CSE 185

Introduction to Computer Vision


Local Invariant Features
Local features
• Interest points
• Descriptors

• Reading: Chapter 4
Correspondence across views
• Correspondence: matching points, patches, edges, or regions across images
• Example: estimating the fundamental matrix that relates two views
Example: structure from motion
Applications
• Feature points are used for:
– Image alignment
– 3D reconstruction
– Motion tracking
– Robot navigation
– Indexing and database retrieval
– Object recognition
Interest points
• Note: “interest points” = “keypoints”, also
sometimes called “features”
• Many applications
– tracking: which points are good to track?
– recognition: find patches likely to tell us
something about object category
– 3D reconstruction: find correspondences
across different views
Interest points
• Suppose you have to click on some point, go away, and come back after I deform the image, and click on the same points again.
  – Which points would you choose?
[Figure: original image vs. deformed image]
Keypoint matching
1. Find a set of distinctive keypoints
2. Define a region around each keypoint
3. Extract and normalize the region content
4. Compute a local descriptor f_A, f_B from each normalized region
5. Match local descriptors: accept a match if d(f_A, f_B) < T
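As an illustration of step 5, here is a minimal matching sketch in NumPy. The function name, the descriptor arrays, and the threshold value T are illustrative assumptions, not part of the slides.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, T=0.5):
    """Match rows of desc_a (N_a x d) to rows of desc_b (N_b x d) by nearest neighbour."""
    # Pairwise Euclidean distances between all descriptors, shape (N_a, N_b)
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(dists.shape[0]):
        j = int(np.argmin(dists[i]))      # nearest neighbour of descriptor i in B
        if dists[i, j] < T:               # accept only if d(f_A, f_B) < T
            matches.append((i, j))
    return matches
```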
Goals for keypoints
• Detect points that are repeatable and distinctive

Key trade-offs
Detection of interest points
• More repeatable: robust detection, precise localization
• More points: robust to occlusion, works with less texture
Description of patches
• More distinctive: minimize wrong matches
• More flexible: robust to expected variations, maximize correct matches
Invariant local features
• Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters
Choosing interest points
Where would you tell your friend to meet you?
Feature extraction: Corners
Why extract features?
• Motivation: panorama stitching
– We have two images – how do we combine them?
Local features: main components
1) Detection: Identify the interest points
2) Description: Extract a vector feature descriptor surrounding each interest point, e.g. \(\mathbf{x}_1 = [x_1^{(1)}, \ldots, x_d^{(1)}]\) and \(\mathbf{x}_2 = [x_1^{(2)}, \ldots, x_d^{(2)}]\) for the two views
3) Matching: Determine correspondence between descriptors in two views
Characteristics of good features

• Repeatability
– The same feature can be found in several images despite geometric and photometric
transformations
• Saliency
– Each feature is distinctive
• Compactness and efficiency
– Fewer features than image pixels
• Locality
– A feature occupies a relatively small area of the image; robust to clutter and occlusion
Interest operator repeatability
• We want to detect (at least some of) the same points in both images
  – Otherwise there is no chance to find true matches!
• Yet we have to be able to run the detection procedure independently per image.
Descriptor distinctiveness
• We want to be able to reliably determine which point goes with which.
• Must provide some invariance to geometric and photometric differences between the two views.
Local features: main components
1) Detection: Identify the interest points
2) Description: Extract a vector feature descriptor surrounding each interest point.
3) Matching: Determine correspondence between descriptors in two views
Many detectors
Hessian & Harris [Beaudet ‘78], [Harris ‘88]
Laplacian, DoG [Lindeberg ‘98], [Lowe 1999]
Harris-/Hessian-Laplace [Mikolajczyk & Schmid ‘01]
Harris-/Hessian-Affine [Mikolajczyk & Schmid ‘04]
EBR and IBR [Tuytelaars & Van Gool ‘04]
MSER [Matas ‘02]
Salient Regions [Kadir & Brady ‘01]
Others…
Corner detection
• We should easily recognize the point by looking through a small window
• Shifting the window in any direction should give a large change in intensity
  – "flat" region: no change in all directions
  – "edge": no change along the edge direction
  – "corner": significant change in all directions
Finding corners
• Key property: in the region around a corner, the image gradient has two or more dominant directions
• Corners are repeatable and distinctive
Corner detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:

E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2

[Figures: an image I(x,y), the window w(x,y), and the resulting error surface E(u,v), e.g. the values E(3,2) and E(0,0)]
Corner detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:

E(u,v) = \sum_{x,y} \underbrace{w(x,y)}_{\text{window function}}\,[\underbrace{I(x+u, y+v)}_{\text{shifted intensity}} - \underbrace{I(x,y)}_{\text{intensity}}]^2

Window function w(x,y): either 1 inside the window and 0 outside, or a Gaussian
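To make the definition concrete, here is a minimal brute-force sketch of E(u,v) for one window location, assuming a grayscale float image and a binary square window; the function and parameter names are illustrative, not from the slides.

```python
import numpy as np

def error_surface(img, x0, y0, half=7, max_shift=5):
    """E(u,v) = sum over the window of [I(x+u, y+v) - I(x,y)]^2, binary window."""
    patch = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    shifts = np.arange(-max_shift, max_shift + 1)
    E = np.zeros((shifts.size, shifts.size))
    for i, v in enumerate(shifts):        # vertical shift v
        for j, u in enumerate(shifts):    # horizontal shift u
            shifted = img[y0 + v - half:y0 + v + half + 1,
                          x0 + u - half:x0 + u + half + 1]
            E[i, j] = np.sum((shifted - patch) ** 2)
    return E  # large in all directions near a corner, nearly flat in uniform regions
```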
Moravec corner detector
Test each pixel with the change of intensity for the shift [u,v]:

E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2
(binary window function; shifted intensity minus intensity, squared)

• Four shifts: (u,v) = (1,0), (1,1), (0,1), (-1,1)
• Look for local maxima in min{E}
• When does this idea fail?
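A minimal sketch of the Moravec response as described above (four shifts, keep min{E}, then look for local maxima); the window size, margins, and function name are my assumptions.

```python
import numpy as np

def moravec_response(img, half=2):
    shifts = [(1, 0), (1, 1), (0, 1), (-1, 1)]   # (u, v) as in the slide
    h, w = img.shape
    response = np.zeros_like(img, dtype=float)
    margin = half + 1
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            patch = img[y - half:y + half + 1, x - half:x + half + 1]
            Es = []
            for u, v in shifts:
                shifted = img[y + v - half:y + v + half + 1,
                              x + u - half:x + u + half + 1]
                Es.append(np.sum((shifted - patch) ** 2))
            response[y, x] = min(Es)     # only the minimum of E is kept
    return response  # corners are local maxima of this response
```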
Moravec corner detector
• In a region of uniform intensity, the nearby patches will look similar.
• On an edge, nearby patches in a direction perpendicular to the edge will look quite different, but nearby patches in a direction parallel to the edge will result in only a small change.
• On a feature with variation in all directions, none of the nearby patches will look similar.

E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2

Four shifts: (u,v) = (1,0), (1,1), (0,1), (-1,1); look for local maxima in min{E}
Problem of Moravec detector
• Only a discrete set of shifts (every 45 degrees) is considered
• Noisy response due to a binary window function
• Only the minimum of E is taken into account
The Harris corner detector (1988) solves these problems.

C. Harris and M. Stephens. "A Combined Corner and Edge Detector."
Proceedings of the 4th Alvey Vision Conference, pages 147-151.
Corner detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:

E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2

We want to find out how this function behaves for small shifts.
The local quadratic approximation of E(u,v) in the neighborhood of (0,0) is given by the second-order Taylor expansion:

E(u,v) \approx E(0,0) + [u\;v] \begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix} + \frac{1}{2}\,[u\;v] \begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}
Taylor series
• Taylor series of the function f at a:

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n
     = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots

• Maclaurin series (Taylor series when a = 0)
Derivation of E(u,v)

E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2

First-order Taylor approximation:
f(x+u, y+v) \approx f(x,y) + u f_x(x,y) + v f_y(x,y)
I(x+u, y+v) \approx I(x,y) + u I_x(x,y) + v I_y(x,y)

\sum_{x,y} [I(x+u, y+v) - I(x,y)]^2
  \approx \sum_{x,y} [I(x,y) + u I_x(x,y) + v I_y(x,y) - I(x,y)]^2
  = \sum_{x,y} u^2 I_x^2 + 2uv\,I_x I_y + v^2 I_y^2
  = \sum_{x,y} [u\;v] \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}
Corner detection: Mathematics
The quadratic approximation simplifies to

E(u,v) \approx [u\;v]\,M \begin{bmatrix} u \\ v \end{bmatrix}

where M is a second moment matrix computed from image derivatives:

M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
Corners as distinctive interest points

M = \sum w(x,y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}

A 2 x 2 matrix of image derivatives (averaged in the neighborhood of a point).

Notation: I_x = \frac{\partial I}{\partial x}, \quad I_y = \frac{\partial I}{\partial y}, \quad I_x I_y = \frac{\partial I}{\partial x}\,\frac{\partial I}{\partial y}
Interpreting the second moment matrix
The surface E(u,v) is locally approximated by a quadratic form. Let's try to understand its shape.

E(u,v) \approx [u\;v]\,M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
Interpreting the second moment matrix
First, consider the axis-aligned case (gradients are either horizontal or vertical):

M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}

More generally, the diagonalization of M: M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R

Here, λ1 and λ2 are the eigenvalues of M.
If either λ is close to 0, then this is not a corner, so look for locations where both are large.
Interpreting the second moment matrix
Consider a horizontal "slice" of E(u,v):

[u\;v]\,M \begin{bmatrix} u \\ v \end{bmatrix} = \text{const}

This is the equation of an ellipse.

Diagonalization of M: M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R

The axis lengths of the ellipse are determined by the eigenvalues, and the orientation is determined by R:
• direction of the fastest change: axis length (λ_max)^{-1/2}
• direction of the slowest change: axis length (λ_min)^{-1/2}
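A small sketch connecting the ellipse picture to code: eigen-decomposing one 2x2 matrix M gives the eigenvalues and the corresponding axis lengths; the helper name is mine, not from the slides.

```python
import numpy as np

def ellipse_axes(Sxx, Sxy, Syy):
    """Eigen-decompose a single 2x2 second moment matrix M."""
    M = np.array([[Sxx, Sxy],
                  [Sxy, Syy]])
    eigvals, eigvecs = np.linalg.eigh(M)       # eigenvalues sorted ascending
    lam_min, lam_max = eigvals
    # Axis lengths of the iso-contour [u v] M [u v]^T = const (up to a common scale)
    slow_axis = lam_min ** -0.5 if lam_min > 0 else np.inf   # slowest change
    fast_axis = lam_max ** -0.5 if lam_max > 0 else np.inf   # fastest change
    return (lam_min, lam_max), (slow_axis, fast_axis), eigvecs
```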
Visualization of second moment matrices
[Figures: second moment matrices visualized as ellipses over example images]
Interpreting the eigenvalues
Classification of image points using the eigenvalues of M:
• "Corner": λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions
• "Edge": λ1 >> λ2 (or λ2 >> λ1)
• "Flat" region: λ1 and λ2 are small; E is almost constant in all directions
Corner response function

R = \det(M) - \alpha\,\mathrm{trace}(M)^2 = \lambda_1 \lambda_2 - \alpha\,(\lambda_1 + \lambda_2)^2

α: constant (0.04 to 0.06)
• "Corner": R > 0
• "Edge": R < 0
• "Flat" region: |R| small
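Given the smoothed gradient products (e.g. from the second_moment_matrix sketch above), the response R is a couple of array operations; alpha = 0.05 below is simply a value inside the stated 0.04 to 0.06 range.

```python
def harris_response(Sxx, Sxy, Syy, alpha=0.05):
    """R = det(M) - alpha * trace(M)^2, computed per pixel."""
    det_M = Sxx * Syy - Sxy * Sxy          # lambda1 * lambda2
    trace_M = Sxx + Syy                    # lambda1 + lambda2
    return det_M - alpha * trace_M ** 2    # R > 0 at corners, R < 0 on edges
```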
Harris corner detector
1) Compute the M matrix for each image window to get their cornerness scores.
2) Find points whose surrounding window gave a large corner response (response > threshold)
3) Take the points of local maxima, i.e., perform non-maximum suppression
Harris corner detector [Harris88]
• Second moment matrix:

\mu(\sigma_I, \sigma_D) = g(\sigma_I) * \begin{bmatrix} I_x^2(\sigma_D) & I_x I_y(\sigma_D) \\ I_x I_y(\sigma_D) & I_y^2(\sigma_D) \end{bmatrix}

1. Image derivatives Ix, Iy (optionally, blur first)
2. Square of derivatives: Ix², Iy², IxIy
3. Gaussian filter g(σI): g(Ix²), g(Iy²), g(IxIy)
4. Cornerness function, large when both eigenvalues are strong (det M = λ1λ2, trace M = λ1 + λ2):

har = \det[\mu(\sigma_I, \sigma_D)] - \alpha\,[\mathrm{trace}(\mu(\sigma_I, \sigma_D))]^2 = g(I_x^2)\,g(I_y^2) - [g(I_x I_y)]^2 - \alpha\,[g(I_x^2) + g(I_y^2)]^2

5. Non-maxima suppression
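Putting the five steps together, a minimal end-to-end sketch with SciPy; the threshold, sigma, and neighborhood size are illustrative assumptions, not values from the slides.

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma=1.5, alpha=0.05, rel_thresh=0.01, nms_size=5):
    img = img.astype(float)
    # 1-2. Image derivatives and their products
    Iy, Ix = np.gradient(img)
    # 3. Gaussian-weighted sums (the window function)
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    # 4. Cornerness: det(M) - alpha * trace(M)^2
    R = (Sxx * Syy - Sxy ** 2) - alpha * (Sxx + Syy) ** 2
    # 5. Threshold and non-maximum suppression
    local_max = ndimage.maximum_filter(R, size=nms_size)
    corners = (R == local_max) & (R > rel_thresh * R.max())
    return np.argwhere(corners)   # (row, col) coordinates of detected corners
```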
Harris Detector: Steps
1. Compute the corner response R
2. Find points with large corner response: R > threshold
3. Take only the points of local maxima of R
Invariance and covariance
• We want corner locations to be invariant to photometric transformations and covariant to
geometric transformations
– Invariance: image is transformed and corner locations do not change
– Covariance: if we have two transformed versions of the same image, features
should be detected in corresponding locations
Affine intensity change
I → a I + b
• Only derivatives are used => invariance to the intensity shift I → I + b
• Intensity scaling I → a I rescales the response R, so points above a fixed threshold can change
[Figure: response R vs. image coordinate x, before and after scaling, with a fixed threshold]
→ Partially invariant to affine intensity change
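A small numerical check of this partial invariance (my own, reusing the same operations as the pipeline sketch above): an intensity shift leaves R unchanged, while scaling by a rescales R by a^4, so a fixed threshold can select different points.

```python
import numpy as np
from scipy import ndimage

def harris_R(img, sigma=1.5, alpha=0.05):
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    return (Sxx * Syy - Sxy ** 2) - alpha * (Sxx + Syy) ** 2

rng = np.random.default_rng(0)
img = rng.random((64, 64))
R = harris_R(img)
print(np.allclose(R, harris_R(img + 10.0)))        # True: shift I -> I + b has no effect
print(np.allclose(16.0 * R, harris_R(2.0 * img)))  # True: scaling I -> 2I rescales R by 2^4
```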
Image translation
• Derivatives and window function are shift-invariant
• Corner location is covariant w.r.t. translation
Image rotation
• Second moment ellipse rotates but its shape (i.e. eigenvalues) remains the same
• Corner location is covariant w.r.t. rotation
Scaling
• If the image is scaled up, a corner spreads across many windows and all points will be classified as edges
• Corner location is not covariant to scaling!
More local features
• SIFT (Scale Invariant Feature Transform)
• Harris Laplace
• Harris Affine
• Hessian detector
• Hessian Laplace
• Hessian Affine
• MSER (Maximally Stable Extremal Regions)
• SURF (Speeded-Up Robust Feature)
• BRIEF (Binary Robust Independent Elementary Features)
• ORB (Oriented BRIEF)
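For comparison with the hand-rolled Harris sketches, a minimal OpenCV example that detects and matches one of the feature types listed above (ORB); it assumes OpenCV is installed and that the two image files exist.

```python
import cv2

img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)                 # detector + descriptor
kp1, des1 = orb.detectAndCompute(img1, None)        # keypoints and descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches")
```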
