
Note to other teachers and users of these slides: We would be delighted if you found our
material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify
them to fit your own needs. If you make use of a significant portion of these slides in your own
lecture, please include this message, or a link to our web site: http://www.mmds.org

Finding Similar Items:


Locality Sensitive Hashing
Mining of Massive Datasets
Jure Leskovec, Anand Rajaraman, Jeff Ullman
Stanford University
http://www.mmds.org
New thread: High dim. data

 High dim. data: Locality sensitive hashing; Clustering;
Dimensionality reduction
 Graph data: PageRank, SimRank; Network Analysis;
Spam Detection
 Infinite data: Filtering data streams; Web advertising;
Queries on streams
 Machine learning: SVM; Decision Trees; Perceptron, kNN
 Apps: Recommender systems; Association Rules;
Duplicate document detection
J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org 2


[Hays and Efros, SIGGRAPH 2007]

Scene Completion Problem

10 nearest neighbors from a collection of 20,000 images

10 nearest neighbors from a collection of 2 million images
A Common Metaphor
 Many problems can be expressed as
finding “similar” sets:
 Find near-neighbors in high-dimensional space
 Examples:
 Pages with similar words
 For duplicate detection, classification by topic
 Customers who purchased similar products
 Products with similar customer sets
 Images with similar features
 Users who visited similar websites



Problem for Today’s Lecture
 Given: High dimensional data points x1, x2, …
 For example: Image is a long vector of pixel colors

 And some distance function d(x1, x2)
 Which quantifies the “distance” between x1 and x2
 Goal: Find all pairs of data points (xi, xj) that are
within some distance threshold d(xi, xj) ≤ s
 Note: Naïve solution would take O(N^2),
where N is the number of data points
 MAGIC: This can be done in O(N)!! How?
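The naive baseline the slide refers to can be sketched as follows (a minimal illustration; the 1-D points, distance function, and threshold are placeholders, not from the slides):

```python
from itertools import combinations

def naive_near_pairs(points, dist, threshold):
    """Naive all-pairs search: O(N^2) distance computations for N points."""
    return [(x, y) for x, y in combinations(points, 2)
            if dist(x, y) <= threshold]

# Toy example: 1-D points with absolute difference as the distance.
pts = [0.0, 0.1, 0.9, 1.0]
pairs = naive_near_pairs(pts, lambda a, b: abs(a - b), 0.15)
# pairs == [(0.0, 0.1), (0.9, 1.0)]
```

The rest of the lecture shows how LSH avoids enumerating all N(N-1)/2 pairs.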
J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org 8
Relation to Previous Lecture
 Last time: Finding frequent pairs

 Naïve solution: A single pass, but requires a count of
pair {i,j} in the data for every pair, so space quadratic in
the number of items (an N×N table, where N = number
of distinct items)
 A-Priori:
 First pass: Find frequent singletons.
For a pair to be a frequent-pair candidate, its
singletons have to be frequent!
 Second pass: Count only candidate pairs!
(a K×K table, where K = number of items with support ≥ s)
Relation to Previous Lecture
 Last time: Finding frequent pairs
 Further improvement: PCY
 Pass 1:
 Count exact frequency of each item (Items 1…N)
 Take pairs of items {i,j}, hash them into B buckets, and
count the number of pairs that hashed to each bucket
(Buckets 1…B)
 Example: Basket 1: {1,2,3}
Pairs: {1,2} {1,3} {2,3}


Relation to Previous Lecture
 Last time: Finding frequent pairs
 Further improvement: PCY
 Pass 1:
 Count exact frequency of each item (Items 1…N)
 Take pairs of items {i,j}, hash them into B buckets, and
count the number of pairs that hashed to each bucket
(Buckets 1…B)
 Pass 2:
 For a pair {i,j} to be a candidate for
a frequent pair, its singletons {i}, {j}
have to be frequent and the pair
has to hash to a frequent bucket!
 Example: Basket 1: {1,2,3} gives pairs {1,2} {1,3} {2,3}
Basket 2: {1,2,4} gives pairs {1,2} {1,4} {2,4}
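PCY's first pass can be sketched in code. This is an illustrative reconstruction: the bucket count and Python's built-in `hash` stand in for the unspecified hash function on the slide.

```python
from itertools import combinations

def pcy_pass1(baskets, num_buckets):
    """PCY pass 1: exact per-item counts plus hashed pair-bucket counts."""
    item_counts = {}
    bucket_counts = [0] * num_buckets
    for basket in baskets:
        for item in basket:
            item_counts[item] = item_counts.get(item, 0) + 1
        # Hash every pair in the basket to a bucket and count it.
        for i, j in combinations(sorted(basket), 2):
            bucket_counts[hash((i, j)) % num_buckets] += 1
    return item_counts, bucket_counts

# The slide's two baskets: {1,2,3} and {1,2,4}.
items, buckets = pcy_pass1([{1, 2, 3}, {1, 2, 4}], num_buckets=7)
# items == {1: 2, 2: 2, 3: 1, 4: 1}; the bucket counts sum to 6
# (3 pairs per basket).
```

In pass 2, a pair is counted only if both its items are frequent and its bucket count reached the support threshold.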



Relation to Previous Lecture
 Last time: Finding frequent pairs

Previous lecture: A-Priori
Main idea: Candidates
Instead of keeping a count of each pair, only keep a count
of candidate pairs!

Today’s lecture: Find pairs of similar docs
Main idea: Candidates
-- Pass 1: Take documents and hash them to buckets such that
documents that are similar hash to the same bucket
-- Pass 2: Only compare documents that are candidates
(i.e., they hashed to a same bucket)
Benefits: Instead of O(N^2) comparisons, we need O(N)
comparisons to find similar documents


Finding Similar Items
Distance Measures
 Goal: Find near-neighbors in high-dim. space
 We formally define “near neighbors” as
points that are a “small distance” apart
 For each application, we first need to define what
“distance” means
 Today: Jaccard distance/similarity
 The Jaccard similarity of two sets is the size of their
intersection divided by the size of their union:
sim(C1, C2) = |C1∩C2| / |C1∪C2|
 Jaccard distance: d(C1, C2) = 1 − |C1∩C2| / |C1∪C2|
 Example: 3 elements in the intersection, 8 in the union:
Jaccard similarity = 3/8
Jaccard distance = 5/8
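Both measures are one-liners over Python sets; a small sketch matching the slide's 3-in-intersection, 8-in-union example (the concrete sets are illustrative):

```python
def jaccard_similarity(c1, c2):
    """|C1 intersect C2| / |C1 union C2| for two sets."""
    return len(c1 & c2) / len(c1 | c2)

def jaccard_distance(c1, c2):
    return 1 - jaccard_similarity(c1, c2)

# 3 elements in the intersection, 8 in the union, as on the slide.
a = {1, 2, 3, 4, 5}
b = {3, 4, 5, 6, 7, 8}
# jaccard_similarity(a, b) == 3/8, jaccard_distance(a, b) == 5/8
```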
Task: Finding Similar Documents
 Goal: Given a large number (in the millions or
billions) of documents, find “near duplicate” pairs
 Applications:
 Mirror websites, or approximate mirrors
 Don’t want to show both in search results
 Similar news articles at many news sites
 Cluster articles by “same story”
 Problems:
 Many small pieces of one document can appear
out of order in another
 Too many documents to compare all pairs
 Documents are so large or so many that they cannot
fit in main memory
3 Essential Steps for Similar Docs
1. Shingling: Convert documents to sets
2. Min-Hashing: Convert large sets to short
signatures, while preserving similarity
3. Locality-Sensitive Hashing: Focus on
pairs of signatures likely to be from
similar documents
 Candidate pairs!



The Big Picture

Document → Shingling → Min-Hashing → Locality-Sensitive Hashing → Candidate pairs

 Shingling: from a document, the set of strings of length k
that appear in the document
 Min-Hashing: signatures, short integer vectors that
represent the sets and reflect their similarity
 Locality-Sensitive Hashing: candidate pairs, those pairs
of signatures that we need to test for similarity


Shingling

Step 1: Shingling: Convert documents to sets
(Document → Shingling → the set of strings of length k
that appear in the document)
Documents as High-Dim Data
 Step 1: Shingling: Convert documents to sets

 Simple approaches:
 Document = set of words appearing in document
 Document = set of “important” words
 Don’t work well for this application. Why?

 Need to account for ordering of words!


 A different way: Shingles!



Define: Shingles
 A k-shingle (or k-gram) for a document is a
sequence of k tokens that appears in the doc
 Tokens can be characters, words or something
else, depending on the application
 Assume tokens = characters for examples

 Example: k=2; document D1 = abcab


Set of 2-shingles: S(D1) = {ab, bc, ca}
 Option: Shingles as a bag (multiset), count ab
twice: S’(D1) = {ab, bc, ca, ab}
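The 2-shingle example can be reproduced with a short helper (character tokens assumed, as on the slide):

```python
def shingles(doc, k):
    """Set of character k-shingles (k-grams) of a document."""
    return {doc[i:i + k] for i in range(len(doc) - k + 1)}

# k = 2; document D1 = abcab, as on the slide.
# shingles("abcab", 2) == {"ab", "bc", "ca"}  (the repeated "ab" collapses)
```

For the bag (multiset) variant, a list comprehension, or `collections.Counter`, keeps the duplicate "ab".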



Compressing Shingles
 To compress long shingles, we can hash them
to (say) 4 bytes
 Represent a document by the set of hash
values of its k-shingles
 Caveat: Two documents could (rarely) appear to have
shingles in common, when in fact only the hash-
values were shared
 Example: k=2; document D1 = abcab
Set of 2-shingles: S(D1) = {ab, bc, ca}
Hash the shingles: h(D1) = {1, 5, 7}
Similarity Metric for Shingles
 Document D1 is a set of its k-shingles C1=S(D1)
 Equivalently, each document is a
0/1 vector in the space of k-shingles
 Each unique shingle is a dimension
 Vectors are very sparse
 A natural similarity measure is the
Jaccard similarity:
sim(D1, D2) = |C1∩C2| / |C1∪C2|


Working Assumption
 Documents that have lots of shingles in
common have similar text, even if the text
appears in different order
 Caveat: You must pick k large enough, or most
documents will have most shingles
 k = 5 is OK for short documents
 k = 10 is better for long documents



Motivation for Minhash/LSH
 Suppose we need to find near-duplicate
documents among N = 1 million documents
 Naïvely, we would have to compute pairwise
Jaccard similarities for every pair of docs
 N(N−1)/2 ≈ 5×10^11 comparisons
 At 10^5 secs/day and 10^6 comparisons/sec,
it would take 5 days

 For N = 10 million, it takes more than a year…



MinHashing

Step 2: Min-Hashing: Convert large sets to
short signatures, while preserving similarity
(Document → Shingling → Min-Hashing → signatures: short
integer vectors that represent the sets, and reflect their
similarity)
Encoding Sets as Bit Vectors
 Many similarity problems can be
formalized as finding subsets that
have significant intersection
 Encode sets using 0/1 (bit, boolean) vectors
 One dimension per element in the universal set
 Interpret set intersection as bitwise AND, and
set union as bitwise OR
 Example: C1 = 10111; C2 = 10011
 Size of intersection = 3; size of union = 4,
 Jaccard similarity (not distance) = 3/4
 Distance: d(C1,C2) = 1 – (Jaccard similarity) = 1/4
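A minimal sketch of the bit-vector view, using Python integers as bit vectors so that intersection is bitwise AND and union is bitwise OR:

```python
def jaccard_from_bits(c1, c2):
    """Jaccard similarity of two sets encoded as integer bitmasks."""
    inter = bin(c1 & c2).count("1")   # set intersection = bitwise AND
    union = bin(c1 | c2).count("1")   # set union = bitwise OR
    return inter / union

# The slide's example: C1 = 10111, C2 = 10011.
# Intersection size 3, union size 4, so the similarity is 3/4.
```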
From Sets to Boolean Matrices
 Rows = elements (shingles)
 Columns = sets (documents)
 1 in row e and column s if and only
if e is a member of s
 Column similarity is the Jaccard
similarity of the corresponding sets
(rows with value 1)
 Typical matrix is sparse!
 Each document is a column. Example
(Shingles x Documents):

1 1 1 0
1 1 0 1
0 1 0 1
0 0 0 1
1 0 0 1
1 1 1 0
1 0 1 0

 Example: sim(C1, C2) = ?
 Size of intersection = 3; size of union = 6;
Jaccard similarity (not distance) = 3/6
Outline: Finding Similar Columns
 So far:
 Documents  Sets of shingles
 Represent sets as boolean vectors in a matrix
 Next goal: Find similar columns while
computing small signatures
 Similarity of columns == similarity of signatures



Outline: Finding Similar Columns
 Next Goal: Find similar columns, Small signatures
 Naïve approach:
 1) Signatures of columns: small summaries of columns
 2) Examine pairs of signatures to find similar columns
 Essential: Similarities of signatures and columns are related
 3) Optional: Check that columns with similar signatures
are really similar
 Warnings:
 Comparing all pairs may take too much time: Job for LSH
 These methods can produce false negatives, and even false
positives (if the optional check is not made)
Hashing Columns (Signatures)
 Key idea: “hash” each column C to a small
signature h(C), such that:
 (1) h(C) is small enough that the signature fits in RAM
 (2) sim(C1, C2) is the same as the “similarity” of
signatures h(C1) and h(C2)

 Goal: Find a hash function h(·) such that:


 If sim(C1,C2) is high, then with high prob. h(C1) = h(C2)
 If sim(C1,C2) is low, then with high prob. h(C1) ≠ h(C2)

 Hash docs into buckets. Expect that “most” pairs
of near duplicate docs hash into the same bucket!
Min-Hashing
 Goal: Find a hash function h(·) such that:
 if sim(C1,C2) is high, then with high prob. h(C1) = h(C2)
 if sim(C1,C2) is low, then with high prob. h(C1) ≠ h(C2)

 Clearly, the hash function depends on


the similarity metric:
 Not all similarity metrics have a suitable
hash function
 There is a suitable hash function for
the Jaccard similarity: It is called Min-Hashing
Min-Hashing
 Imagine the rows of the boolean matrix
permuted under random permutation π
 Define a “hash” function hπ(C) = the index of
the first (in the permuted order π) row in
which column C has value 1:
hπ(C) = min π(C)
 Use several (e.g., 100) independent hash
functions (that is, permutations) to create a
signature of a column
Min-Hashing Example

Permutations π    Input matrix (Shingles x Documents)    Signature matrix M

2 4 3             1 0 1 0                                2 1 2 1
3 2 4             1 0 0 1                                2 1 4 1
7 1 7             0 1 0 1                                1 2 1 2
6 3 2             0 1 0 1
1 6 6             0 1 0 1
5 7 1             1 0 1 0
4 5 5             1 0 1 0

For the 1st permutation (first column: 2,3,7,6,1,5,4) and the 1st
document, the 2nd element of the permutation is the first to map to
a 1, so M[1,1] = 2; for the 2nd permutation and the 3rd document,
the 4th element of the permutation is the first to map to a 1, so
M[2,3] = 4.

Note: Another (equivalent) way is to store row indexes instead:
1 5 1 5
2 3 1 3
6 4 6 4
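The example can be checked mechanically. In this sketch (with the slide's numbers hard-coded), each document is the set of 0-based row indices where its column has a 1, and `perm[row]` is that row's position under the permutation:

```python
# Each document = set of 0-based rows where its column has a 1
# (the 7x4 input matrix from the slide).
docs = [
    {0, 1, 5, 6},   # column 1
    {2, 3, 4},      # column 2
    {0, 5, 6},      # column 3
    {1, 2, 3, 4},   # column 4
]

# The three permutations from the slide: perms[p][row] = position of row.
perms = [
    [2, 3, 7, 6, 1, 5, 4],
    [4, 2, 1, 3, 6, 7, 5],
    [3, 4, 7, 2, 6, 1, 5],
]

# h_pi(C) = min over rows of C of pi(row): position of the first
# (in permuted order) row in which the column has a 1.
signature = [[min(perm[row] for row in doc) for doc in docs]
             for perm in perms]
# signature == [[2, 1, 2, 1], [2, 1, 4, 1], [1, 2, 1, 2]]
```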
The Min-Hash Property
 Choose a random permutation π
 Claim: Pr[hπ(C1) = hπ(C2)] = sim(C1, C2)
 Why?
 Let X be a doc (set of shingles), y ∈ X is a shingle
 Then: Pr[π(y) = min(π(X))] = 1/|X|
 It is equally likely that any y ∈ X is mapped to the min element
 Let y be s.t. π(y) = min(π(C1∪C2))
 Then either: π(y) = min(π(C1)) if y ∈ C1, or
π(y) = min(π(C2)) if y ∈ C2
(one of the two columns had to have a 1 at position y)
 So the prob. that both are true is the prob. that y ∈ C1∩C2
 Pr[min(π(C1)) = min(π(C2))] = |C1∩C2| / |C1∪C2| = sim(C1, C2)
Similarity for Signatures
 We know: Pr[h(C1) = h(C2)] = sim(C1, C2)
 Now generalize to multiple hash functions

 The similarity of two signatures is the


fraction of the hash functions in which they
agree
 Note: Because of the Min-Hash property, the
similarity of columns is the same as the
expected similarity of their signatures



Min-Hashing Example

Permutations π    Input matrix (Shingles x Documents)    Signature matrix M

2 4 3             1 0 1 0                                2 1 2 1
3 2 4             1 0 0 1                                2 1 4 1
7 1 7             0 1 0 1                                1 2 1 2
6 3 2             0 1 0 1
1 6 6             0 1 0 1
5 7 1             1 0 1 0
4 5 5             1 0 1 0

Similarities:
           1-3    2-4    1-2    3-4
Col/Col    0.75   0.75   0      0
Sig/Sig    0.67   1.00   0      0


Min-Hash Signatures
 Pick K = 100 random permutations of the rows
 Think of sig(C) as a column vector
 sig(C)[i] = according to the i-th permutation, the
index of the first row that has a 1 in column C:
sig(C)[i] = min πi(C)
 Note: The sketch (signature) of document C is
small: with K = 100 values, on the order of a hundred bytes
 We achieved our goal! We “compressed”
long bit vectors into short signatures
Implementation Trick
 Permuting rows even once is prohibitive
 Row hashing!
 Pick K = 100 hash functions h1, …, hK
 Ordering under hi gives a random row permutation!
 One-pass implementation
 For each column C and hash function hi, keep a “slot” for
the min-hash value
 Initialize all sig(C)[i] = ∞
 Scan rows looking for 1s
 Suppose row j has 1 in column C
 Then for each hi:
 If hi(j) < sig(C)[i], then sig(C)[i] ← hi(j)
 How to pick a random hash function h(x)? Universal hashing:
ha,b(x) = ((a·x + b) mod p) mod N, where:
a, b … random integers
p … prime number (p > N)
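A sketch of the one-pass scheme, with universal hashing standing in for row permutations (the prime `p` and the seeded random choices of `a`, `b` are illustrative):

```python
import random

def minhash_signatures(docs, num_hashes, p=2_147_483_647):
    """One-pass Min-Hash: K universal hash functions
    h_i(x) = ((a*x + b) mod p) mod N replace K row permutations."""
    n_rows = 1 + max(r for d in docs for r in d)
    rng = random.Random(0)                       # seeded for reproducibility
    coeffs = [(rng.randrange(1, p), rng.randrange(0, p))
              for _ in range(num_hashes)]
    # Initialize all slots sig(C)[i] = infinity.
    sig = [[float("inf")] * len(docs) for _ in range(num_hashes)]
    for row in range(n_rows):                    # scan rows looking for 1s
        hs = [((a * row + b) % p) % n_rows for a, b in coeffs]
        for c, doc in enumerate(docs):
            if row in doc:                       # row has a 1 in column c
                for i, h in enumerate(hs):
                    if h < sig[i][c]:
                        sig[i][c] = h            # keep the running minimum
    return sig
```

The estimated similarity of two columns is then the fraction of signature rows on which they agree.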
Locality Sensitive Hashing

Step 3: Locality-Sensitive Hashing:
Focus on pairs of signatures likely to be from
similar documents
(Document → Shingling → Min-Hashing → Locality-Sensitive
Hashing → candidate pairs: those pairs of signatures that we
need to test for similarity)
LSH: First Cut
 Goal: Find documents with Jaccard similarity at
least s (for some similarity threshold, e.g., s = 0.8)
 LSH – General idea: Use a function f(x,y) that
tells whether x and y are a candidate pair: a pair
of elements whose similarity must be evaluated
 For Min-Hash matrices:
 Hash columns of signature matrix M to many buckets
 Each pair of documents that hashes into the
same bucket is a candidate pair
Candidates from Min-Hash

 Pick a similarity threshold s (0 < s < 1)


 Columns x and y of M are a candidate pair if
their signatures agree on at least fraction s of
their rows:
M (i, x) = M (i, y) for at least frac. s values of i
 We expect documents x and y to have the same
(Jaccard) similarity as their signatures



LSH for Min-Hash
 Big idea: Hash columns of
signature matrix M several times
 Arrange that (only) similar columns are
likely to hash to the same bucket, with
high probability
 Candidate pairs are those that hash to
the same bucket



Partition M into b Bands

 Partition the signature matrix M into b bands with
r rows per band (each column of M is one signature)
Partition M into Bands
 Divide matrix M into b bands of r rows

 For each band, hash its portion of each


column to a hash table with k buckets
 Make k as large as possible

 Candidate column pairs are those that hash


to the same bucket for ≥ 1 band
 Tune b and r to catch most similar pairs,
but few non-similar pairs
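The banding step can be sketched as follows, under the "same bucket means identical in that band" simplification: the band's r values are used directly as a dictionary key instead of being hashed to k buckets.

```python
from collections import defaultdict
from itertools import combinations

def lsh_candidate_pairs(signature, b, r):
    """Split each column of the signature matrix into b bands of r rows;
    columns that agree exactly in at least one band become candidates."""
    n_cols = len(signature[0])
    candidates = set()
    for band in range(b):
        buckets = defaultdict(list)
        for c in range(n_cols):
            # The band's r values serve as the bucket key.
            key = tuple(signature[band * r + i][c] for i in range(r))
            buckets[key].append(c)
        for cols in buckets.values():
            for x, y in combinations(cols, 2):
                candidates.add((x, y))
    return candidates

# Toy signature matrix: 4 rows (b=2 bands of r=2 rows), 3 columns.
# Columns 0 and 1 agree in band 0, so they are the only candidate pair.
M = [
    [1, 1, 5],
    [2, 2, 6],
    [3, 9, 7],
    [4, 9, 8],
]
# lsh_candidate_pairs(M, b=2, r=2) == {(0, 1)}
```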
Hashing Bands
 Hash each band of each column of M into a table of buckets.
In the pictured example:
 Columns 2 and 6 are probably identical (candidate pair)
 Columns 6 and 7 are surely different


Simplifying Assumption
 There are enough buckets that columns are
unlikely to hash to the same bucket unless
they are identical in a particular band
 Hereafter, we assume that “same bucket”
means “identical in that band”
 Assumption needed only to simplify analysis,
not for correctness of algorithm



Example of Bands
 Assume the following case:
 Suppose 100,000 columns of M (100k docs)
 Signatures of 100 integers (rows)
 Therefore, signatures take 40MB
 Choose b = 20 bands of r = 5 integers/band

 Goal: Find pairs of documents that
are at least s = 0.8 similar


C1, C2 are 80% Similar
 Find pairs of ≥ s = 0.8 similarity; set b = 20, r = 5
 Assume: sim(C1, C2) = 0.8
 Since sim(C1, C2) ≥ s, we want C1, C2 to be a candidate pair:
We want them to hash to at least 1 common bucket (at
least one band is identical)
 Probability C1, C2 identical in one particular
band: (0.8)^5 = 0.328
 Probability C1, C2 are not identical in any of the 20 bands:
(1 − 0.328)^20 = 0.00035
 i.e., about 1/3000th of the 80%-similar column pairs
are false negatives (we miss them)
 We would find 99.965% of pairs of truly similar documents
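Both probabilities are easy to verify numerically:

```python
r, b = 5, 20
t = 0.8                         # sim(C1, C2)
p_band = t ** r                 # prob. one particular band is identical
p_miss = (1 - p_band) ** b      # prob. no band is identical (false negative)
print(p_band)                   # ≈ 0.328
print(p_miss)                   # ≈ 0.00035
```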
C1, C2 are 30% Similar
 Find pairs of ≥ s = 0.8 similarity; set b = 20, r = 5
 Assume: sim(C1, C2) = 0.3
 Since sim(C1, C2) < s, we want C1, C2 to hash to NO
common buckets (all bands should be different)
 Probability C1, C2 identical in one particular band:
(0.3)^5 = 0.00243
 Probability C1, C2 identical in at least 1 of 20 bands:
1 − (1 − 0.00243)^20 = 0.0474
 In other words, approximately 4.74% of pairs of docs with
similarity 0.3 end up becoming candidate pairs
 They are false positives since we will have to examine them (they
are candidate pairs) but then it will turn out their similarity is
below threshold s
LSH Involves a Tradeoff
 Pick:
 The number of Min-Hashes (rows of M)
 The number of bands b, and
 The number of rows r per band
to balance false positives/negatives
 Example: If we had only 15 bands of 5
rows, the number of false positives would
go down, but the number of false negatives
would go up
Analysis of LSH – What We Want

Ideally, the probability of sharing a bucket, as a function of
the similarity t = sim(C1, C2) of two sets, would be a step
function at the similarity threshold s:
 Probability = 1 if t > s
 No chance of sharing a bucket if t < s


What 1 Band of 1 Row Gives You

With a single row, the probability of sharing a bucket is
linear in the similarity t = sim(C1, C2) of the two sets:
 Remember: Probability of equal hash-values = similarity


b bands, r rows/band
 Columns C1 and C2 have similarity t
 Pick any band (r rows)
 Prob. that all rows in the band are equal = t^r
 Prob. that some row in the band is unequal = 1 − t^r

 Prob. that no band is identical = (1 − t^r)^b

 Prob. that at least 1 band is identical =
1 − (1 − t^r)^b


What b Bands of r Rows Gives You

The probability of sharing a bucket, as a function of the
similarity t = sim(C1, C2) of two sets, is the S-curve
1 − (1 − t^r)^b, built up from:
 t^r: all rows of a band are equal
 1 − t^r: some row of a band is unequal
 (1 − t^r)^b: no bands are identical
 1 − (1 − t^r)^b: at least one band is identical
 The effective similarity threshold (the steepest point of
the curve) is at s ≈ (1/b)^(1/r)


Example: b = 20; r = 5
 Similarity threshold s
 Prob. that at least 1 band is identical:

s     1 − (1 − s^r)^b
.2    .006
.3    .047
.4    .186
.5    .470
.6    .802
.7    .975
.8    .9996
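The table can be regenerated directly from the S-curve formula:

```python
def p_candidate(t, r, b):
    """Probability that two columns with similarity t agree in
    at least one of b bands of r rows: 1 - (1 - t^r)^b."""
    return 1 - (1 - t ** r) ** b

r, b = 5, 20
for s in [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]:
    print(f"{s:.1f}  {p_candidate(s, r, b):.4f}")
```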
Picking r and b: The S-curve
 Picking r and b to get the best S-curve
 Example: 50 hash-functions (r = 5, b = 10)
 On a plot of prob. of sharing a bucket vs. similarity:
Blue area: False Negative rate
Green area: False Positive rate


LSH Summary
 Tune M, b, r to get almost all pairs with
similar signatures, but eliminate most pairs
that do not have similar signatures
 Check in main memory that candidate pairs
really do have similar signatures
 Optional: In another pass through data,
check that the remaining candidate pairs
really represent similar documents



Summary: 3 Steps
 Shingling: Convert documents to sets
 We used hashing to assign each shingle an ID
 Min-Hashing: Convert large sets to short
signatures, while preserving similarity
 We used similarity preserving hashing to generate
signatures with property Pr[h(C1) = h(C2)] = sim(C1, C2)
 We used hashing to get around generating random
permutations
 Locality-Sensitive Hashing: Focus on pairs of
signatures likely to be from similar documents
 We used hashing to find candidate pairs of similarity ≥ s
