Assignment 3
One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking
functions are variants of this simple model.
Tf-idf can also be used successfully for stop-word filtering in various text-processing tasks, including text summarization and classification.
How to Compute:
Typically, the tf-idf weight is the product of two terms: the first is the normalized Term Frequency (TF), i.e. the number of times a word appears in a document divided by the total number of words in that document; the second is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the specific term appears.
TF: Term Frequency, which measures how frequently a term occurs in a document. Since documents differ in length, a term may appear many more times in a long document than in a short one. Thus, the term frequency is often divided by the document length (i.e., the total number of terms in the document) as a way of normalization:
$TF(t) = \frac{\text{Number of times term t appears in a document}}{\text{Total number of terms in the document}}.$
IDF: Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However, certain terms, such as "is", "of", and "that", may appear many times but carry little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:
$IDF(t) = \log_{e}\frac{\text{Total number of documents}}{\text{Number of documents with term t in it}}.$ For numerical stability we will modify this formula slightly: $IDF(t) = \log_{e}\frac{\text{Total number of documents}}{\text{Number of documents with term t in it}+1}.$
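As a quick illustration, here is a minimal sketch of these two formulas in plain Python (the function names `tf` and `idf` are our own illustrative choices, not part of the assignment interface):

```python
import math

def tf(term, document):
    # term frequency: occurrences of `term` divided by the document length
    words = document.split()
    return words.count(term) / len(words)

def idf(term, corpus):
    # inverse document frequency with the +1 stability term from above:
    # log(N / (df + 1)), where df is the number of documents containing `term`
    df = sum(1 for doc in corpus if term in doc.split())
    return math.log(len(corpus) / (df + 1))
```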
Example
Consider a document containing 100 words in which the word cat appears 3 times. The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4 (this example uses the base-10 logarithm for simplicity). Thus, the tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12.
Task-1
You should compare the results of your own implementation of a TFIDF vectorizer with those of sklearn's TFIDF vectorizer.
Sklearn makes a few extra tweaks in its implementation of the TFIDF vectorizer, so to replicate its exact results you would need to add the following to your custom implementation (a sketch of these adjustments follows the list):
1. Sklearn generates its vocabulary (the feature names) sorted in alphabetical order.
2. Sklearn's formula for idf is different from the standard textbook formula. Here the constant "1" is added to the numerator and denominator of the idf, as if an extra document were seen containing every term in the collection exactly once, which prevents zero divisions. $IDF(t) = 1+\log_{e}\frac{1\text{ }+\text{ Total number of documents in collection}}{1+\text{Number of documents with term t in it}}.$
3. Sklearn applies L2-normalization on its output matrix.
4. The final output of sklearn tfidf vectorizer is a sparse matrix.
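Putting those four points together, a minimal sketch of such a custom implementation might look like the following (the names `fit` and `transform` are illustrative, not a prescribed interface):

```python
import math
from collections import Counter
from scipy.sparse import csr_matrix
from sklearn.preprocessing import normalize

def fit(corpus):
    # build an alphabetically sorted vocabulary: {word: column_index}
    unique_words = sorted({word for doc in corpus for word in doc.split()})
    return {word: idx for idx, word in enumerate(unique_words)}

def transform(corpus, vocab):
    n_docs = len(corpus)
    # document frequency: number of documents containing each word
    df = Counter(word for doc in corpus for word in set(doc.split()))
    # sklearn's smoothed idf: 1 + log((1 + N) / (1 + df))
    idf = {w: 1 + math.log((1 + n_docs) / (1 + df[w])) for w in vocab}
    rows, cols, vals = [], [], []
    for row, doc in enumerate(corpus):
        words = doc.split()
        for word, count in Counter(words).items():
            if word in vocab:
                rows.append(row)
                cols.append(vocab[word])
                vals.append((count / len(words)) * idf[word])
    matrix = csr_matrix((vals, (rows, cols)), shape=(n_docs, len(vocab)))
    # L2-normalize each row, as sklearn does by default; output stays sparse
    return normalize(matrix, norm='l2')
```

Running `fit` and `transform` on the corpus below should reproduce the sklearn outputs shown in this notebook, since the per-document length factor in tf is washed out by the L2 normalization.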
Note-1: All the necessary outputs of sklearn's tfidf vectorizer have been provided as reference in this notebook; you can compare the outputs of the above steps with them.
Note-2: The output of your custom implementation and that of sklearn's implementation will match only for the collection of document strings provided to you as reference in this notebook. They will not match for strings that contain capital letters, punctuation, etc., because sklearn's tfidf vectorizer preprocesses such strings differently. For further details on how the sklearn tfidf vectorizer handles such strings, you can always refer to its official documentation.
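For instance, sklearn's default analyzer lowercases text and drops punctuation and single-character tokens; you can inspect this behaviour directly via `build_analyzer`:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

analyzer = TfidfVectorizer().build_analyzer()
print(analyzer("This, is THE first Document!"))
# ['this', 'is', 'the', 'first', 'document']
```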
Note-3: During this task, it will be helpful to debug your code with print statements wherever necessary. But when you finally submit the assignment, make sure your code is readable and avoid printing things that are not part of this task.
Corpus
In [51]: # Collection of string documents
corpus = [
'this is the first document',
'this document is the second document',
'and this is the third one',
'is this the first document',
]
SkLearn Implementation
In [52]: from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)
skl_output = vectorizer.transform(corpus)
In [53]: # sklearn feature names, they are sorted in alphabetic order by default.
print(vectorizer.get_feature_names())
In [54]: # Here we will print the sklearn tfidf vectorizer idf values after applying the fit method
# After using the fit function on the corpus the vocab has 9 words in it, and each has its idf value.
print(vectorizer.idf_)
In [55]: # shape of sklearn tfidf vectorizer output after applying transform method.
skl_output.shape
Out[55]: (4, 9)
In [56]: # sklearn tfidf values for first line of the above corpus.
# Here the output is a sparse matrix
print(skl_output[0])
(0, 8) 0.38408524091481483
(0, 6) 0.38408524091481483
(0, 3) 0.38408524091481483
(0, 2) 0.5802858236844359
(0, 1) 0.46979138557992045
In [57]: # sklearn tfidf values for first line of the above corpus.
# To understand the output better, here we are converting the sparse output matrix to a dense matrix and printing it.
# Notice that this output is normalized using L2 normalization; sklearn does this by default.
print(skl_output[0].toarray())
[[0.         0.46979139 0.58028582 0.38408524 0.         0.
  0.38408524 0.         0.38408524]]
In [64]: # tfidf values for the first line of the corpus from your custom implementation
# (here `x` is assumed to be the sparse matrix returned by your own transform);
# this should match the sklearn output above.
print(x[0].toarray())
Task-2
This task is similar to the previous one, except that here your vocabulary is limited to only the top 50 feature names based on their idf values. Basically, your output will have exactly 50 columns, and the number of rows will depend on the number of documents in your corpus.
You are given a pickle file named cleaned_strings. You have to load the corpus from this file and use it as input to your tfidf vectorizer.
In [65]: # Below is the code to load the cleaned_strings pickle file provided
# Here corpus is of list type
import pickle
with open('cleaned_strings', 'rb') as f:
    corpus2 = pickle.load(f)
In [66]: # `fit` is your custom function from Task-1, applied to the new corpus
vocabulary2 = fit(corpus2)
print(vocabulary2)
In [68]: # `srtd_50_idf` returns the top 50 vocabulary terms with the highest idf values
sidf = srtd_50_idf(vocabulary2)
print(sidf)
In [72]: # build the limited vocabulary: {term: column_index} for the top-50 terms
lst = sidf.keys()
vcb_srtd_50 = {k: v for v, k in enumerate(lst)}
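For reference, a hedged sketch of what `srtd_50_idf` might look like; it assumes the smoothed idf formula from Task-1 and that `corpus2` from the cell above is in scope (both are assumptions, since the function body is yours to write):

```python
import math
from collections import Counter

def srtd_50_idf(vocab, corpus=None):
    # compute the smoothed idf for every vocabulary term and keep the 50 terms
    # with the highest idf values, returned as a {term: idf} dict.
    corpus = corpus2 if corpus is None else corpus
    n_docs = len(corpus)
    df = Counter(word for doc in corpus for word in set(doc.split()))
    idf = {w: 1 + math.log((1 + n_docs) / (1 + df[w])) for w in vocab}
    top_50 = sorted(idf.items(), key=lambda kv: kv[1], reverse=True)[:50]
    return dict(top_50)
```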