0% found this document useful (0 votes)
51 views

Assignment 3 Instructions

This document provides instructions for Assignment Task 1, which involves building a TF-IDF vectorizer from scratch and comparing it to Sklearn's TF-IDF vectorizer. The key steps are: 1. Implementing fit and transform methods for a custom TF-IDF vectorizer. 2. Comparing vocabulary, IDF values, and outputs to Sklearn's TF-IDF vectorizer, which makes tweaks like adding 1 to IDF calculations and normalizing outputs. 3. Printing outputs of the custom and Sklearn vectorizers on documents and checking for similarities after implementing techniques like Sklearn. The goal is to replicate Sklearn's TF-IDF vectorizer results for the given document corpus by addressing differences in their implementations. Notes
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
51 views

Assignment 3 Instructions

This document provides instructions for Assignment Task 1, which involves building a TF-IDF vectorizer from scratch and comparing it to Sklearn's TF-IDF vectorizer. The key steps are: 1. Implementing fit and transform methods for a custom TF-IDF vectorizer. 2. Comparing vocabulary, IDF values, and outputs to Sklearn's TF-IDF vectorizer, which makes tweaks like adding 1 to IDF calculations and normalizing outputs. 3. Printing outputs of the custom and Sklearn vectorizers on documents and checking for similarities after implementing techniques like Sklearn. The goal is to replicate Sklearn's TF-IDF vectorizer results for the given document corpus by addressing differences in their implementations. Notes
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 10

10/31/22, 12:54 AM Assignment_3_Instructions

Assignment
What does tf-idf mean?
Tf-idf stands for term frequency-inverse document frequency, and the tf-idf weight is a weight
often used in information retrieval and text mining. This weight is a statistical measure used to
evaluate how important a word is to a document in a collection or corpus. The importance
increases proportionally to the number of times a word appears in the document but is offset by
the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used
by search engines as a central tool in scoring and ranking a document's relevance given a user
query.

One of the simplest ranking functions is computed by summing the tf-idf for each query term;
many more sophisticated ranking functions are variants of this simple model.

Tf-idf can be successfully used for stop-words filtering in various subject fields including text
summarization and classification.

</font>

How to Compute:

Typically, the tf-idf weight is composed by two terms: the first computes the normalized Term
Frequency (TF), aka. the number of times a word appears in a document, divided by the total
number of words in that document; the second term is the Inverse Document Frequency (IDF),
computed as the logarithm of the number of the documents in the corpus divided by the number
of documents where the specific term appears.

TF: Term Frequency, which measures how frequently a term occurs in a document. Since
every document is different in length, it is possible that a term would appear much more
times in long documents than shorter ones. Thus, the term frequency is often divided by the
document length (aka. the total number of terms in the document) as a way of
normalization:
Number of times term t appears in a document
T F (t) = .
Total number of terms in the document

IDF: Inverse Document Frequency, which measures how important a term is. While
computing TF, all terms are considered equally important. However it is known that certain
terms, such as "is", "of", and "that", may appear a lot of times but have little importance.
Thus we need to weigh down the frequent terms while scale up the rare ones, by computing
the following:
Total number of documents
I DF (t) = log
e
Number of documents with term t in it
. for numerical stabiltiy we will be changing
this formula little bit I DF (t) = log
e
Total number of documents

Number of documents with term t in it+1


.

Example

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 1/10


10/31/22, 12:54 AM Assignment_3_Instructions

Consider a document containing 100 words wherein the word cat appears 3 times. The term
frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents
and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e.,
idf) is calculated as log(10,000,000 / 1,000) = 4. Thus, the Tf-idf weight is the product of these
quantities: 0.03 * 4 = 0.12. </p> </font>

Task-1
1. Build a TFIDF Vectorizer & compare its results with Sklearn:

As a part of this task you will be implementing TFIDF vectorizer on a collection of text
documents.

You should compare the results of your own implementation of TFIDF vectorizer with that
of sklearns implemenation TFIDF vectorizer.

Sklearn does few more tweaks in the implementation of its version of TFIDF vectorizer, so
to replicate the exact results you would need to add following things to your custom
implementation of tfidf vectorizer:
1. Sklearn has its vocabulary generated from idf sroted in alphabetical order
2. Sklearn formula of idf is different from the standard textbook formula. Here the
constant "1" is added to the numerator and denominator of the idf as if an extra
document was seen containing every term in the collection exactly once, which
1 + Total number of documents in collection
prevents zero divisions. I DF (t) = 1 + log
e
1+Number of documents with term t in it
.

3. Sklearn applies L2-normalization on its output matrix.


4. The final output of sklearn tfidf vectorizer is a sparse matrix.

Steps to approach this task:


1. You would have to write both fit and transform methods for your custom
implementation of tfidf vectorizer.
2. Print out the alphabetically sorted voacb after you fit your data and check if its the
same as that of the feature names from sklearn tfidf vectorizer.
3. Print out the idf values from your implementation and check if its the same as that of
sklearns tfidf vectorizer idf values.
4. Once you get your voacb and idf values to be same as that of sklearns implementation
of tfidf vectorizer, proceed to the below steps.
5. Make sure the output of your implementation is a sparse matrix. Before generating the
final output, you need to normalize your sparse matrix using L2 normalization. You can
refer to this link https://scikit-
learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
6. After completing the above steps, print the output of your custom implementation and
compare it with sklearns implementation of tfidf vectorizer.
7. To check the output of a single document in your collection of documents, you can
convert the sparse matrix related only to that document into dense matrix and print it.

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 2/10


10/31/22, 12:54 AM Assignment_3_Instructions

Note-1: All the necessary outputs of sklearns tfidf vectorizer have been provided as reference in
this notebook, you can compare your outputs as mentioned in the above steps, with these
outputs.
Note-2: The output of your custom implementation and that of sklearns implementation would
match only with the collection of document strings provided to you as reference in this
notebook. It would not match for strings that contain capital letters or punctuations, etc,
because sklearn version of tfidf vectorizer deals with such strings in a different way. To know
further details about how sklearn tfidf vectorizer works with such string, you can always refer to
its official documentation.
Note-3: During this task, it would be helpful for you to debug the code you write with print
statements wherever necessary. But when you are finally submitting the assignment, make sure
your code is readable and try not to print things which are not part of this task.

Corpus
In [ ]:
## SkLearn# Collection of string documents

corpus = [
'this is the first document',
'this document is the second document',
'and this is the third one',
'is this the first document',
]

SkLearn Implementation
In [ ]:
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)
skl_output = vectorizer.transform(corpus)

In [ ]:
# sklearn feature names, they are sorted in alphabetic order by default.

print(vectorizer.get_feature_names())

['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']

In [ ]:
# Here we will print the sklearn tfidf vectorizer idf values after applying the fit
# After using the fit function on the corpus the vocab has 9 words in it, and each h

print(vectorizer.idf_)

[1.91629073 1.22314355 1.51082562 1. 1.91629073 1.91629073


1. 1.91629073 1. ]

In [ ]:
# shape of sklearn tfidf vectorizer output after applying transform method.

skl_output.shape

Out[ ]: (4, 9)

In [ ]:
# sklearn tfidf values for first line of the above corpus.

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 3/10


10/31/22, 12:54 AM Assignment_3_Instructions

# Here the output is a sparse matrix

print(skl_output[0])

(0, 8) 0.38408524091481483
(0, 6) 0.38408524091481483
(0, 3) 0.38408524091481483
(0, 2) 0.5802858236844359
(0, 1) 0.46979138557992045

In [ ]:
# sklearn tfidf values for first line of the above corpus.
# To understand the output better, here we are converting the sparse output matrix t
# Notice that this output is normalized using L2 normalization. sklearn does this by

print(skl_output[0].toarray())

[[0. 0.46979139 0.58028582 0.38408524 0. 0.


0.38408524 0. 0.38408524]]

Your custom implementation


In [ ]:
# Write your code here.
# Make sure its well documented and readble with appropriate comments.
# Compare your results with the above sklearn tfidf vectorizer
# You are not supposed to use any other library apart from the ones given below

from collections import Counter


from tqdm import tqdm
from scipy.sparse import csr_matrix
import math
import operator
from sklearn.preprocessing import normalize
import numpy
import pandas as pd

In [ ]:
corpus = [
'this is the first document',
'this document is the second document',
'and this is the third one',
'is this the first document',
]

In [ ]:
def fit(data1):
unq_words = set()
if isinstance(data1, (list)):
for row in data1:
for wrd in row.split(" "):
if len(wrd) < 2:
continue
unq_words.add(wrd)
unq_words = sorted(list(unq_words))
worde = list(enumerate(unq_words))
vocab_dict = {}
for i in range(len(worde)):
vocab_dict[worde[i][1]] = worde[i][0]
return vocab_dict
else:
print("pass list of sentence")

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 4/10


10/31/22, 12:54 AM Assignment_3_Instructions

In [ ]: vocab = fit(corpus)
print(vocab)

{'and': 0, 'document': 1, 'first': 2, 'is': 3, 'one': 4, 'second': 5, 'the': 6, 'thi


rd': 7, 'this': 8}
The obtained vocab is same as get_feature_names

In [ ]:
import math
corpus = [
'this is the first document',
'this document is the second document',
'and this is the third one',
'is this the first document',
]
def idf(x):
idf_val = {}
count_dict = {}
for w in list(vocab.keys()):
count_d = 0
for i in x:
if w in i.split():
count_d = count_d+1
count_dict[w] = count_d
#print(count_dict[w])
idf_val[w] = 1 + math.log((1+len(x))/(1+count_dict[w]))
#print(count_dict)
return idf_val

print(idf(corpus))

def tf(y):
tf_val = {}
for row in y:
wrd_frq = dict(Counter(row.split()))
for w in list(vocab.keys()):
if w in wrd_frq.keys():
tf = wrd_frq[w]/(sum(wrd_frq.values()))
tf_val[w] = tf
#print(tf_val)
return tf_val

#print(tf(corpus))

{'and': 1.916290731874155, 'document': 1.2231435513142097, 'first': 1.51082562376599


07, 'is': 1.0, 'one': 1.916290731874155, 'second': 1.916290731874155, 'the': 1.0, 't
hird': 1.916290731874155, 'this': 1.0}
idf values are same as obtained from vectorizer.idf_

In [ ]:
def transform(data1, vocab):
srow = []
scolumn = []
svalue = []
if isinstance(data1, list):
for indx,row in enumerate(tqdm(data1)):
wrd_frq = dict(Counter(row.split()))
for wrd,frq in wrd_frq.items():
if len(wrd)<2:
continue
col_indx = vocab.get(wrd, -2)

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 5/10


10/31/22, 12:54 AM Assignment_3_Instructions

if col_indx != -2:
tf_idf = tf(corpus)[wrd]*idf(corpus)[wrd]
srow.append(indx)
scolumn.append(col_indx)
svalue.append(tf_idf)

return csr_matrix((svalue, (srow, scolumn)), shape=(len(data1), len(vocab)))


else:
print("need to pass list of strings")

In [ ]:
#strings = ["the method of lagrange multipliers is the economists workhorse for solv
# "the technique is a centerpiece of economic theory but unfortunately its
vocab = fit(corpus)
#print(list(vocab.keys()))
m = transform(corpus, vocab)
#print(m)
print(normalize(m, norm='l2')[0])
#print(m.toarray())
print(normalize(m, norm='l2')[0].toarray())

100%|██████████| 4/4 [00:00<00:00, 444.59it/s]


(0, 1) 0.4697913855799205
(0, 2) 0.580285823684436
(0, 3) 0.3840852409148149
(0, 6) 0.3840852409148149
(0, 8) 0.3840852409148149
[[0. 0.46979139 0.58028582 0.38408524 0. 0.
0.38408524 0. 0.38408524]]

the values are same as obtained from skl_output[0].toarray()

Task-2
2. Implement max features functionality:

As a part of this task you have to modify your fit and transform functions so that your vocab
will contain only 50 terms with top idf scores.

This task is similar to your previous task, just that here your vocabulary is limited to only
top 50 features names based on their idf values. Basically your output will have exactly 50
columns and the number of rows will depend on the number of documents you have in your
corpus.

Here you will be give a pickle file, with file name cleaned_strings. You would have to load
the corpus from this file and use it as input to your tfidf vectorizer.

Steps to approach this task:


1. You would have to write both fit and transform methods for your custom
implementation of tfidf vectorizer, just like in the previous task. Additionally, here you
have to limit the number of features generated to 50 as described above.
2. Now sort your vocab based in descending order of idf values and print out the words in
the sorted voacb after you fit your data. Here you should be getting only 50 terms in

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 6/10


10/31/22, 12:54 AM Assignment_3_Instructions

your vocab. And make sure to print idf values for each term in your vocab.
3. Make sure the output of your implementation is a sparse matrix. Before generating the
final output, you need to normalize your sparse matrix using L2 normalization. You can
refer to this link https://scikit-
learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
4. Now check the output of a single document in your collection of documents, you can
convert the sparse matrix related only to that document into dense matrix and print it.
And this dense matrix should contain 1 row and 50 columns.

In [ ]:
# Below is the code to load the cleaned_strings pickle file provided
# Here corpus is of list type

import pickle
with open(r'C:\Users\HP\OneDrive\Applied ai\Module 3\assignment\Implementing TFIDF v
corpus1 = pickle.load(f)

# printing the length of the corpus loaded


print("Number of documents in corpus = ",len(corpus1))

Number of documents in corpus = 746

In [ ]:
# Write your code here.
# Try not to hardcode any values.
# Make sure its well documented and readble with appropriate comments.

In [ ]:
def fit(data1):
unq_words = set()
if isinstance(data1, (list)):
for row in data1:
for wrd in row.split(" "):
if len(wrd) < 2:
continue
unq_words.add(wrd)
unq_words = sorted(list(unq_words))
worde = list(enumerate(unq_words))
vocab_dict = {}
for i in range(len(worde)):
vocab_dict[worde[i][1]] = worde[i][0]
return vocab_dict
else:
print("pass list of sentence")

In [ ]:
vocab = fit(corpus1)
#print(vocab)

In [ ]:
def idf(x):
idf_val = {}
count_dict = {}
for w in list(vocab.keys()):
count_d = 0
for i in x:
if w in i.split():
count_d = count_d+1
count_dict[w] = count_d
file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 7/10
10/31/22, 12:54 AM Assignment_3_Instructions

#print(count_dict[w])
idf_val[w] = 1 + math.log((1+len(x))/(1+count_dict[w]))
#print(count_dict)
return idf_val

#print(idf(corpus1))

In [ ]:
def idf_50(k):
vocab_50_sort = {}
idf_sort_50 = {}
d_initial = idf(k)
idf_sort = sorted(d_initial.items(), key = lambda x:x[1], reverse= True)
idf_sort_50lst = idf_sort[0:50]
#print(idf_sort_50lst)
for i in range(len(idf_sort_50lst)):
vocab_50_sort[idf_sort[i][0]] = vocab[idf_sort[i][0]]
idf_sort_50[idf_sort[i][0]] = idf_sort[i][1]
return vocab_50_sort, idf_sort_50
vocab_50,idfnew = idf_50(vocab)
print(vocab_50)
print(idfnew)

{'aailiyah': 0, 'abandoned': 1, 'ability': 2, 'abroad': 3, 'absolutely': 4, 'abstrus


e': 5, 'abysmal': 6, 'academy': 7, 'accents': 8, 'accessible': 9, 'acclaimed': 10,
'accolades': 11, 'accurate': 12, 'accurately': 13, 'accused': 14, 'achievement': 15,
'achille': 16, 'ackerman': 17, 'act': 18, 'acted': 19, 'acting': 20, 'action': 21,
'actions': 22, 'actor': 23, 'actors': 24, 'actress': 25, 'actresses': 26, 'actuall
y': 27, 'adams': 28, 'adaptation': 29, 'add': 30, 'added': 31, 'addition': 32, 'admi
ns': 33, 'admiration': 34, 'admitted': 35, 'adorable': 36, 'adrift': 37, 'adventur
e': 38, 'advise': 39, 'aerial': 40, 'aesthetically': 41, 'affected': 42, 'affleck':
43, 'afraid': 44, 'africa': 45, 'afternoon': 46, 'age': 47, 'aged': 48, 'ages': 49}
{'aailiyah': 8.27482599910299, 'abandoned': 8.27482599910299, 'ability': 8.274825999
10299, 'abroad': 8.27482599910299, 'absolutely': 8.27482599910299, 'abstruse': 8.274
82599910299, 'abysmal': 8.27482599910299, 'academy': 8.27482599910299, 'accents': 8.
27482599910299, 'accessible': 8.27482599910299, 'acclaimed': 8.27482599910299, 'acco
lades': 8.27482599910299, 'accurate': 8.27482599910299, 'accurately': 8.274825999102
99, 'accused': 8.27482599910299, 'achievement': 8.27482599910299, 'achille': 8.27482
599910299, 'ackerman': 8.27482599910299, 'act': 8.27482599910299, 'acted': 8.2748259
9910299, 'acting': 8.27482599910299, 'action': 8.27482599910299, 'actions': 8.274825
99910299, 'actor': 8.27482599910299, 'actors': 8.27482599910299, 'actress': 8.274825
99910299, 'actresses': 8.27482599910299, 'actually': 8.27482599910299, 'adams': 8.27
482599910299, 'adaptation': 8.27482599910299, 'add': 8.27482599910299, 'added': 8.27
482599910299, 'addition': 8.27482599910299, 'admins': 8.27482599910299, 'admiratio
n': 8.27482599910299, 'admitted': 8.27482599910299, 'adorable': 8.27482599910299, 'a
drift': 8.27482599910299, 'adventure': 8.27482599910299, 'advise': 8.27482599910299,
'aerial': 8.27482599910299, 'aesthetically': 8.27482599910299, 'affected': 8.2748259
9910299, 'affleck': 8.27482599910299, 'afraid': 8.27482599910299, 'africa': 8.274825
99910299, 'afternoon': 8.27482599910299, 'age': 8.27482599910299, 'aged': 8.27482599
910299, 'ages': 8.27482599910299}

In [ ]:
def tf(y,vocab_50):
tf_val = {}
for row in y:
wrd_frq = dict(Counter(row.split()))
for w in list(vocab_50.keys()):
if w in wrd_frq.keys():
tf = wrd_frq[w]/(sum(wrd_frq.values()))
tf_val[w] = tf
#print(tf_val)
return tf_val

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 8/10


10/31/22, 12:54 AM Assignment_3_Instructions

tfnew = tf(corpus1,vocab_50)
print(tfnew)

{'acting': 0.1111111111111111, 'adorable': 0.09090909090909091, 'absolutely': 0.1428


5714285714285, 'actor': 0.14285714285714285, 'actors': 0.1, 'actually': 0.1428571428
5714285, 'addition': 0.09090909090909091, 'acted': 0.2, 'accused': 0.055555555555555
55, 'afraid': 0.14285714285714285, 'advise': 0.5, 'affleck': 0.05555555555555555, 'a
ge': 0.1, 'abstruse': 0.0014326647564469914, 'accurately': 0.0014326647564469914, 'a
ction': 0.058823529411764705, 'actress': 0.25, 'admiration': 0.0014326647564469914,
'adrift': 0.0014326647564469914, 'aerial': 0.25, 'actresses': 0.1111111111111111, 'a
ctions': 0.1, 'adventure': 0.3333333333333333, 'affected': 0.043478260869565216, 'ab
road': 0.06666666666666667, 'admitted': 0.1111111111111111, 'admins': 0.166666666666
66666, 'abandoned': 0.03225806451612903, 'afternoon': 0.125, 'aged': 0.0769230769230
7693, 'add': 0.08333333333333333, 'accolades': 0.07692307692307693, 'abysmal': 0.002
4271844660194173, 'accents': 0.3333333333333333, 'africa': 0.0625, 'academy': 0.0714
2857142857142, 'adaptation': 0.0625, 'accessible': 0.2, 'achievement': 0.00242718446
60194173, 'aailiyah': 0.09090909090909091, 'ability': 0.16666666666666666, 'adams':
0.047619047619047616, 'achille': 0.06666666666666667, 'acclaimed': 0.0625, 'ackerma
n': 0.0024271844660194173, 'act': 0.1111111111111111, 'ages': 0.0024271844660194173,
'aesthetically': 0.07692307692307693, 'accurate': 0.1111111111111111, 'added': 0.111
1111111111111}

In [ ]:
def transform(data1, vocab_50):
srow = []
scolumn = []
svalue = []
if isinstance(data1, (list,)):
for indx,row in enumerate(tqdm(data1)):
for wrd,frq in idfnew.items():
if len(wrd)<2:
continue
col_indx = vocab_50.get(wrd, -2)

if col_indx != -2:
tf_idf = tfnew[wrd]*idfnew[wrd]
srow.append(indx)
scolumn.append(col_indx)
svalue.append(tf_idf)

return normalize((csr_matrix((svalue, (srow, scolumn)), shape=(len(data1), l


else:
print("need to pass list of strings")

In [ ]:
vocab = fit(corpus1)

n = transform(corpus1, vocab_50)

print(n[0].toarray())
print(n[0].toarray().shape)
print(n.shape)
#print(n)

100%|██████████| 746/746 [00:00<00:00, 10367.18it/s]


[[0.0898299 0.03187512 0.16468814 0.06587526 0.14116127 0.00141566
0.00239837 0.07058063 0.32937629 0.19762577 0.06175805 0.07600991
0.1097921 0.00141566 0.05489605 0.00239837 0.06587526 0.00239837
0.1097921 0.19762577 0.1097921 0.05812523 0.09881289 0.14116127
0.09881289 0.24703222 0.1097921 0.14116127 0.04705376 0.06175805
0.08234407 0.1097921 0.0898299 0.16468814 0.00141566 0.1097921
0.0898299 0.00141566 0.32937629 0.49406443 0.24703222 0.07600991
0.04296212 0.05489605 0.14116127 0.06175805 0.12351611 0.09881289
0.07600991 0.00239837]]
file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 9/10
10/31/22, 12:54 AM Assignment_3_Instructions

(1, 50)
(746, 50)

In [ ]:

file:///C:/Users/HP/OneDrive/Applied ai/Module 3/assignment/Implementing TFIDF vectorizer/Assignment_3_Instructions.html 10/10

You might also like