Traditional Word Embedding
Pre Word Embedding era – Frequency or Statistical based word Embedding approaches
Recent Word Embedding era – Prediction based word Embedding approaches
4. Pre Word Embedding Era Techniques
Document
A document is a single text data point. For Example, a review of a particular product by the user.
Corpus
It is a collection of all the documents present in our dataset.
Feature
Every unique word in the corpus is considered as a feature.
Sentences:
Dog hates a cat. It loves to go out and play.
Cat loves to play with a ball.
We can build a corpus from the above 2 documents just by combining them.
Corpus = “Dog hates a cat. It loves to go out and play. Cat loves to play with a ball.”
Features: [‘and’, ‘ball’, ‘cat’, ‘dog’, ‘go’, ‘hates’, ‘it’, ‘loves’, ‘out’, ‘play’, ‘to’, ‘with’]
We will call this list our feature vector. Note that we removed ‘a’ because it is a single-character word.
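A minimal sketch of how this feature vector could be built in plain Python follows; the tokenization rules used here (lowercasing, stripping punctuation, and dropping single-character words such as ‘a’) are assumptions chosen to reproduce the list above.

```python
import re

corpus = "Dog hates a cat. It loves to go out and play. Cat loves to play with a ball."

# Lowercase, strip punctuation, and drop single-character tokens such as 'a'
tokens = re.findall(r"[a-z]+", corpus.lower())
features = sorted({t for t in tokens if len(t) > 1})

print(features)
# ['and', 'ball', 'cat', 'dog', 'go', 'hates', 'it', 'loves', 'out', 'play', 'to', 'with']
```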
In short, to build any model in machine learning or deep learning, the final-level data has to be in numerical form, because models do not understand text or image data directly the way humans do.
Vectorization, or word embedding, is therefore the process of converting text data into numerical vectors. Those vectors are later used to build various machine learning models. In this manner, we are extracting features from the text with the aim of building natural language processing models. We will discuss the different ways of converting text data into numerical vectors later in this article.
A Word Embedding generally tries to map a word using a dictionary to its vector form.
Homework Problem
Do you think that Word Embedding and Text Vectorization are the same thing, or is there a difference between these two techniques? If they are the same, why? If not, what exactly is the difference between them?
Therefore, the vector representation in this format, according to the above dictionary, is:
2. One-hot encoding does not capture the relationships between different words. Therefore, it does not convey information
about the context.
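A minimal sketch of one-hot encoding against the feature vector built earlier; the helper function name and the example word ‘cat’ are assumptions used only for illustration.

```python
features = ['and', 'ball', 'cat', 'dog', 'go', 'hates', 'it', 'loves',
            'out', 'play', 'to', 'with']

def one_hot(word, vocabulary):
    """Return a vector with a single 1 at the word's index in the vocabulary."""
    vector = [0] * len(vocabulary)
    vector[vocabulary.index(word)] = 1
    return vector

print(one_hot('cat', features))
# [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```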
Count Vectorizer
1. It is one of the simplest ways of doing text vectorization.
2. It creates a document term matrix, which is a set of dummy variables that indicates if a particular word appears in the
document.
3. Count Vectorizer fits and learns the word vocabulary and creates a document-term matrix in which each cell denotes the frequency of a word in a particular document (also known as term frequency), and each column is dedicated to a word in the corpus.
Matrix Formulation
Consider a Corpus C containing D documents {d1, d2, ..., dD} from which we extract N unique tokens. Now, the dictionary consists of these N tokens, and the size of the Count Vector matrix M formed is D × N. Each row in the matrix M describes the frequency of the tokens present in document d(i).
The dictionary created contains the list of unique tokens (words) present in the corpus.
For Example, for the above matrix formed, let’s see the word vectors generated.
We can use either the frequency (the number of times a word has appeared in the document) or
the presence (has the word appeared in the document?) as the entry in the count matrix M.
In general, the frequency method is preferred over the latter.
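A minimal sketch using scikit-learn’s CountVectorizer on the two documents from the earlier example; scikit-learn’s default tokenizer also drops single-character tokens such as ‘a’, so the learned vocabulary matches the feature vector above (output shown for a recent scikit-learn version).

```python
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "Dog hates a cat. It loves to go out and play.",
    "Cat loves to play with a ball.",
]

# fit_transform() learns the vocabulary and builds the D x N document-term matrix
vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(documents)

print(vectorizer.get_feature_names_out())
# ['and' 'ball' 'cat' 'dog' 'go' 'hates' 'it' 'loves' 'out' 'play' 'to' 'with']
print(matrix.toarray())
# [[1 0 1 1 1 1 1 1 1 1 1 0]
#  [0 1 1 0 0 0 0 1 0 1 1 1]]
```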
Bag of Words involves two steps:
1. Tokenization
2. Vectors Creation
Tokenization
It is the process of dividing each sentence into words or smaller parts, which are known as tokens. After the completion of tokenization, we extract all the unique words from the corpus. Here, the corpus represents the tokens we get from all the documents, which are used for the bag-of-words creation.
Let’s get a better understanding with the help of the following example:
These three example sentences, combined, form a corpus, and our first step is to perform tokenization. Before tokenization, we normalize the text by converting all sentences to either lowercase or uppercase letters; here, we will convert all the words in the sentences to lowercase.
After dividing the sentences into words and generating a list of all unique words in alphabetical order, we get the following output from the tokenization step:
Unique words: [“and”, “affordable.”, “delicious.”, “is”, “not”, “burger”, “tasty”, “this”, “very”]
Next, we create a vector for each sentence using the frequency of words. The resulting matrix is called a sparse matrix. Below is the sparse matrix of the example sentences.
Therefore, as discussed, each word is represented as an array whose size equals the total number of features. All the values of this array are zero except for one position, and that position represents the word’s address inside the feature vector.
The final BoW representation of a document is the sum of its words’ feature vectors.
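As a minimal sketch of these two steps, the snippet below tokenizes three hypothetical sentences (stand-ins for the example sentences above, so the exact wording and vocabulary may differ) and builds the count vectors with Python’s collections.Counter.

```python
from collections import Counter

# Hypothetical example sentences used only to illustrate the two steps
sentences = [
    "This burger is very tasty and affordable",
    "This burger is not tasty and is affordable",
    "This burger is delicious",
]

# Step 1: tokenization – normalize to lowercase and split into words
tokenized = [sentence.lower().split() for sentence in sentences]
vocabulary = sorted({word for tokens in tokenized for word in tokens})

# Step 2: vectors creation – one frequency count per vocabulary word, per sentence
vectors = [[Counter(tokens)[word] for word in vocabulary] for tokens in tokenized]

print(vocabulary)
# ['affordable', 'and', 'burger', 'delicious', 'is', 'not', 'tasty', 'this', 'very']
print(vectors)
# [[1, 1, 1, 0, 1, 0, 1, 1, 1], [1, 1, 1, 0, 2, 1, 1, 1, 0], [0, 0, 1, 1, 1, 0, 0, 1, 0]]
```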
2. It does not allow us to draw useful inferences for downstream NLP tasks.
Homework Problem
Do you think there is some kind of relationship between the two techniques we just covered – Count Vectorizer and Bag of Words? Why or why not?
N-grams Vectorization
1. Similar to the count vectorization technique, in the N-Gram method, a document term matrix is generated, and each cell
represents the count.
4. N-grams consider sequences of n consecutive words in the text, where n can be 1, 2, 3, and so on – a 1-gram for a single token, a 2-gram for a token pair, etc. Unlike BoW, this maintains word order.
For example:
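A minimal sketch of bigram (2-gram) extraction with scikit-learn’s CountVectorizer; the sentence used here is a hypothetical placeholder, and scikit-learn’s default tokenizer drops the single-character word ‘a’ before forming the n-grams.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical sentence used only to illustrate bigram extraction
documents = ["Cat loves to play with a ball"]

# ngram_range=(2, 2) keeps only 2-grams; (1, 2) would keep unigrams and bigrams
vectorizer = CountVectorizer(ngram_range=(2, 2))
matrix = vectorizer.fit_transform(documents)

print(vectorizer.get_feature_names_out())
# ['cat loves' 'loves to' 'play with' 'to play' 'with ball']
print(matrix.toarray())
# [[1 1 1 1 1]]
```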
Disadvantages of N-Grams
1. It has too many features.
2. Due to too many features, the feature set becomes too sparse and is computationally expensive.
TF-IDF Vectorization
As we discussed in the above techniques, the BoW method is simple and works well, but the problem is that it treats all words equally. As a result, it cannot distinguish very common words from rare words. To solve this problem, TF-IDF comes into the picture!
Term frequency-inverse document frequency (TF-IDF) gives a measure that takes the importance of a word into consideration depending on how frequently it occurs in a document and in the corpus.
Term frequency (tf) is the number of times a word (x) occurs in a particular document (y) divided by the total number of words in that document.
tf(‘word’) = (number of times ‘word’ appears in document d) / (total number of words in document d)
For the above sentence, the term frequency value for word cat will be: tf(‘cat’) = 1 / 6
Note: Sentence “Cat loves to play with a ball” has 7 total words but the word ‘a’ has been ignored.
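A minimal sketch of this calculation in Python; dropping the single-character word ‘a’, as noted above, leaves six words in the document.

```python
# Term frequency: occurrences of the word divided by the total words in the document
document = "Cat loves to play with a ball"
words = [w for w in document.lower().split() if len(w) > 1]  # drops the single-character 'a'

def term_frequency(word, words):
    return words.count(word) / len(words)

print(term_frequency("cat", words))  # 0.1666... i.e. 1/6
```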
Inverse document frequency (idf) is the logarithmic ratio of the total number of documents to the number of documents containing a particular word:
idf(‘word’) = log(total number of documents / number of documents containing ‘word’)
This term of the equation helps in pulling out the rare words, since the value of the TF-IDF product is maximum when both terms are large. What does that mean? If a word appears in many documents, then its document frequency df in the denominator increases, reducing the value of the idf term (refer to the idf formula above).
For Example, In any corpus, few words like ‘is’ or ‘and’ are very common, and most likely, they will be present in almost every
document.
Let’s say the word ‘is’ is present in all the documents in a corpus of 1000 documents. The idf for that would be:
idf(‘is’) = log(1000 / 1000) = 0
A word that appears in every document therefore receives zero weight from this term.
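A minimal sketch of that calculation, assuming the natural logarithm (implementations differ in log base and smoothing, but log(1) is 0 in any base):

```python
import math

total_documents = 1000
documents_with_word = 1000  # 'is' is assumed to appear in every document

idf_is = math.log(total_documents / documents_with_word)
print(idf_is)  # 0.0 – a word present in every document carries no weight
```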
2. The difference in the TF-IDF method is that each cell doesn’t contain the raw term frequency, but a weight value that signifies how important a word is for an individual text message or document.
3. This method is based on the frequency method, but it differs from count vectorization in the sense that it takes into consideration not just the occurrence of a word in a single document but in the entire corpus.
4. TF-IDF gives more weight to less frequently occurring events and less weight to common, expected events. So, it penalizes words that appear frequently across documents, such as “the” and “is”, but assigns greater weight to less frequent or rare words.
5. The product TF × IDF of a word indicates how often the token is found in the document and how unique the token is to the entire corpus of documents.
Now, a Python implementation of the example discussed in the BoW section is given below:
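A minimal sketch of such an implementation, using scikit-learn’s TfidfVectorizer on the same hypothetical sentences as the Bag of Words sketch above; the exact weights depend on scikit-learn’s default smoothing and L2 normalization.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Same hypothetical sentences as in the Bag of Words sketch above
documents = [
    "This burger is very tasty and affordable",
    "This burger is not tasty and is affordable",
    "This burger is delicious",
]

# Each cell holds a TF-IDF weight instead of a raw count; by default scikit-learn
# smooths the document frequencies and L2-normalizes every row
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(documents)

print(vectorizer.get_feature_names_out())
print(matrix.toarray().round(2))
```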
Homework Problem
Do you think there can be some variations of the TF-IDF technique based on some criteria? If yes, then mention those
variations, and if no, then why?