Natural Language Processing Manual
Introduction
According to industry estimates, only 21% of the available data is present in structured form. Data is being generated as we speak: as we tweet, as we send messages, and in various other activities. The majority of this data exists in textual form, which is highly unstructured in nature.

Overview
Learn various techniques for implementing NLP, including parsing and text processing.

A few notable examples of unstructured text include tweets and posts on social media, user-to-user chat conversations, news, blogs and articles, product or service reviews, and patient records in the healthcare sector. More recent sources include chatbots and other voice-driven bots.
Despite the high dimensionality of this data, the information present in it is not directly accessible unless it is processed (read and understood) manually or analyzed by an automated system.
In order to produce significant and actionable insights from text data, it is important to get acquainted with the techniques and principles of Natural Language Processing (NLP).
So, if you plan to create chatbots this year, or you want to use the power of unstructured text, this guide is the right starting point. It unearths the concepts of natural language processing, its techniques and implementation. The aim of the article is to teach the concepts of natural language processing and to apply them on a real data set. Moreover, we also have a video-based course on NLP with 3 real-life projects.
Table of Contents
1. Introduction to NLP
2. Text Preprocessing
   - Noise Removal
   - Lexicon Normalization
     - Lemmatization
     - Stemming
   - Object Standardization
3. Text to Features
   - Syntactical Parsing
     - Dependency Grammar
   - Entity Parsing
     - Phrase Detection
     - Topic Modelling
     - N-Grams
   - Statistical Features
     - TF-IDF
     - Readability Features
   - Word Embeddings
4. Important NLP Tasks
   - Text Matching
     - Levenshtein Distance
     - Phonetic Matching
   - Coreference Resolution
   - Other Problems
NLP is a branch of data science that consists of systematic processes for analyzing, understanding, and deriving information from text data in a smart and efficient manner. By utilizing NLP and its components, one can organize massive chunks of text data, perform numerous automated tasks and solve a wide range of problems such as automatic summarization, machine translation, and named entity recognition.
Before moving further, I would like to explain some terms that are used in the article:
Download NLTK data: run the Python shell (in a terminal) and enter the following code:
import nltk
nltk.download()
Follow the instructions on screen and download the desired package or collection. Other libraries can be installed directly using pip.

2. Text Preprocessing
Since text is the most unstructured form of all the available data, various types of noise are present in it and the data is not readily analyzable without pre-processing. The entire process of cleaning and standardizing text, making it noise-free and ready for analysis, is known as text preprocessing. It is predominantly comprised of three steps:
Noise Removal
Lexicon Normalization
Object Standardization
2.1 Noise Removal
Any piece of text which is not relevant to the context of the data can be specified as noise. For example: language stopwords (commonly used words of a language – is, am, the, of, in etc), URLs or links, social media entities (mentions, hashtags), punctuation and industry-specific words. This step deals with the removal of all types of noisy entities present in the text.
A general approach for noise removal is to prepare a dictionary of noisy entities and iterate the text object token by token (or word by word), eliminating those tokens which are present in the noise dictionary.
Python Code:
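A minimal sketch of the dictionary-lookup approach described above (the noise list here is only illustrative; a real one would hold stopwords, URLs, mentions and so on):

```python
# illustrative noise dictionary; extend with stopwords, URLs, hashtags etc.
noise_list = ["is", "a", "this", "the"]

def remove_noise(input_text):
    """Drop every token that appears in the noise dictionary."""
    words = input_text.split()
    noise_free_words = [word for word in words if word not in noise_list]
    return " ".join(noise_free_words)

print(remove_noise("this is a sample text"))  # sample text
```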
Another approach is to use regular expressions when dealing with special patterns of noise. We have explained regular expressions in detail in one of our previous articles. The following code removes hashtags from input text using a regex pattern:

```
import re

def remove_regex(input_text, regex_pattern):
    # replace every match of the pattern with an empty string
    return re.sub(regex_pattern, '', input_text)

regex_pattern = r"#[\w]*"
print(remove_regex("remove this #hashtag from analytics vidhya", regex_pattern))
>>> "remove this  from analytics vidhya"
```
2.2 Lexicon Normalization
Another type of textual noise is the multiple representations exhibited by a single word. For example, “play”, “player”, “played”, “plays” and “playing” are different variations of the word “play”. Though they look different, contextually they are similar. This step converts all the disparities of a word into their normalized form (also known as the lemma). Normalization is a pivotal step for feature engineering with text, as it converts high-dimensional features (N different features) into a low-dimensional space (1 feature), which is an ideal ask for any ML model.
The most common lexicon normalization practices are:
Stemming: Stemming is a rudimentary rule-based process of stripping suffixes (“ing”, “ly”, “es”, “s” etc) from a word.
Lemmatization: Lemmatization, on the other hand, is an organized, step-by-step procedure of obtaining the root form of the word. It makes use of vocabulary (dictionary importance of words) and morphological analysis (word structure and grammar relations).
Below is sample code that performs lemmatization and stemming using Python’s popular library, NLTK.
```
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer

lem = WordNetLemmatizer()
stem = PorterStemmer()

word = "multiplying"
lem.lemmatize(word, "v")
>> "multiply"
stem.stem(word)
>> "multipli"
```
2.3 Object Standardization
Text data often contains words or phrases which are not present in any standard lexical dictionary. These pieces are not recognized by search engines and models.
Some examples are acronyms, hashtags with attached words, and colloquial slang. With the help of regular expressions and manually prepared data dictionaries, this type of noise can be fixed. The code below uses a dictionary lookup method to replace social media slang in a text.
```
lookup_dict = {'rt': 'Retweet', 'dm': 'direct message',
               'awsm': 'awesome', 'luv': 'love'}  # extend with more slang entries

def lookup_words(input_text):
    words = input_text.split()
    new_words = []
    for word in words:
        # replace a word if a standard form exists in the dictionary
        if word.lower() in lookup_dict:
            word = lookup_dict[word.lower()]
        new_words.append(word)
    new_text = " ".join(new_words)
    return new_text

print(lookup_words("RT this is a retweeted tweet"))
>>> "Retweet this is a retweeted tweet"
```
Apart from the three steps discussed so far, other types of text preprocessing include encoding-decoding noise, grammar checking, and spelling correction.

3. Text to Features (Feature Engineering on text data)
To analyse preprocessed data, it needs to be converted into features. Depending upon the usage, text features can be constructed using assorted techniques: syntactical parsing, entity / N-gram / word-based features, statistical features, and word embeddings.

3.1 Syntactical Parsing
Syntactical parsing involves the analysis of words in the sentence for grammar and their arrangement in a manner that shows the relationships among the words. Dependency grammar and part-of-speech tags are the important attributes of text syntactics.
Dependency Trees – Sentences are composed of words sewn together. The relationships among the words in a sentence are determined by the basic dependency grammar. Dependency grammar is a class of syntactic text analysis that deals with (labeled) asymmetrical binary relations between two lexical items (words). Every relation can be represented as a triplet (relation, governor, dependent). For example, consider the sentence “Bills on ports and immigration were submitted by Senator Brownback, Republican of Kansas.” The relationships among the words can be observed in the form of a tree representation, in which “submitted” is the root word of the sentence, linked by two sub-trees (subject and object sub-trees). Each sub-tree is itself a dependency tree, with relations such as (“Bills” <-> “ports” <by> “preposition”).
This type of tree, when parsed recursively in a top-down manner, gives grammar relation triplets as output, which can be used as features for many NLP problems such as entity-wise sentiment analysis, actor and entity identification, and text classification. The Python wrapper StanfordCoreNLP (by the Stanford NLP Group) and NLTK dependency grammars can be used to generate dependency trees.
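The triplet form can be made concrete with a small hand-built tree (a toy sketch over the “Bills on ports” fragment, not the output of an actual parser; the relation names are illustrative):

```python
# toy dependency tree: each node is (word, relation-to-its-head, children)
tree = ("submitted", "root", [
    ("Bills", "nsubjpass", [
        ("ports", "prep_on", []),
    ]),
])

def extract_triplets(node, governor=None):
    """Walk the tree top-down, collecting (relation, governor, dependent) triplets."""
    word, relation, children = node
    triplets = [] if governor is None else [(relation, governor, word)]
    for child in children:
        triplets.extend(extract_triplets(child, word))
    return triplets

print(extract_triplets(tree))
# [('nsubjpass', 'submitted', 'Bills'), ('prep_on', 'Bills', 'ports')]
```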
Part-of-speech tagging – Apart from the grammar relations, every word in a sentence is also associated with a part-of-speech (POS) tag (noun, verb, adjective, adverb etc). The POS tags define the usage and function of a word in the sentence. Here is a list of all possible POS tags as defined by the University of Pennsylvania (the Penn Treebank tag set). The following code performs POS-tagging annotation on input text using NLTK (NLTK provides several implementations; the default is the perceptron tagger).
```
from nltk import word_tokenize, pos_tag

text = "I am learning Natural Language Processing on Analytics Vidhya"
tokens = word_tokenize(text)
print(pos_tag(tokens))
>>> [('I', 'PRP'), ('am', 'VBP'), ('learning', 'VBG'), ('Natural', 'NNP'), ('Language', 'NNP'),
('Processing', 'NNP'), ('on', 'IN'), ('Analytics', 'NNP'), ('Vidhya', 'NNP')]
```
POS tagging is used for several important purposes in NLP:
A. Word sense disambiguation: Some words have multiple meanings according to their usage. For example, in the two sentences “book my flight” and “I will read this book”, the word “book” is used in different contexts, and the part-of-speech tag differs between the two cases: in the first sentence “book” is used as a verb, while in the second it is used as a noun. (The Lesk algorithm is commonly used for this kind of word sense disambiguation.)
B. Improving word-based features: A learning model could learn different contexts of a word when words alone are used as features; however, if the part-of-speech tag is linked with them, the context is preserved, making stronger features. For example, for the text “book my flight, I will read this book”:
Tokens – (“book”, 2), (“my”, 1), (“flight”, 1), (“I”, 1), (“will”, 1), (“read”, 1), (“this”, 1)
Tokens with POS – (“book_VB”, 1), (“my_PRP$”, 1), (“flight_NN”, 1), (“I_PRP”, 1), (“will_MD”, 1), (“read_VB”, 1), (“this_DT”, 1), (“book_NN”, 1)
C. Normalization and lemmatization: POS tags are the basis of the lemmatization process, used for converting a word to its base form (lemma).
D. Efficient stopword removal: POS tags are also useful for the efficient removal of stopwords. For example, some tags almost always mark the low-frequency or less important words of a language, such as (IN – “within”, “upon”, “except”), (CD – “one”, “two”, “hundred”) and (MD – “may”, “must” etc).
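A minimal sketch of tag-based stopword filtering (the (word, tag) pairs are hard-coded here for illustration; in practice they would come from a tagger such as nltk.pos_tag):

```python
# hand-tagged tokens for illustration; a real pipeline would use a POS tagger
tagged = [("book", "VB"), ("my", "PRP$"), ("flight", "NN"), ("within", "IN"),
          ("two", "CD"), ("days", "NNS")]

noisy_tags = {"IN", "CD", "MD"}  # prepositions, cardinal numbers, modals

# keep only the words whose tags are not in the low-information set
content_words = [word for word, tag in tagged if tag not in noisy_tags]
print(content_words)  # ['book', 'my', 'flight', 'days']
```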
3.2 Entity Extraction (Entities as features)
Entities are defined as the most important chunks of a sentence: noun phrases, verb phrases or both. Entity detection algorithms are generally ensemble models of rule-based parsing, dictionary lookups, POS tagging and dependency parsing. The applicability of entity detection can be seen in automated chatbots, content analyzers and consumer insights.
Topic modelling and named entity recognition are the two key entity detection methods in NLP.
A. Named Entity Recognition (NER)
The process of detecting named entities such as person names, location names, company names etc from text is called NER. For example:
Sentence – Sergey Brin, the manager of Google Inc., is walking in the streets of New York.
Named Entities – (“person” : “Sergey Brin”), (“org” : “Google Inc.”), (“location” : “New York”)
A typical NER model consists of three blocks:
Noun phrase identification: This step deals with extracting all the noun phrases from a text using dependency parsing and part-of-speech tagging.
Phrase classification: This is the classification step, in which all the extracted noun phrases are classified into their respective categories (locations, names etc). The Google Maps API provides a good path to disambiguate locations. The open databases from DBpedia and Wikipedia can then be used to identify person names or company names. Apart from this, one can curate lookup tables and dictionaries by combining information from different sources.
Entity disambiguation: Sometimes entities are misclassified, so creating a validation layer on top of the results is useful. Knowledge graphs can be exploited for this purpose. Popular knowledge graphs include the Google Knowledge Graph, IBM Watson and Wikipedia.
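The dictionary-lookup idea can be sketched with toy tables (hypothetical, hand-curated entries; a production system would draw on the open databases mentioned above):

```python
# hypothetical hand-curated lookup tables
persons = {"sergey brin"}
organizations = {"google inc."}
locations = {"new york"}

def classify_phrases(phrases):
    """Assign each extracted noun phrase to an entity category via dictionary lookup."""
    entities = []
    for phrase in phrases:
        key = phrase.lower()
        if key in persons:
            entities.append(("person", phrase))
        elif key in organizations:
            entities.append(("org", phrase))
        elif key in locations:
            entities.append(("location", phrase))
    return entities

print(classify_phrases(["Sergey Brin", "Google Inc.", "New York"]))
# [('person', 'Sergey Brin'), ('org', 'Google Inc.'), ('location', 'New York')]
```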
B. Topic Modeling
Topic modeling is a process of automatically identifying the topics present in a text corpus; it derives the hidden patterns among the words in the corpus in an unsupervised manner. Topics are defined as “a repeating pattern of co-occurring terms in a corpus”. A good topic model results in, for example, “health”, “doctor”, “patient”, “hospital” for a topic Healthcare, and “farm”, “crops”, “wheat” for a topic Farming.
Latent Dirichlet Allocation (LDA) is the most popular topic modelling technique. Following is the code to implement topic modeling using LDA in Python; for a detailed explanation of its working, refer to a dedicated LDA tutorial.
```
import gensim
from gensim import corpora

doc1 = "Sugar is bad to consume. My sister likes to have sugar, but not my father."
doc2 = "My father spends a lot of time driving my sister around to dance practice."
doc3 = "Doctors suggest that driving may cause increased stress and blood pressure."
doc_complete = [doc1, doc2, doc3]
doc_clean = [doc.split() for doc in doc_complete]

# Creating the term dictionary of our corpus, where every unique term is assigned an index.
dictionary = corpora.Dictionary(doc_clean)

# Converting the list of documents (corpus) into a Document-Term Matrix using the dictionary prepared above.
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_clean]

# Creating the LDA model object and training it on the Document-Term Matrix.
ldamodel = gensim.models.ldamodel.LdaModel(doc_term_matrix, num_topics=3, id2word=dictionary, passes=50)

# Results
print(ldamodel.print_topics())
```
C. N-Grams as Features
A combination of N words together is called an N-gram. N-grams (N > 1) are generally more informative as features than single words (unigrams), and bigrams (N = 2) are considered the most important features of all. The following code generates the bigrams of a text.

```
def generate_ngrams(text, n):
    words = text.split()
    output = []
    # slide a window of size n over the word list
    for i in range(len(words) - n + 1):
        output.append(words[i:i + n])
    return output

print(generate_ngrams('this is a sample text', 2))
>>> [['this', 'is'], ['is', 'a'], ['a', 'sample'], ['sample', 'text']]
```
3.3 Statistical Features
Text data can also be quantified directly into numbers using several techniques described in this section.
A. Term Frequency – Inverse Document Frequency (TF-IDF)
TF-IDF is a weighted model commonly used for information retrieval problems. It aims to convert text documents into vector models on the basis of the occurrence of words in the documents, without considering their exact ordering. For example, say there is a dataset of N text documents. In any document D, TF and IDF are defined as:
Term Frequency (TF) – TF for a term “t” is defined as the count of the term “t” in a document “D”.
Inverse Document Frequency (IDF) – IDF for a term is defined as the logarithm of the ratio of the total number of documents available in the corpus to the number of documents containing the term “t”.
TF-IDF – The TF-IDF score gives the relative importance of a term in a corpus (list of documents): TFIDF(t, D) = TF(t, D) × log(N / number of documents containing t). Following is the code using Python’s scikit-learn.
```
from sklearn.feature_extraction.text import TfidfVectorizer

obj = TfidfVectorizer()
corpus = ['This is sample document.', 'another random document.', 'third sample document text']
X = obj.fit_transform(corpus)
print(X)
>>>
(0, 1) 0.345205016865
(0, 4) 0.444514311537
...
(2, 1) 0.345205016865
(2, 4) 0.444514311537
```
The model creates a vocabulary dictionary and assigns an index to each word. Each row in the output contains a tuple (i, j) and the tf-idf value of the word at index j in document i.
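The same weighting can be computed by hand from the definitions above (a from-scratch sketch using raw counts and an unsmoothed logarithm; scikit-learn additionally applies smoothing and L2 normalization, so its numbers differ):

```python
import math

def tf_idf(term, doc, corpus):
    """TF(t, D) * log(N / n_t), following the definitions in the text."""
    tf = doc.split().count(term)                       # raw term frequency in D
    n_t = sum(1 for d in corpus if term in d.split())  # documents containing t
    return tf * math.log(len(corpus) / n_t)

corpus = ["this is sample document", "another random document", "third sample document text"]
print(tf_idf("sample", corpus[0], corpus))  # 1 * log(3/2)
```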
B. Count / Density / Readability Features
Count- or density-based features can also be used in models and analysis. These features might seem trivial but show great impact in learning models. Some of these features are: word count, sentence count, punctuation counts and industry-specific word counts. Other types of measures include readability measures such as syllable counts, the SMOG index and Flesch reading ease, which can be computed with libraries such as textstat.
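The count-based features are straightforward to compute; a minimal sketch (the feature names are chosen here for illustration):

```python
def basic_text_features(text):
    """Simple count/density features: word, sentence, and punctuation counts."""
    words = text.split()
    return {
        "word_count": len(words),
        "sentence_count": sum(text.count(ch) for ch in ".!?"),
        "punctuation_count": sum(1 for ch in text if ch in ".,;:!?"),
        "avg_word_length": sum(len(w.strip(".,;:!?")) for w in words) / len(words),
    }

features = basic_text_features("This is a sample. It has two sentences!")
print(features)
# {'word_count': 8, 'sentence_count': 2, 'punctuation_count': 2, 'avg_word_length': 3.75}
```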
3.4 Word Embeddings (text vectors)
Word embedding is the modern way of representing words as vectors. The aim of word embedding is to redefine high-dimensional word features as low-dimensional feature vectors while preserving the contextual similarity in the corpus. Word embeddings are widely used in deep learning models such as convolutional neural networks and recurrent neural networks.
Word2Vec and GloVe are two popular models for creating word embeddings from text. These models take a text corpus as input and produce word vectors as output.
The Word2Vec model is composed of a preprocessing module, a shallow neural network model called Continuous Bag of Words, and another shallow neural network model called Skip-gram. Word2Vec embeddings are widely used as features for other NLP problems. The model first constructs a vocabulary from the training corpus and then learns the word embedding representations. The following code uses gensim to prepare word embeddings as vectors.
```
from gensim.models import Word2Vec

sentences = [['data', 'science'], ['vidhya', 'science', 'data', 'analytics'],
             ['machine', 'learning'], ['deep', 'learning']]

# train the model on your corpus
model = Word2Vec(sentences, min_count=1)

print(model.wv['learning'])  # exact values differ from run to run
>>> array([ 0.00459356  0.00303564 -0.00467622  0.00209638, ...])
```
These vectors can be used as feature vectors for an ML model, to measure text similarity using cosine similarity techniques, for word clustering, and for text classification.

4. Important Tasks of NLP
This section talks about different use cases and problems in the field of natural language processing.

4.1 Text Classification
Text classification is one of the classical problems of NLP. Notable examples include email spam identification, topic classification of news, and sentiment classification. Text classification, in common words, is a technique to systematically classify a text object (document or sentence) into one of a fixed set of categories. It is really helpful when the amount of data is too large, especially for organizing, information filtering, and storage purposes.
A typical natural language classifier consists of two parts: (a) training and (b) prediction. First, the text input is processed and features are created. The machine learning model then learns from these features and is used for prediction on new text.
Here is code that implements a Naive Bayes classifier using the TextBlob library (built on top of NLTK).
```
from textblob.classifiers import NaiveBayesClassifier as NBC
from textblob import TextBlob
training_corpus = [
('I am exhausted of this work.', 'Class_B'),
("I can't cooperate with this", 'Class_B'),
('He is my badest enemy!', 'Class_B'),
('My management is poor.', 'Class_B'),
('I love this burger.', 'Class_A'),
('This is an brilliant place!', 'Class_A'),
('I feel very good about these dates.', 'Class_A'),
('This is my best work.', 'Class_A'),
("What an awesome view", 'Class_A'),
('I do not like this dish', 'Class_B')]
test_corpus = [
("I am not feeling well today.", 'Class_B'),
("I feel brilliant!", 'Class_A'),
('Gary is a friend of mine.', 'Class_A'),
("I can't believe I'm doing this.", 'Class_B'),
('The date was good.', 'Class_A'), ('I do not enjoy my job', 'Class_B')]
model = NBC(training_corpus)
print(model.classify("Their codes are amazing."))
>>> "Class_A"
print(model.classify("I don't like their computer."))
>>> "Class_B"
print(model.accuracy(test_corpus))
>>> 0.83
```
The same task can also be solved with a Support Vector Machine using scikit-learn:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn import svm

# preparing data for the SVM model (using the same training_corpus and
# test_corpus from the Naive Bayes example)
train_data = []
train_labels = []
for row in training_corpus:
    train_data.append(row[0])
    train_labels.append(row[1])

test_data = []
test_labels = []
for row in test_corpus:
    test_data.append(row[0])
    test_labels.append(row[1])

# create tf-idf feature vectors
vectorizer = TfidfVectorizer(min_df=4, max_df=0.9)
train_vectors = vectorizer.fit_transform(train_data)
test_vectors = vectorizer.transform(test_data)

# perform classification with a linear-kernel SVM
model = svm.SVC(kernel='linear')
model.fit(train_vectors, train_labels)
prediction = model.predict(test_vectors)
print(classification_report(test_labels, prediction))
```
Text classification models are heavily dependent upon the quality and quantity of features; when applying any machine learning model, it is always good practice to include more and more training data. Here are some tips that I wrote about improving text classification accuracy.

4.2 Text Matching / Similarity
One of the important areas of NLP is the matching of text objects to find similarities. Important applications of text matching include automatic spelling correction, data de-duplication, and genome analysis. A number of text matching techniques are available depending upon the requirement; this section describes the important techniques in detail.
A. Levenshtein Distance – The Levenshtein distance between two strings is defined as the minimum number of edits needed to transform one string into the other, where the allowed edit operations are insertion, deletion, or substitution of a single character. Following is a memory-efficient implementation:

```
def levenshtein(s1, s2):
    if len(s1) > len(s2):
        s1, s2 = s2, s1
    # keep only one row of the edit-distance matrix in memory
    distances = range(len(s1) + 1)
    for index2, char2 in enumerate(s2):
        new_distances = [index2 + 1]
        for index1, char1 in enumerate(s1):
            if char1 == char2:
                new_distances.append(distances[index1])
            else:
                new_distances.append(1 + min(distances[index1],
                                             distances[index1 + 1],
                                             new_distances[-1]))
        distances = new_distances
    return distances[-1]

print(levenshtein("analyze", "analyse"))
>>> 1
```
B. Phonetic Matching – A phonetic matching algorithm takes a keyword as input (person name, location name etc) and produces a character string that identifies a set of words that are (roughly) phonetically similar. It is very useful for searching large text corpora, correcting spelling errors and matching relevant names. Soundex and Metaphone are two main phonetic algorithms used for this purpose. Python’s Fuzzy module can be used to compute Soundex strings for different words. For example:
```
import fuzzy

soundex = fuzzy.Soundex(4)
print(soundex('ankit'))
>>> "A523"
print(soundex('aunkit'))
>>> "A523"
```
C. Flexible String Matching – A complete text matching system includes different algorithms pipelined together to handle a variety of text variations. Regular expressions are really helpful here as well. Other common techniques include exact string matching, lemmatized matching, and compact matching (which takes care of spaces, punctuation, slang etc).
D. Cosine Similarity – When text is represented in vector notation, a general cosine similarity can be applied to measure the similarity between the vectors. The following code converts texts to vectors (using term frequency) and applies cosine similarity to provide the closeness between two texts.
```
import math
from collections import Counter

def get_cosine(vec1, vec2):
    common = set(vec1.keys()) & set(vec2.keys())
    numerator = sum([vec1[x] * vec2[x] for x in common])

    sum1 = sum([vec1[x] ** 2 for x in vec1.keys()])
    sum2 = sum([vec2[x] ** 2 for x in vec2.keys()])
    denominator = math.sqrt(sum1) * math.sqrt(sum2)

    if not denominator:
        return 0.0
    else:
        return float(numerator) / denominator

def text_to_vector(text):
    words = text.split()
    return Counter(words)

# text1 and text2 are the two input strings to compare
vector1 = text_to_vector(text1)
vector2 = text_to_vector(text2)
cosine = get_cosine(vector1, vector2)
>>> 0.62
```
4.3 Coreference Resolution
Coreference resolution is the process of finding relational links among the words (or phrases) within sentences. Consider the example sentence: “Donald went to John’s office to see the new table. He looked at it for an hour.” Humans can quickly figure out that “he” denotes Donald (and not John), and that “it” denotes the table (and not John’s office). Coreference resolution is the component of NLP that does this job automatically.
4.4 Other Important NLP Problems
Machine Translation – Automatically translate text from one human language to another, taking care of grammar, semantics, information about the real world, etc.
Natural Language Generation and Understanding – Converting information from computer databases or semantic intents into readable human language is called language generation; converting chunks of text into more logical structures that are easier for computer programs to manipulate is called language understanding.
Optical Character Recognition – Given an image representing printed text, determine the corresponding text.
Important libraries for NLP (Python):
Natural Language Toolkit (NLTK) – The complete toolkit for all NLP techniques.
Pattern – A web mining module with tools for NLP and machine learning.
TextBlob – An easy-to-use NLP tools API, built on top of NLTK and Pattern.
PROGRAM: Computing word statistics for a collection of books

import os
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter

# helper reconstructed from context: read a book file into a single string
def read_book(title_path):
    with open(title_path, "r", encoding="utf8") as current_file:
        text = current_file.read()
    return text.replace("\n", " ").replace("\r", " ")

# helper reconstructed from context: count word occurrences,
# ignoring case and basic punctuation
def count_words_fast(text):
    text = text.lower()
    for ch in [".", ",", ";", ":", "'", '"']:
        text = text.replace(ch, "")
    return Counter(text.split())

def word_stats(word_counts):
    # return the number of unique words and their counts
    num_unique = len(word_counts)
    counts = word_counts.values()
    return (num_unique, counts)

book_dir = "./Books"
stats = pd.DataFrame(columns=("language", "author", "title", "length", "unique"))
title_num = 1
for language in os.listdir(book_dir):
    for author in os.listdir(book_dir + "/" + language):
        for title in os.listdir(book_dir + "/" + language + "/" + author):
            inputfile = book_dir + "/" + language + "/" + author + "/" + title
            print(inputfile)
            text = read_book(inputfile)
            (num_unique, counts) = word_stats(count_words_fast(text))
            stats.loc[title_num] = (language, author.capitalize(),
                                    title.replace(".txt", ""), sum(counts), num_unique)
            title_num += 1

plt.plot(stats.length, stats.unique, "bo-", label="books")
plt.legend()
plt.xlabel("Book Length")
plt.ylabel("Number of Unique words")
plt.savefig("fig.pdf")
plt.show()
PROGRAM: Fetching and shuffling a word list from the web

import random
from urllib.request import Request, urlopen

url = "https://svnweb.freebsd.org/csrg/share/dict/words?revision=61569&view=co"
# some servers reject the default Python user agent, so set one explicitly
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
web_byte = urlopen(req).read()
webpage = web_byte.decode("utf-8")
first500 = webpage[:500].split("\n")
random.shuffle(first500)
print(first500)
OUTPUT:
What PRON
is AUX
the DET
weather NOUN
like ADP
today NOUN
? PUNCT
The DET
weather NOUN
is AUX
sunny ADJ
. PUNCT
I PRON
went VERB
to ADP
the DET
store NOUN
, PUNCT
but CCONJ
they PRON
were AUX
closed ADJ
, PUNCT
so CCONJ
I PRON
had VERB
to PART
go VERB
PROGRAM 4: Language Modeling
Language modeling is the way of determining the probability of any sequence of words. Language modeling is used in a wide variety of applications such as speech recognition, spam filtering, etc. In fact, language modeling is the key aim behind the implementation of many state-of-the-art natural language processing models.
Methods of Language Modeling
There are two types of language modeling:
Statistical language modeling: Statistical language modeling, or simply language modeling, is the development of probabilistic models that are able to predict the next word in a sequence given the words that precede it. An example is N-gram language modeling.
Neural language modeling: Neural network methods are achieving better results than classical methods, both as standalone language models and when incorporated into larger models for challenging tasks like speech recognition and machine translation. One way of building a neural language model is through word embeddings.
N-gram
An N-gram can be defined as a contiguous sequence of n items from a given sample of text or speech. The items can be letters, words, or base pairs, according to the application. The N-grams are typically collected from a text or speech corpus (a long text dataset).
N-gram Language Model:
An N-gram language model predicts the probability of a given N-gram within any sequence of words in the language. A good N-gram model can predict the next word in a sentence, i.e. the value of p(w|h), the probability of a word w given the history h of preceding words.
Examples of N-grams are unigrams (“This”, “article”, “is”, “on”, “NLP”) or bigrams (“This article”, “article is”, “is on”, “on NLP”).
Now we will establish how to find the next word in a sentence using an N-gram model. We need to calculate p(w|h), where w is the candidate for the next word. For example, in the sentence above, suppose we want to calculate the probability of the last word being “NLP” given the previous words: p(“NLP” | “This article is on”).
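On a tiny toy corpus, this estimate can be computed directly from counts under a bigram (Markov) assumption (a minimal maximum-likelihood sketch; real models use far larger corpora and smoothing):

```python
from collections import Counter

tokens = "this article is on nlp this article is short".split()
unigrams = Counter(tokens)                  # count(w)
bigrams = Counter(zip(tokens, tokens[1:]))  # count(prev, w)

def p(word, prev):
    """Maximum-likelihood bigram estimate: count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

print(p("nlp", "on"))  # 1.0 -- "on" is always followed by "nlp" here
print(p("on", "is"))   # 0.5 -- "is" is followed by "on" once out of twice
```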
import string
import random
import nltk
from collections import Counter, defaultdict
from nltk import FreqDist
from nltk.util import ngrams
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('reuters')
from nltk.corpus import reuters

# build padded bigram and trigram lists from the Reuters corpus
tokenized_text = []
bigram = []
trigram = []
for sentence in reuters.sents():
    sentence = [word.lower() for word in sentence]
    tokenized_text.append(sentence)
    bigram.extend(list(ngrams(sentence, 2, pad_left=True, pad_right=True)))
    trigram.extend(list(ngrams(sentence, 3, pad_left=True, pad_right=True)))

# index trigram counts by their first two words
freq_tri = FreqDist(trigram)
d = defaultdict(Counter)
for a, b, c in freq_tri:
    if a is not None and b is not None and c is not None:
        d[a, b][c] += freq_tri[a, b, c]

# extend a two-word prefix one word at a time, sampling the next
# word in proportion to the trigram counts
prefix = ("he", "said")
s = " ".join(prefix)
print(s)
for _ in range(18):
    suffix = random.choice(list(d[prefix].elements()))
    s = s + " " + suffix
    print(s)
    prefix = (prefix[1], suffix)
OUTPUT:
he said
he said kotc
he said kotc made
he said kotc made profits
he said kotc made profits of
he said kotc made profits of 265
he said kotc made profits of 265 ,
he said kotc made profits of 265 , 457
he said kotc made profits of 265 , 457 vs
he said kotc made profits of 265 , 457 vs loss
he said kotc made profits of 265 , 457 vs loss eight
he said kotc made profits of 265 , 457 vs loss eight cts
he said kotc made profits of 265 , 457 vs loss eight cts net
he said kotc made profits of 265 , 457 vs loss eight cts net loss
he said kotc made profits of 265 , 457 vs loss eight cts net loss 343
he said kotc made profits of 265 , 457 vs loss eight cts net loss 343 ,
he said kotc made profits of 265 , 457 vs loss eight cts net loss 343 , 266
he said kotc made profits of 265 , 457 vs loss eight cts net loss 343 , 266 ,
he said kotc made profits of 265 , 457 vs loss eight cts net loss 343 , 266 , 000
he said kotc made profits of 265 , 457 vs loss eight cts net loss 343 , 266 , 000 shares
PROGRAM 5: N-Gram Smoothing
An N-gram is a sequence of N words: a 2-gram (or bigram) is a two-word
sequence of words like “lütfen ödevinizi”, “ödevinizi çabuk”, or ”çabuk
veriniz”, and a 3-gram (or trigram) is a three-word sequence of words like
“lütfen ödevinizi çabuk”, or “ödevinizi çabuk veriniz”.
Smoothing
Laplace Smoothing
The simplest way to do smoothing is to add one to all the bigram counts,
before we normalize them into probabilities. All the counts that used to be
zero will now have a count of 1, the counts of 1 will be 2, and so on. This
algorithm is called Laplace smoothing.
Add-k Smoothing
One alternative to add-one smoothing is to move a bit less of the probability
mass from the seen to the unseen events. Instead of adding 1 to each count,
we add a fractional count k. This algorithm is therefore called add-k
smoothing.
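The add-k count adjustment described above can be sketched as follows (with k = 1 this reduces to Laplace smoothing; `V` is the vocabulary size, and the example words follow the Turkish phrases used earlier):

```python
from collections import Counter

def smoothed_bigram_probability(tokens, w1, w2, k=1.0):
    """Add-k smoothed estimate: (count(w1 w2) + k) / (count(w1) + k * V)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    V = len(unigrams)  # vocabulary size
    return (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * V)

tokens = ["lütfen", "ödevinizi", "çabuk", "veriniz"]
# a seen bigram and an unseen bigram both get non-zero probability
print(smoothed_bigram_probability(tokens, "lütfen", "ödevinizi"))  # (1+1)/(1+4) = 0.4
print(smoothed_bigram_probability(tokens, "çabuk", "ödevinizi"))   # (0+1)/(1+4) = 0.2
```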
Requirements
Python
To check if you have a compatible version of Python installed, use the
following command:
python -V
You can find the latest version of Python here.
Git
Pip Install
Download Code
In order to work on the code, create a fork from the GitHub page. Use Git to clone the code to your local machine (for example, `git clone https://github.com/StarlangSoftware/NGram-Py.git` on Ubuntu).
Start IDE
Select File | Open from the main menu, choose the NGram-Py project folder, and select the "open as project" option. Within a couple of seconds, the dependencies will be downloaded.
Detailed Description
Training NGram
Using NGram
Saving NGram
Loading NGram
Training NGram
NGram(N: int)
For example,
a = NGram(2)
this creates an empty NGram model.
nGram = NGram(2)
nGram.addNGramSentence(["jack", "read", "books", "john", "mary", "went"])
nGram.addNGramSentence(["jack", "read", "books", "mary", "went"])
with the lines above, an empty NGram model is created and two sentences
are added to the bigram model.
NoSmoothing class is the simplest smoothing technique. It doesn't require training; probabilities are calculated directly from the raw counts. For example, to calculate the probabilities of a given NGram model using NoSmoothing:
a.calculateNGramProbabilities(NoSmoothing())
LaplaceSmoothing class is a simple smoothing technique that doesn't require training. Probabilities are calculated by adding 1 to each counter. For example, to calculate the probabilities of a given NGram model using LaplaceSmoothing:
a.calculateNGramProbabilities(LaplaceSmoothing())
GoodTuringSmoothing class is a complex smoothing technique that doesn't
require training. To calculate the probabilities of a given NGram model using
GoodTuringSmoothing:
a.calculateNGramProbabilities(GoodTuringSmoothing())
AdditiveSmoothing class is a smoothing technique that requires training.
a.calculateNGramProbabilities(AdditiveSmoothing())
Using NGram
To find the probability of a bigram:
a.getProbability("jack", "reads")
To find the probability of a trigram:
a.getProbability("jack", "reads", "books")
Saving NGram
a.saveAsText("model.txt")
Loading NGram
NGram(fileName: str)
For example,
a = NGram("model.txt")
this loads an NGram model in the file "model.txt".
from setuptools import setup
from pathlib import Path

this_directory = Path(__file__).parent
long_description = (this_directory / "README.md").read_text(encoding="utf-8")

setup(
    name='NlpToolkit-NGram',
    version='1.0.19',
    packages=['NGram', 'test'],
    url='https://github.com/StarlangSoftware/NGram-Py',
    license='',
    author='olcaytaner',
    author_email='[email protected]',
    description='NGram library',
    install_requires=['NlpToolkit-DataStructure', 'NlpToolkit-Sampling'],
    long_description=long_description,
    long_description_content_type='text/markdown'
)
from NGram.NGram import NGram
from NGram.SimpleSmoothing import SimpleSmoothing

class NoSmoothing(SimpleSmoothing):

    def setProbabilities(self, nGram: NGram, level: int):
        nGram.setProbabilityWithPseudoCount(0.0, level)