
Faculty Orientation Program

on
Natural Language Processing
[Elective V : 410252 A]
BE Computer Engineering 2019 Course

Organized By
S.T.E.S.’s Sinhgad Institute of Technology
in association with
BOS Computer Engineering, SPPU, Pune
(20th January 2023)

Prof. Deptii Chaudhari


Assistant Professor, Department of Computer Engineering
Hope Foundation’s International Institute of Information Technology,
Hinjawadi, Pune
[email protected], www.isquareit.edu.in

Course Objectives

410252(A) Natural Language Processing

01 Introduction: To be familiar with fundamental concepts and techniques of natural language processing (NLP).
02 Language Syntax and Semantics (Core Knowledge): To acquire knowledge of the various morphological, syntactic, and semantic NLP tasks.
03 Language Modelling (Illustrations): To develop the various language modelling techniques for NLP.
04 Integrate: To use appropriate tools and techniques for processing natural languages.
05 Recent Advances (Tools and Techniques): To comprehend advanced real-world applications in the NLP domain.
06 Applications: To describe applications of NLP and machine translation.
Course Outcomes

410252(A) Natural Language Processing

CO1 (Introduction): Describe the fundamental concepts of NLP, and the challenges and issues in NLP.
CO2 (Language Syntax and Semantics): Analyze natural languages morphologically, syntactically, and semantically.
CO3 (Language Modelling): Illustrate various language modelling techniques.
CO4 (Integrate): Integrate NLP techniques for the information retrieval task.
CO5 (Recent Advances: Tools and Techniques): Demonstrate the use of NLP tools and techniques for text-based processing of natural languages.
CO6 (Applications): Develop real-world NLP applications.
CO-PO Mapping

The CO-PO Mapping Matrix

CO/PO  PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12
CO1     2    2    1    -    -    -    -    -    -    -     -     -
CO2     3    3    2    2    2    -    -    -    -    -     -     1
CO3     2    3    3    2    2    -    -    -    -    -     -     2
CO4     2    2    3    3    3    -    2    2    -    -     -     3
CO5     2    2    3    3    3    -    -    -    -    -     -     3
CO6     3    3    3    3    3    2    1    1    -    -     -     3
Sample Questions

Question                                                                    CO    BTL
Differentiate between natural languages and programming languages.         CO1   BTL 2
Explain the various types of ambiguities in natural languages.             CO1   BTL 2
Discuss the linguistic levels and stages in NLP.                           CO1   BTL 2
Illustrate the working of FST for morphological analysis with an example.  CO2   BTL 4
Select the best parsing tree (provide rules along with probabilities).     CO2   BTL 4
Break down the given words into morphemes.                                 CO2   BTL 4
Teaching Methodologies

Live Demos: Prepare small working examples for each concept.
Case Studies: Related to Indian or regional languages.
Assignments: Based on problem solving, e.g., creating parse trees or identifying morphemes.
Self Learning: Online courses (NPTEL, Udemy).
Mini Projects: Covering all units.
Research Papers: Identify research papers and ask students to read and present them.
Beyond Syllabus: Generative models, Transformers for NLP.
Learning Resources

Text Books:
1. Jurafsky, David, and James H. Martin, "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", Pearson Publication
2. Manning, Christopher D., and Hinrich Schütze, "Foundations of Statistical Natural Language Processing", Cambridge, MA: MIT Press

Reference Books:
1. Steven Bird, Ewan Klein, and Edward Loper, "Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit", O'Reilly Publication
2. Dipanjan Sarkar, "Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data", Apress Publication, ISBN: 9781484223871
3. Alexander Clark, Chris Fox, and Shalom Lappin, "The Handbook of Computational Linguistics and Natural Language Processing", Wiley-Blackwell Publications
4. Jacob Eisenstein, "Natural Language Processing", MIT Press
5. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze, "Introduction to Information Retrieval", Cambridge University Press
Unit I: Introduction to Natural Language Processing

Introduction:
✓ What is Natural Language Processing? Why is NLP hard?
✓ Programming languages vs. natural languages
✓ Are natural languages regular?
✓ Finite automata for NLP
✓ Stages of NLP
✓ Challenges and issues (open problems) in NLP

Basics of text processing:
✓ Tokenization
✓ Stemming
✓ Lemmatization
✓ Part of Speech Tagging

Case Study: Why English is not a regular language
Mapping to CO: CO1
What is Natural Language Processing?

Natural language processing is an area of research in computer science and artificial intelligence (AI) concerned with processing natural languages such as English, Spanish, Hindi, or Marathi.

Natural language processing is a process of automating language analysis, generation, and acquisition.

▪ Analysis ('understanding' or 'processing'): Input is language; output is some representation that supports useful action.
▪ Generation: Input is some representation; output is language.
▪ Acquisition: Obtaining the representation and the necessary algorithms from knowledge and data.
What is Natural Language Processing?

• Computers use natural languages as input and/or output.

[Diagram: Natural Language → Computer → Natural Language. The input side is Natural Language Understanding (NLU); the output side is Natural Language Generation (NLG).]
What is Natural Language Processing?

• Computers use natural languages as input and/or output.

Natural Language Understanding (NLU):
• Takes sentences as input and interprets them.
• Maps the input into a useful interpretation.
• Analysis: morphological, syntactic, semantic, discourse.

Natural Language Generation (NLG):
• Takes some representation of what you want to say and works out a way to express it in natural language.
• Stages: deep planning, syntactic parsing.
Core NLP Pipeline

Text Pre-processing:
▪ Noise Removal
▪ Lexicon Normalization
▪ Object Standardization

Feature Engineering:
▪ Syntactic Parsing
▪ Part of Speech Tagging
▪ Entity Extraction
▪ Statistical Features
▪ Word Embeddings (Text Vectors)

NLP Tasks:
▪ Text Classification
▪ Text Matching/Similarity
▪ Coreference Resolution
▪ Temporal Sequencing
▪ NLU
▪ NLG

[Diagram: Input Data → Core NLP Processing (deterministic + probabilistic) → Decision Making → Output Consumption, supported by Information Management and Data Storage.]
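As a minimal illustration of the first stages of such a pipeline, the following sketch uses NLTK (assuming the package is installed and the 'punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker', and 'words' resources are downloaded) to tokenize a sentence, tag parts of speech, and extract entities; the example sentence is illustrative:

import nltk

text = "The gunman sprayed the building with bullets in Pune."
tokens = nltk.word_tokenize(text)    # text pre-processing: tokenization
tags = nltk.pos_tag(tokens)          # feature engineering: POS tagging
entities = nltk.ne_chunk(tags)       # feature engineering: entity extraction
print(tokens)
print(tags)
print(entities)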
Alternative Views of NLP

▪ Computational models of human language processing
  ▪ Programs that operate internally the way humans do
▪ Computational models of human communication
  ▪ Programs that interact like humans
▪ Computational systems that efficiently process text and speech

Goals of NLP

▪ Science Goal: Understand the way language operates.
▪ Engineering Goal: Build systems that analyze and generate language; reduce the man-machine gap.
Levels of Linguistic Representation

[Diagram: from bottom to top. Speech side: Phonetics, then Phonology; Text side: Orthography. These feed the shared levels: Morphology, Lexemes, Syntax, Semantics, Discourse, and Pragmatics.]
Stages of NLP

▪ Lexical Analysis: the process of converting a sequence of characters into a sequence of tokens.
▪ Syntactic Analysis: the process of checking grammar, arranging words, and displaying the relationships between them.
▪ Semantic Analysis: the process of looking for meaning in a statement.
▪ Discourse Integration: deals with the effect of a previous sentence on the sentence in consideration.
▪ Pragmatic Analysis: uses a set of rules that describe cooperative dialogues to help find the intended result.
Why is NLP hard?

Consider the slogan "At last, a computer that understands you like your mother." It is ambiguous:

1. It understands (that) you like your mother.
2. It understands you as well as it understands your mother.

Ambiguity at Many Levels
➢ At the acoustic level
  ➢ Homophones: words that sound similar but mean different things.
    ➢ E.g., "I am going to buy an apple": "Apple" (the company) vs. "apple" (the fruit).
  ➢ Word boundary: 'Aajayenge' (will come) vs. 'Aaj ayenge' (will come today).
  ➢ Phrase boundary: the spoken sentence "I got a plate" can be broken up in two different ways, either as "I got up late" (meaning I woke up late) or as "I got a plate" (meaning I have a plate with me).
  ➢ Disfluency: concerned with how a speaker intersperses sentences with meaningless sounds just to be able to organize his/her thoughts, e.g., ummm, ahhh, ahem, etc.
➢ At the lexical level
  ➢ Part of speech: dog (as a noun) vs. dog (as a verb).
  ➢ Sense: dog (as an animal) vs. dog (as a very detestable person).
➢ At the syntactic level
  ➢ Different structures lead to different interpretations.

Ambiguity at Many Levels (contd.)
➢ Structural
  ➢ "The camera man shot the man with the gun when he was near Tendulkar."
  ➢ "Aid for kins of cops, killed in terrorist attacks"
➢ At the semantic level
  ➢ Word sense ambiguity:
    ➢ They put money in the bank. (bank = financial institution, or = buried in mud?)
    ➢ I saw a boy with a telescope.
➢ At the discourse (multi-clause) level
  ➢ Alice says they've built a computer that understands you like your mother.
  ➢ But she . . . doesn't know any details . . . doesn't understand me at all.
Programming Languages vs. Natural Languages

Programming Languages | Natural Languages
Simple and unambiguous. | Complex, open to interpretation, and constantly evolving.
Intended to be translated into a finite set of mathematical operations. | Not intended to be translated into a finite set of mathematical operations.
Tell a machine exactly what to do, using a compiler or interpreter. | There are no compilers or interpreters for natural languages.
None of the following applies to programming languages. | Communicate in both logical and emotional ways; involve body language, intonation, volume, and many other nonverbal cues; defined by the physical attributes of human bodies (eyes, tongue, hands), and for that reason unique to humans.
Follow a very strict set of rules, hence they can't evolve and develop the way human languages do (although we could say that programming languages evolve through various libraries). | Evolve over time, e.g., slang used on social media; new words are added to the Webster dictionary every year.
Programming Languages vs. Natural Languages (contd.)

Programming Languages | Natural Languages
No room for errors or improvisation. | Human languages are full of imperfections.
No variation or different aspects of the same programming language. | Multiple aspects of human languages: dialects, slang, jargon, argot (a secret language used by a certain group that is incomprehensible to outsiders), namesakes, accents, mispronounced words, typos, irregular punctuation; these don't disrupt the message we're trying to communicate.
Artificial creations: rules and definitions were designed beforehand, which allows them to be fully described and studied in their entirety; self-defining grammar which doesn't change depending on the context. | Natural creations: grammar changes as per the context.
Don't really have morphology, at least not the same way human languages do. | Morphology is very important.
Challenges and Issues (Open Problems) in NLP

01 Context of words, phrases, and homonyms
02 Large set of morphological variants
03 Irony, sarcasm, emotions, intentions
04 Ambiguity at many levels
05 Errors in text and speech: misspellings, unstructured data
06 Colloquialisms and slang, e.g., gonna, wanna, granny, granma
07 Domain-specific / low-resource languages
08 Lack of data, benchmarks, standards
Basics of Text Processing

▪ Tokenization: the process of breaking down a text into tokens, or a given paragraph into a list of sentences or words.
▪ Stemming: the process of reducing inflectional words to their root forms; maps a word to the same stem even if the stem is not a valid word in the language.
▪ Lemmatization: unlike stemming, reduces inflected words properly, ensuring that the root word belongs to the language.
▪ Part of Speech Tagging: a process that attaches each word in a sentence with a suitable tag from a given set of tags.
Tokenization

Why Tokenize?
▪ Unstructured data and natural language text are broken into chunks of information that can be understood by a machine.
▪ Converts an unstructured string (text document) into a numerical data structure suitable for machine learning, which allows the machine to understand each of the words by itself, as well as how the words function in the larger text.
▪ The first crucial step of the NLP process, as it converts sentences into understandable bits of data for the program to work with.
▪ Without proper/correct tokenization, the NLP process can quickly devolve into a chaotic task.

Challenges
▪ Dealing with segmenting words when spaces or punctuation marks define the boundaries of the word. For example: don't
▪ Dealing with symbols that might change the meaning of the word significantly. For example: ₹100 vs. 100
▪ Contractions such as 'you're' and 'I'm'
▪ Not applicable for symbol-based languages like Chinese, Japanese, Korean, Thai, Hindi, Urdu, Tamil, and others.
Types of Tokenization

1. Word Tokenization
▪ The most common way of tokenization; uses natural breaks, like pauses in speech or spaces in text, and splits the data into its respective words using delimiters (characters like ',' or ';').
▪ Word tokenization's accuracy is based on the vocabulary it is trained with. Unknown words, or Out Of Vocabulary (OOV) words, cannot be tokenized.
2. White Space Tokenization
▪ The simplest technique; uses white spaces as the basis of splitting.
▪ Works well for languages in which white space breaks the sentence apart into meaningful words.
3. Rule Based Tokenization
▪ Uses a set of rules that are created for the specific problem.
▪ Rules are usually based on the grammar of a particular language or problem.
4. Regular Expression Tokenizer
▪ A type of rule-based tokenizer.
▪ Uses regular expressions to control the tokenization of text into tokens.
5. Penn Treebank Tokenizer
▪ The Penn Treebank is a corpus maintained by the University of Pennsylvania, containing over four million eight hundred thousand annotated words, all corrected by humans.
▪ Uses regular expressions to tokenize text as in the Penn Treebank.
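A minimal sketch of these tokenizer types, assuming NLTK is installed and the 'punkt' resource is downloaded (the example sentence is illustrative):

from nltk.tokenize import (word_tokenize, WhitespaceTokenizer,
                           RegexpTokenizer, TreebankWordTokenizer)

text = "Don't forget: you're paying ₹100, not 100!"

print(word_tokenize(text))                     # word tokenization
print(WhitespaceTokenizer().tokenize(text))    # white space tokenization
print(RegexpTokenizer(r"\w+").tokenize(text))  # regular expression tokenizer
print(TreebankWordTokenizer().tokenize(text))  # Penn Treebank tokenizer

The outputs differ exactly on the challenge cases above: the contraction "Don't", the currency symbol, and the punctuation.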
Stemming vs. Lemmatization

Stemming: change, changing, changes, changed, changer → chang
Lemmatization: change, changing, changes, changed → change; changer → changer

Stemming:
▪ Porter Stemming: uses suffix stripping to produce stems.
▪ Lancaster Stemming: works with a table containing about 120 rules indexed by the last letter of a suffix.

Lemmatization:
▪ WordNet Lemmatization: uses the WordNet database to look up the lemmas of words.
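A minimal sketch of the stemmers and lemmatizer named above, assuming NLTK is installed and the WordNet corpus is downloaded via nltk.download('wordnet'):

from nltk.stem import PorterStemmer, LancasterStemmer, WordNetLemmatizer

porter = PorterStemmer()          # suffix stripping
lancaster = LancasterStemmer()    # rule table indexed by final suffix letter
lemmatizer = WordNetLemmatizer()  # WordNet lookup

for w in ["change", "changing", "changes", "changed", "changer"]:
    # The stem (e.g., 'chang') need not be a valid word; the lemma always is.
    print(w, porter.stem(w), lancaster.stem(w), lemmatizer.lemmatize(w, pos="v"))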
Parts of Speech Tagging

▪ Part of Speech tagging (or just tagging for short) is the process of assigning a part-of-speech or other syntactic class marker to each word in a corpus.
▪ Because tags are generally also applied to punctuation, tagging requires that the punctuation marks (period, comma, etc.) be separated off of the words.
▪ Thus tokenization is usually performed before, or as part of, the tagging process, separating commas, quotation marks, etc., from words, and disambiguating end-of-sentence punctuation (period, question mark, etc.) from part-of-word punctuation (such as in abbreviations like e.g. and etc.).
▪ The input to a tagging algorithm is a string of words and a specified tagset. The output is a single best tag for each word.
▪ Automatically assigning a tag to each word is not trivial because of ambiguity. For example:
  ▪ Book that flight OR Book that suspect: book → verb.
  ▪ Hand me that book: book → noun.
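The book example can be tried directly with NLTK's default tagger, a sketch assuming the 'punkt' and 'averaged_perceptron_tagger' resources are downloaded:

import nltk

# The same word should receive different tags depending on context.
print(nltk.pos_tag(nltk.word_tokenize("Book that flight.")))   # 'Book' as a verb
print(nltk.pos_tag(nltk.word_tokenize("Hand me that book.")))  # 'book' as a noun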
Types of POS Taggers

Part of Speech Tagging Algorithms: Rule-based Tagging and Stochastic Tagging

Rule-based Tagging:
▪ Involves a large database of hand-written disambiguation rules.
▪ Disambiguation is done by analyzing the linguistic features of the word, its preceding word, its following word, and other aspects.
▪ Example of a rule: if an ambiguous/unknown word X is preceded by a determiner and followed by a noun, tag it as an adjective.
▪ An example of a rule-based tagger is Brill's Tagger.

Stochastic Tagging:
▪ Any model which somehow incorporates frequency or probability may be properly labelled stochastic.
▪ The simplest stochastic taggers disambiguate words based solely on the probability that a word occurs with a particular tag.
▪ The problem with this approach is that while it may yield a valid tag for a given word, it can also yield inadmissible sequences of tags.
▪ An alternative approach is to calculate the probability of a given sequence of tags occurring, known as the n-gram approach, referring to the fact that the best tag for a given word is determined by the probability that it occurs with the n previous tags.
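A minimal sketch of the stochastic n-gram approach, assuming NLTK with its Penn Treebank sample downloaded (nltk.download('treebank')); the bigram tagger conditions each tag on the previous tag and backs off to unigram frequencies:

import nltk
from nltk.corpus import treebank

train = treebank.tagged_sents()[:3000]
unigram = nltk.UnigramTagger(train)                 # P(tag | word) alone
bigram = nltk.BigramTagger(train, backoff=unigram)  # also uses the previous tag
print(bigram.tag("book that flight".split()))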
Hidden Markov Model Tagging

• Section 5.5, "HMM Part-of-Speech Tagging", in Jurafsky, David, and James H. Martin, "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition"

• https://www.freecodecamp.org/news/an-introduction-to-part-of-speech-tagging-and-the-hidden-markov-model-953d45338f24/

• https://www.freecodecamp.org/news/a-deep-dive-into-part-of-speech-tagging-using-viterbi-algorithm-17c8de32e8bc

• https://medium.com/data-science-in-your-pocket/pos-tagging-using-hidden-markov-models-hmm-viterbi-algorithm-in-nlp-mathematics-explained-d43ca89347c4
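As a hands-on companion to these readings, a minimal sketch of supervised HMM tagger training with NLTK, assuming the 'treebank' corpus is downloaded:

from nltk.corpus import treebank
from nltk.tag import hmm

train = treebank.tagged_sents()[:3000]
tagger = hmm.HiddenMarkovModelTrainer().train_supervised(train)

# Viterbi decoding chooses the most probable tag *sequence*,
# not just the most frequent tag for each word in isolation.
print(tagger.tag("book that flight".split()))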
Unit II: Language Syntax and Semantics

Morphological Analysis:
✓ What is Morphology?
✓ Types of Morphemes
✓ Inflectional morphology & Derivational morphology
✓ Morphological parsing with Finite State Transducers (FST)

Syntactic Analysis:
✓ Syntactic Representations of Natural Language
✓ Parsing Algorithms
✓ Probabilistic context-free grammars, and Statistical parsing

Semantic Analysis:
✓ Lexical Semantics
✓ Relations among lexemes & their senses: Homonymy, Polysemy, Synonymy, Hyponymy, WordNet, Word Sense Disambiguation (WSD)
✓ Dictionary based approach
✓ Latent Semantic Analysis

Case Studies: Study of the Stanford Parser and POS Tagger, https://nlp.stanford.edu/software/lex-parser.html, https://nlp.stanford.edu/software/tagger.html
Mapping to CO: CO2
Morphological Analysis

Morphology is the study of the structure and formation of words.

A morpheme is the important unit of morphology, defined as the "minimal unit of meaning" or "the minimal unit of grammatical analysis".

Example: unhappiness = un + happy + ness (prefix + stem + suffix; prefixes and suffixes are affixes)
▪ There are three morphemes, each carrying a certain amount of meaning: un means "not", while ness means "being in a state or condition".
▪ Happy is a free morpheme because it can appear on its own (as a "word" in its own right).
▪ Bound morphemes have to be attached to a free morpheme, and so cannot be words in their own right.
▪ Thus, you can't have sentences in English such as "Jason feels very un ness today".
Types of Morphology

▪ Inflectional morphology: Inflection is the process of changing the form of a word so that it expresses information such as number, person, case, gender, tense, mood, and aspect, while the syntactic category of the word remains unchanged. Examples: car / cars, table / tables, dog / dogs
▪ Derivational morphology: the creation of a new word from an existing word by changing its grammatical category. Examples: happiness, brotherhood, etc.
▪ Cliticization: the combination of a word stem with a clitic. A clitic is a morpheme that acts syntactically like a word, but is reduced in form and attached (phonologically and sometimes orthographically) to another word. For example, the English morpheme 've in the word I've is a clitic.
Morphological Parsing

In order to build a morphological parser, we'll need at least the following:

1. Lexicon: the list of stems and affixes, together with basic information about them (whether a stem is a Noun stem or a Verb stem, etc.).
2. Morphotactics: the model of morpheme ordering that explains which classes of morphemes can follow other classes of morphemes inside a word.
3. Orthographic rules: spelling rules used to model the changes that occur in a word, usually when two morphemes combine (e.g., the y → ie spelling rule that changes city + -s to cities rather than citys).
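A toy sketch of these three components in plain Python (not a real finite-state transducer; the lexicon and rules here are illustrative assumptions):

LEXICON = {"city": "noun", "dog": "noun", "fox": "noun"}  # 1. lexicon

def parse_noun(word):
    # 2. morphotactics: a noun stem may be followed by a plural suffix.
    if word in LEXICON:
        return f"{word} +N +Sg"
    # 3. orthographic rule: y -> ie before -s (city + -s -> cities)
    if word.endswith("ies") and LEXICON.get(word[:-3] + "y") == "noun":
        return f"{word[:-3]}y +N +Pl"
    if word.endswith("es") and LEXICON.get(word[:-2]) == "noun":
        return f"{word[:-2]} +N +Pl"   # e-insertion plural (fox + -es)
    if word.endswith("s") and LEXICON.get(word[:-1]) == "noun":
        return f"{word[:-1]} +N +Pl"   # regular plural
    return None

for w in ["city", "cities", "dogs", "foxes"]:
    print(w, "->", parse_noun(w))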
Syntactic Analysis

▪ The study of grammar has an ancient lineage; Panini's grammar of Sanskrit was written over two thousand years ago and is still referenced today in teaching Sanskrit.
▪ The word syntax comes from the Greek sýntaxis, meaning "setting out together or arrangement", and refers to the way words are arranged together.
▪ Syntax refers to the set of rules, principles, and processes that govern the structure of sentences in a natural language.
▪ Syntactic analysis, also referred to as syntax analysis or parsing, is the process of analyzing natural language with the rules of a formal grammar.
Notions of Syntax and Grammar

▪ Constituency
  ▪ Groups of words may behave as a single unit or phrase, called a constituent.
  ▪ Example: On September seventeenth, I'd like to fly from Pune to Delhi.
▪ Grammatical relations
  ▪ A formalization of ideas from traditional grammar such as SUBJECTS and OBJECTS, and other related notions.
  ▪ Example: She ate a mammoth breakfast. Here the noun phrase She is the SUBJECT and a mammoth breakfast is the OBJECT.
▪ Subcategorization and dependency relations
  ▪ Refer to certain kinds of relations between words and phrases.
  ▪ For example: the verb want can be followed by an infinitive, as in I want to fly to Delhi, or a noun phrase, as in I want a flight to Delhi. But the verb find cannot be followed by an infinitive (*I found to fly to Delhi).
  ▪ These are called facts about the subcategorization of the verb.
Context Free Grammars

▪ A context-free grammar consists of a set of rules or productions, each of which expresses the ways that symbols of the language can be grouped and ordered together, and a lexicon of words and symbols.
▪ The symbols that are used in a CFG are divided into two classes.
▪ The symbols that correspond to words in the language are called terminal symbols; the lexicon is the set of rules that introduce these terminal symbols.
▪ The symbols that express clusters or generalizations of these are called non-terminals.
▪ In each context-free rule, the item to the right of the arrow (→) is an ordered list of one or more terminals and non-terminals, while to the left of the arrow is a single non-terminal symbol expressing some cluster or generalization.
Context Free Grammars

▪ A context-free grammar G is defined by four parameters N, ∑, R, S (technically, "is a 4-tuple"):
  ▪ N: a set of non-terminal symbols (or variables)
  ▪ ∑: a set of terminal symbols (disjoint from N)
  ▪ R: a set of rules or productions, each of the form A → β, where A is a non-terminal and β is a string of symbols from the infinite set of strings (∑ ∪ N)∗
  ▪ S: a designated start symbol
Context Free Grammars

[Figure: the lexicon for L0, the grammar for L0, and example phrases for each rule.]
Parsing Algorithms

▪ Top-Down Parsing
  ▪ Goal-driven searching.
  ▪ Gives importance to textual precedence (rule precedence).
  ▪ Problems with top-down parsing:
    1. Only judges grammaticality.
    2. Stops when it finds a single derivation.
    3. No semantic knowledge employed.
    4. No way to rank the derivations.
    5. Problems with left-recursive rules.
    6. Problems with ungrammatical sentences.
▪ Bottom-Up Parsing
  ▪ Starts with the words of the input and tries to build trees from the words up, again by applying rules from the grammar one at a time.
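A minimal sketch contrasting NLTK's top-down (recursive descent) and bottom-up (shift-reduce) parsers; the toy grammar is an illustrative assumption:

import nltk

grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> DT NN
    VP -> VBD NP
    DT -> 'the'
    NN -> 'gunman' | 'building'
    VBD -> 'sprayed'
""")

sent = "the gunman sprayed the building".split()
for tree in nltk.RecursiveDescentParser(grammar).parse(sent):  # top-down
    print(tree)
for tree in nltk.ShiftReduceParser(grammar).parse(sent):       # bottom-up
    print(tree)

Note that the recursive descent parser would loop forever on a left-recursive rule such as NP -> NP PP, one of the problems listed above.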
Parsing Algorithms: Ambiguity
Dependency Parsing

▪ Dependency parsing is the process of analyzing the grammatical structure of a sentence and finding out the related words as well as the type of relationship between them.
▪ Each relationship:
  ▪ Has one head and a dependent that modifies the head.
  ▪ Is labeled according to the nature of the dependency between the head and the dependent. These labels can be found at Universal Dependency Relations.
▪ Dependency parsing is the task of extracting a dependency parse of a sentence that represents its grammatical structure and defines the relationships between "head" words and the words which modify those heads.
▪ Read more about dependency parsing in Chapter 15 of Speech and Language Processing, Daniel Jurafsky & James H. Martin.
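A minimal sketch of dependency parsing with spaCy, assuming the library and its small English model are installed (pip install spacy; python -m spacy download en_core_web_sm):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I want a flight to Delhi")
for token in doc:
    # Each token points to its head; dep_ is the dependency label.
    print(token.text, token.dep_, "<-", token.head.text)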
Probabilistic Context Free Grammars

▪ The simplest augmentation of the context-free grammar is the Probabilistic Context-Free Grammar (PCFG), also known as the Stochastic Context-Free Grammar (SCFG).
▪ A probabilistic context-free grammar augments each rule in R with a conditional probability.
▪ A PCFG is thus defined by the following components:
  ▪ N: a set of non-terminal symbols
  ▪ ∑: a set of terminal symbols
  ▪ R: a set of rules or productions, each of the form A → β [p], where A is a non-terminal, β is a string of symbols from the infinite set of strings (∑ ∪ N)∗, and p is a number between 0 and 1 expressing P(β | A)
  ▪ S: a designated start symbol
Probabilistic Context Free Grammars

An example PCFG (each rule carries the probability P(β | A); the probabilities of all rules with the same left-hand side sum to 1):

▪ S → NP VP [1.0]
▪ NP → DT NN [0.5]
▪ NP → NNS [0.3]
▪ NP → NP PP [0.2]
▪ PP → P NP [1.0]
▪ VP → VP PP [0.6]
▪ VP → VBD NP [0.4]
▪ DT → the [1.0]
▪ NN → gunman [0.5]
▪ NN → building [0.5]
▪ VBD → sprayed [1.0]
▪ NNS → bullets [1.0]
▪ P → with [1.0]
Probabilistic Context Free Grammars

The gunman sprayed the building with bullets.

Parse t1 (the PP "with bullets" attaches to the VP):
[S [NP [DT The] [NN gunman]] [VP [VP [VBD sprayed] [NP [DT the] [NN building]]] [PP [P with] [NP [NNS bullets]]]]]

P(t1) = 1.0 × 0.5 × 1.0 × 0.5 × 0.6 × 0.4 × 1.0 × 0.5 × 1.0 × 0.5 × 1.0 × 1.0 × 0.3 × 1.0 = 0.0045
Probabilistic Context Free Grammars

The gunman sprayed the building with bullets.

Parse t2 (the PP "with bullets" attaches to the NP "the building"):
[S [NP [DT The] [NN gunman]] [VP [VBD sprayed] [NP [NP [DT the] [NN building]] [PP [P with] [NP [NNS bullets]]]]]]

P(t2) = 1.0 × 0.5 × 1.0 × 0.5 × 0.4 × 1.0 × 0.2 × 0.5 × 1.0 × 0.5 × 1.0 × 1.0 × 0.3 × 1.0 = 0.0015

Since P(t1) > P(t2), the PCFG prefers parse t1, in which the bullets are the instrument of the spraying rather than the contents of the building.
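The same grammar and sentence can be checked with NLTK, a minimal sketch assuming the package is installed; ViterbiParser returns the single most probable parse:

from nltk import PCFG
from nltk.parse import ViterbiParser

grammar = PCFG.fromstring("""
    S -> NP VP [1.0]
    NP -> DT NN [0.5] | NNS [0.3] | NP PP [0.2]
    PP -> P NP [1.0]
    VP -> VP PP [0.6] | VBD NP [0.4]
    DT -> 'the' [1.0]
    NN -> 'gunman' [0.5] | 'building' [0.5]
    VBD -> 'sprayed' [1.0]
    NNS -> 'bullets' [1.0]
    P -> 'with' [1.0]
""")

tokens = "the gunman sprayed the building with bullets".split()
for tree in ViterbiParser(grammar).parse(tokens):
    print(tree)         # the VP-attachment parse t1
    print(tree.prob())  # 0.0045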
Lexical Semantics

▪ A lexeme is a pairing of a particular form (orthographic or phonological) with its meaning, and a lexicon is a finite list of lexemes.
▪ A lemma or citation form is the grammatical form that is used to represent a lexeme. Example: the lemma or citation form for sing, sang, sung is sing.
▪ The process of mapping from a wordform to a lemma is called lemmatization.
▪ Lexical semantics is the relationship of lexical meaning to sentence meaning and syntax.
▪ Lexical semantics is concerned with the intrinsic characteristics of word meaning, semantic relationships between words, and how word meaning is related to syntactic structure.
Relations between lexical items

▪ Hyponymy and hypernymy
  ▪ Hyponymy and hypernymy refer to a relationship between a general term and the more specific terms that fall under the category of the general term.
  ▪ For example: the colors red, green, blue, and yellow are hyponyms. They fall under the general term color, which is the hypernym.
▪ Synonymy refers to words that are pronounced and spelled differently but contain the same meaning.
  ▪ When the meanings of two senses of two different words (lemmas) are identical or nearly identical, we say the two senses are synonyms.
  ▪ Synonyms include such pairs as: couch/sofa, car/automobile.
▪ Antonyms, by contrast, are words with opposite meanings, such as the following: long/short, big/little, fast/slow, cold/hot, dark/light.
Relations between lexical items

▪ Homonymy refers to the relationship between words that are spelled or pronounced the same way but hold different meanings.
  ▪ For example: bank (of a river), bank (financial institution)
▪ Polysemy refers to a word having two or more related meanings.
  ▪ For example: bright (shining), bright (intelligent)
▪ Lexical semantics also explores whether the meaning of a lexical unit is established by looking at its neighborhood in the semantic net (the words it occurs with in natural sentences), or whether the meaning is already locally contained in the lexical unit.
▪ In English, WordNet is an example of a semantic network.
  ▪ It contains English words that are grouped into synsets. Some semantic relations between these synsets are meronymy, hyponymy, synonymy, and antonymy.
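A minimal sketch of exploring these relations in WordNet with NLTK, assuming the WordNet corpus is downloaded via nltk.download('wordnet'):

from nltk.corpus import wordnet as wn

for syn in wn.synsets("bank")[:4]:      # homonymy/polysemy: many senses
    print(syn.name(), "-", syn.definition())

car = wn.synsets("car")[0]
print(car.lemma_names())                # synonyms, e.g. car/auto/automobile
print(car.hypernyms())                  # hypernyms: more general synsets

print(wn.lemma("good.a.01.good").antonyms())  # antonymy: [Lemma('bad.a.01.bad')]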
NLP Online Courses

https://lenavoita.github.io/nlp_course/word_embeddings.html

https://www.fast.ai/posts/2019-07-08-fastai-nlp.html

https://www.udemy.com/course/natural-language-processing-with-bert/

Write to Me..

[email protected]