
Natural Language Processing with Deep Learning
CS224N/Ling284

Christopher Manning
Lecture 1: Introduction and Word Vectors
Lecture Plan
Lecture 1: Introduction and Word Vectors
1. The course (10 mins)
2. Human language and word meaning (15 mins)
3. Word2vec introduction (15 mins)
4. Word2vec objective function gradients (25 mins)
5. Optimization basics (5 mins)
6. Looking at word vectors (10 mins or less)

Course logistics in brief
• Instructor: Christopher Manning
• Head TA: Matt Lamm
• Coordinator: Amelie Byun
• TAs: Many wonderful people! See website
• Time: TuTh 4:30–5:50, Nvidia Aud (→ video)

• Other information: see the class webpage:


• http://cs224n.stanford.edu/
a.k.a., http://www.stanford.edu/class/cs224n/
• Syllabus, office hours, “handouts”, TAs, Piazza
• Office hours started this morning!
• Python/numpy tutorial: office hour Fri 2:30 in 160-124

• Slides uploaded before each lecture
What do we hope to teach?
1. An understanding of the effective modern methods for deep
learning
• Basics first, then key methods used in NLP: Recurrent
networks, attention, transformers, etc.
2. A big picture understanding of human languages and the
difficulties in understanding and producing them
3. An understanding of and ability to build systems (in PyTorch)
for some of the major problems in NLP:
• Word meaning, dependency parsing, machine translation,
question answering

Course work and grading policy
• 5 × 1-week assignments: 6% + 4 × 12% = 54%
• HW1 is released today! Due next Tuesday! At 4:30 p.m.
• Please use @stanford.edu email for your Gradescope account
• Final Default or Custom Course Project (1–3 people): 43%
• Project proposal: 5%, milestone: 5%, poster: 3%, report: 30%
• Final poster session attendance expected! (See website.)
• Wed Mar 20, 5pm-10pm (put it in your calendar!)
• Participation: 3%
• (Guest) lecture attendance, Piazza, evals, karma – see website!
• Late day policy
• 6 free late days; afterwards, 1% off course grade per day late
• Assignments not accepted after 3 late days per assignment
• Collaboration policy: Read the website and the Honor Code!
Understand allowed ‘collaboration’ and how to document it
High-Level Plan for Problem Sets
• HW1 is hopefully an easy on-ramp – an IPython Notebook
• HW2 is pure Python (numpy) but expects you to do
(multivariate) calculus so you really understand the basics
• HW3 introduces PyTorch
• HW4 and HW5 use PyTorch on a GPU (Microsoft Azure)
• Libraries like PyTorch and TensorFlow are becoming the
standard tools of DL
• For FP, you either
• Do the default project, which is SQuAD question answering
• Open-ended but an easier start; a good choice for many
• Propose a custom final project, which we approve
• You will receive feedback from a mentor (TA/prof/postdoc/PhD)
• Can work in teams of 1–3; can use any language
Lecture Plan
1. The course (10 mins)
2. Human language and word meaning (15 mins)
3. Word2vec introduction (15 mins)
4. Word2vec objective function gradients (25 mins)
5. Optimization basics (5 mins)
6. Looking at word vectors (10 mins or less)

https://xkcd.com/1576/ Randall Munroe CC BY-NC 2.5
How do we represent the meaning of a word?

Definition: meaning (Webster dictionary)


• the idea that is represented by a word, phrase, etc.
• the idea that a person wants to express by using
words, signs, etc.
• the idea that is expressed in a work of writing, art, etc.

Commonest linguistic way of thinking of meaning:

signifier (symbol) ⟺ signified (idea or thing)
= denotational semantics
How do we have usable meaning in a computer?
Common solution: Use e.g. WordNet, a thesaurus containing lists
of synonym sets and hypernyms (“is a” relationships).
e.g. synonym sets containing “good”:

from nltk.corpus import wordnet as wn
poses = { 'n':'noun', 'v':'verb', 's':'adj (s)', 'a':'adj', 'r':'adv'}
for synset in wn.synsets("good"):
    print("{}: {}".format(poses[synset.pos()],
        ", ".join([l.name() for l in synset.lemmas()])))

noun: good
noun: good, goodness
noun: good, goodness
noun: commodity, trade_good, good
adj: good
adj (sat): full, good
adj: good
adj (sat): estimable, good, honorable, respectable
adj (sat): beneficial, good
adj (sat): good
adj (sat): good, just, upright
adverb: well, good
adverb: thoroughly, soundly, good

e.g. hypernyms of “panda”:

from nltk.corpus import wordnet as wn
panda = wn.synset("panda.n.01")
hyper = lambda s: s.hypernyms()
list(panda.closure(hyper))

[Synset('procyonid.n.01'),
 Synset('carnivore.n.01'),
 Synset('placental.n.01'),
 Synset('mammal.n.01'),
 Synset('vertebrate.n.01'),
 Synset('chordate.n.01'),
 Synset('animal.n.01'),
 Synset('organism.n.01'),
 Synset('living_thing.n.01'),
 Synset('whole.n.02'),
 Synset('object.n.01'),
 Synset('physical_entity.n.01'),
 Synset('entity.n.01')]
Problems with resources like WordNet

• Great as a resource but missing nuance


• e.g. “proficient” is listed as a synonym for “good”.
This is only correct in some contexts.

• Missing new meanings of words


• e.g., wicked, badass, nifty, wizard, genius, ninja, bombest
• Impossible to keep up-to-date!

• Subjective
• Requires human labor to create and adapt
• Can’t compute accurate word similarity →
Representing words as discrete symbols

In traditional NLP, we regard words as discrete symbols:


hotel, conference, motel – a localist representation

Words can be represented by one-hot vectors (one 1, the rest 0s):

motel = [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0]
hotel = [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]

Vector dimension = number of words in vocabulary (e.g., 500,000)


Problem with words as discrete symbols


Example: in web search, if user searches for “Seattle motel”, we
would like to match documents containing “Seattle hotel”.

But:
motel = [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0]
hotel = [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
These two vectors are orthogonal.
There is no natural notion of similarity for one-hot vectors! (See the short numeric check after this slide.)

Solution:
• Could try to rely on WordNet’s list of synonyms to get similarity?
• But it is well-known to fail badly: incompleteness, etc.
• Instead: learn to encode similarity in the vectors themselves
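To make the orthogonality problem concrete, here is a minimal numpy check (a sketch with toy assumptions: a 15-word vocabulary and the index positions from the example vectors above):

import numpy as np

V = 15                                  # toy vocabulary size (assumption)
motel = np.zeros(V); motel[10] = 1.0    # one-hot vector for "motel" (index from the example above)
hotel = np.zeros(V); hotel[7] = 1.0     # one-hot vector for "hotel"

# The dot product of two different one-hot vectors is always 0:
# they are orthogonal, so this encoding carries no similarity signal.
print(motel @ hotel)                    # 0.0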
Representing words by their context

• Distributional semantics: A word’s meaning is given


by the words that frequently appear close-by
• “You shall know a word by the company it keeps” (J. R. Firth 1957: 11)
• One of the most successful ideas of modern statistical NLP!

• When a word w appears in a text, its context is the set of words


that appear nearby (within a fixed-size window).
• Use the many contexts of w to build up a representation of w

…government debt problems turning into banking crises as happened in 2009…


…saying that Europe needs unified banking regulation to replace the hodgepodge…
…India has just given its banking system a shot in the arm…

These context words will represent banking. (A minimal count-based sketch of this idea follows below.)
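As a rough illustration of building up a representation from contexts (a count-based sketch, not Word2vec itself; the tiny corpus below just reuses the example sentences):

from collections import Counter

corpus = ("government debt problems turning into banking crises as happened in 2009 "
          "saying that Europe needs unified banking regulation").split()
window = 2
context_counts = Counter()

# Count every word that appears within `window` positions of "banking"
for t, word in enumerate(corpus):
    if word == "banking":
        for j in range(-window, window + 1):
            if j != 0 and 0 <= t + j < len(corpus):
                context_counts[corpus[t + j]] += 1

print(context_counts.most_common())   # e.g. [('into', 1), ('crises', 1), ('unified', 1), ...]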


Word vectors

We will build a dense vector for each word, chosen so that it is


similar to vectors of words that appear in similar contexts

banking = [0.286, 0.792, −0.177, −0.107, 0.109, −0.542, 0.349, 0.271]

Note: word vectors are sometimes called word embeddings or word representations. They are a distributed representation. (A short cosine-similarity sketch follows below.)
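Because similarity between dense vectors is geometric, cosine similarity is a standard way to compare them. A minimal sketch (the second vector is invented purely for illustration):

import numpy as np

banking = np.array([0.286, 0.792, -0.177, -0.107, 0.109, -0.542, 0.349, 0.271])
monetary = np.array([0.300, 0.750, -0.200, -0.100, 0.120, -0.500, 0.330, 0.250])  # invented toy vector

def cosine(u, v):
    # Cosine similarity: 1.0 = same direction, 0.0 = orthogonal
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(banking, monetary))   # close to 1.0 for vectors pointing in similar directions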
Word meaning as a neural word vector – visualization

expect = [0.286, 0.792, −0.177, −0.107, 0.109, −0.542, 0.349, 0.271, 0.487]
3. Word2vec: Overview
Word2vec (Mikolov et al. 2013) is a framework for learning
word vectors

Idea:
• We have a large corpus of text
• Every word in a fixed vocabulary is represented by a vector
• Go through each position t in the text, which has a center word
c and context (“outside”) words o
• Use the similarity of the word vectors for c and o to calculate
the probability of o given c (or vice versa)
• Keep adjusting the word vectors to maximize this probability (a small center/context pair-extraction sketch follows below)
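A minimal sketch of the “go through each position t” step, turning a toy corpus into (center, outside) word pairs (window size 2 is an illustrative choice):

corpus = "problems turning into banking crises as".split()
m = 2   # window size

pairs = []
for t, center in enumerate(corpus):                 # each position t has a center word c ...
    for j in range(-m, m + 1):
        if j != 0 and 0 <= t + j < len(corpus):
            pairs.append((center, corpus[t + j]))   # ... and outside words o within the window

print(pairs)   # e.g. ('into', 'problems'), ('into', 'turning'), ('into', 'banking'), ('into', 'crises'), ...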
Word2Vec Overview
• Example windows and process for computing P(w_{t+j} | w_t):

P(w_{t-2} | w_t)   P(w_{t-1} | w_t)   [center word w_t]   P(w_{t+1} | w_t)   P(w_{t+2} | w_t)

… problems turning into banking crises as …

outside context words in window of size 2 | center word at position t | outside context words in window of size 2
Word2vec: objective function
For each position t = 1, …, T, predict context words within a window of fixed size m, given center word w_t:

Likelihood = L(\theta) = \prod_{t=1}^{T} \prod_{-m \le j \le m,\ j \ne 0} P(w_{t+j} \mid w_t ; \theta)

where \theta is all the variables to be optimized.

The objective function J(\theta) (sometimes called the cost or loss function) is the (average) negative log likelihood:

J(\theta) = -\frac{1}{T} \log L(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \sum_{-m \le j \le m,\ j \ne 0} \log P(w_{t+j} \mid w_t ; \theta)

Minimizing the objective function ⟺ maximizing predictive accuracy. (A small numeric sketch of this objective follows below.)
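A small numeric sketch of this objective (a toy illustration, not the course's code: prob is a placeholder for any model of P(o | c)):

import math

def neg_log_likelihood(corpus, m, prob):
    # Average negative log likelihood J(theta) over all positions and window offsets
    T = len(corpus)
    total = 0.0
    for t, center in enumerate(corpus):
        for j in range(-m, m + 1):
            if j != 0 and 0 <= t + j < T:
                total += math.log(prob(corpus[t + j], center))
    return -total / T

# Toy usage with a uniform "model" over a 6-word vocabulary (purely illustrative):
corpus = "problems turning into banking crises as".split()
print(neg_log_likelihood(corpus, m=2, prob=lambda o, c: 1.0 / 6))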
Word2vec: objective function

• We want to minimize the objective function:

J(\theta) = -\frac{1}{T} \sum_{t=1}^{T} \sum_{-m \le j \le m,\ j \ne 0} \log P(w_{t+j} \mid w_t ; \theta)

• Question: How do we calculate P(w_{t+j} \mid w_t ; \theta)?

• Answer: We will use two vectors per word w:
  • v_w when w is a center word
  • u_w when w is a context word

• Then for a center word c and a context word o:

P(o \mid c) = \frac{\exp(u_o^T v_c)}{\sum_{w \in V} \exp(u_w^T v_c)}
Word2Vec Overview with Vectors
• Example windows and process for computing P(w_{t+j} | w_t)
• P(u_problems | v_into) is short for P(problems | into ; u_problems, v_into, \theta)

P(u_problems | v_into)   P(u_turning | v_into)   P(u_banking | v_into)   P(u_crises | v_into)

… problems turning into banking crises as …

outside context words in window of size 2 | center word “into” at position t | outside context words in window of size 2
Word2vec: prediction function

P(o \mid c) = \frac{\exp(u_o^T v_c)}{\sum_{w \in V} \exp(u_w^T v_c)}

① The dot product compares the similarity of o and c: u^T v = u \cdot v = \sum_{i=1}^{n} u_i v_i. A larger dot product means a larger probability.
② Exponentiation makes anything positive.
③ Normalizing over the entire vocabulary gives a probability distribution.

• This is an example of the softmax function \mathbb{R}^n \to (0,1)^n (an open region):

softmax(x_i) = \frac{\exp(x_i)}{\sum_{j=1}^{n} \exp(x_j)} = p_i

• The softmax function maps arbitrary values x_i to a probability distribution p_i
• “max” because it amplifies the probability of the largest x_i
• “soft” because it still assigns some probability to smaller x_i
• Frequently used in deep learning (a short numpy sketch of this computation follows below)
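A short numpy sketch of this prediction step (toy dimensions and random vectors are assumptions, not trained parameters):

import numpy as np

rng = np.random.default_rng(0)
d, V = 8, 5                             # toy embedding size and vocabulary size
U = rng.normal(size=(V, d))             # context ("outside") vectors u_w, one row per vocabulary word
v_c = rng.normal(size=d)                # center word vector v_c

scores = U @ v_c                        # dot products u_w^T v_c for every word w
probs = np.exp(scores - scores.max())   # exponentiate (shift by max for numerical stability)
probs /= probs.sum()                    # normalize over the vocabulary: softmax

o = 3                                   # index of an example outside word
print(probs[o], probs.sum())            # P(o | c), and a check that the distribution sums to 1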
Training a model by optimizing parameters
To train a model, we adjust parameters to minimize a loss
For example, for a simple convex function over two parameters, contour lines show the levels of the objective function.

To train the model: Compute all vector gradients!
• Recall: \theta represents all of the model parameters, in one long vector
• In our case, with d-dimensional vectors and V-many words, \theta \in \mathbb{R}^{2dV}
• Remember: every word has two vectors
• We optimize these parameters by walking down the gradient (a small parameter-packing sketch follows below)
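A small sketch of packing the parameters into one long vector, under toy assumptions about d and V (the ordering is just one reasonable convention, not necessarily the lecture's):

import numpy as np

d, V = 8, 5                           # toy dimensions (assumptions)
rng = np.random.default_rng(0)
V_center = rng.normal(size=(V, d))    # center vectors v_w, one row per word
U_context = rng.normal(size=(V, d))   # context vectors u_w, one row per word

theta = np.concatenate([V_center.ravel(), U_context.ravel()])   # one long parameter vector
print(theta.shape)                    # (2 * d * V,) = (80,)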
4. Word2vec derivations of gradient

• Whiteboard – see video if you’re not in class ;)


• The basic Lego piece
• Useful basics:
• If in doubt: write out with indices

• Chain rule! If y = f(u) and u = g(x), i.e. y = f(g(x)), then:

\frac{dy}{dx} = \frac{dy}{du} \, \frac{du}{dx}
Chain Rule

• Chain rule! If y = f(u) and u = g(x), i.e. y = f(g(x)), then:

\frac{dy}{dx} = \frac{dy}{du} \, \frac{du}{dx}

• Simple example: if y = 5u^4 and u = x^3 + 7, then

\frac{dy}{dx} = \frac{dy}{du} \, \frac{du}{dx} = 20(x^3 + 7)^3 \cdot 3x^2

(A quick symbolic check follows below.)
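A quick symbolic check of this example (sympy is used here only as an illustration; it is not part of the course setup):

import sympy as sp

x = sp.symbols('x')
u = x**3 + 7
y = 5 * u**4

# sympy returns 60*x**2*(x**3 + 7)**3, which equals 20*(x**3 + 7)**3 * 3*x**2
print(sp.diff(y, x))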
Interactive Whiteboard Session!

Let’s derive gradient for center word together


For one example window and one example outside word:

\log p(o \mid c) = \log \frac{\exp(u_o^T v_c)}{\sum_{w=1}^{V} \exp(u_w^T v_c)}

You then also need the gradient for the context words (it's similar; left for homework). That's all of the parameters \theta here. (The standard result of this derivation is sketched below.)
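For reference, applying the chain rule to the expression above gives the standard skip-gram gradient for the center vector (a sketch of the result, not the whiteboard derivation verbatim):

\frac{\partial}{\partial v_c} \log p(o \mid c)
  = \frac{\partial}{\partial v_c}\Big( u_o^T v_c - \log \sum_{w=1}^{V} \exp(u_w^T v_c) \Big)
  = u_o - \sum_{w=1}^{V} \frac{\exp(u_w^T v_c)}{\sum_{k=1}^{V} \exp(u_k^T v_c)}\, u_w
  = u_o - \sum_{w=1}^{V} P(w \mid c)\, u_w

That is, the gradient is the observed outside vector minus the model's expected outside vector, so each update pulls v_c toward contexts that actually occur.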
Calculating all gradients!
• We went through the gradient for each center vector v in a window
• We also need gradients for the outside vectors u
• Derive at home!
• Generally, in each window we will compute updates for all parameters that are being used in that window. For example:

P(u_turning | v_banking)   P(u_into | v_banking)   P(u_crises | v_banking)   P(u_as | v_banking)

… problems turning into banking crises as …

outside context words in window of size 2 | center word “banking” at position t | outside context words in window of size 2
Word2vec: More details
Why two vectors? → Easier optimization. Average both at the end.

Two model variants:
1. Skip-grams (SG): predict context (“outside”) words (position independent) given the center word
2. Continuous Bag of Words (CBOW): predict the center word from a (bag of) context words
This lecture so far: the Skip-gram model

Additional efficiency in training:
1. Negative sampling
So far: focus on the naïve softmax (a simpler training method)
5. Optimization: Gradient Descent
• We have a cost function 𝐽 𝜃 we want to minimize
• Gradient Descent is an algorithm to minimize 𝐽 𝜃
• Idea: for current value of 𝜃, calculate gradient of 𝐽 𝜃 , then take
small step in direction of negative gradient. Repeat.

Note: Our objectives may not be convex like this :(
Gradient Descent
• Update equation (in matrix notation), where \alpha is the step size or learning rate:

\theta^{new} = \theta^{old} - \alpha \nabla_\theta J(\theta)

• Update equation (for a single parameter):

\theta_j^{new} = \theta_j^{old} - \alpha \frac{\partial}{\partial \theta_j} J(\theta)

• Algorithm: repeatedly evaluate the gradient and take a small step in the direction of the negative gradient (a short sketch follows below)
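A minimal sketch of batch gradient descent (grad_J is a placeholder for a function returning the gradient of J(theta); hypothetical names, not the lecture's code):

import numpy as np

def gradient_descent(grad_J, theta, alpha=0.01, steps=500):
    # Batch gradient descent: step against the full gradient each iteration
    for _ in range(steps):
        theta = theta - alpha * grad_J(theta)   # theta_new = theta_old - alpha * grad J(theta)
    return theta

# Toy usage: minimize J(theta) = ||theta||^2, whose gradient is 2*theta
theta0 = np.array([1.0, -2.0, 0.5])
print(gradient_descent(lambda th: 2 * th, theta0))   # converges toward the zero vector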
Stochastic Gradient Descent
• Problem: J(\theta) is a function of all windows in the corpus (potentially billions!)
• So \nabla_\theta J(\theta) is very expensive to compute
• You would wait a very long time before making a single update!

• Very bad idea for pretty much all neural nets!

• Solution: Stochastic gradient descent (SGD)
• Repeatedly sample windows, and update after each one
• Algorithm: sample a window, compute the gradient for just that window, update, repeat (a short sketch follows below)
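And a minimal sketch of the SGD variant, sampling one window at a time (again with a placeholder per-window gradient and hypothetical names):

import random
import numpy as np

def sgd(grad_window, theta, windows, alpha=0.01, steps=2000):
    # Stochastic gradient descent: update after each sampled window instead of the full corpus
    for _ in range(steps):
        w = random.choice(windows)                     # sample a single window
        theta = theta - alpha * grad_window(theta, w)  # cheap per-window update
    return theta

# Toy usage: each "window" contributes gradient 2*(theta - target), so theta ends near their mean
windows = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(sgd(lambda th, w: 2 * (th - w), np.zeros(2), windows))   # roughly [0.5, 0.5]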
Lecture Plan
1. The course (10 mins)
2. Human language and word meaning (15 mins)
3. Word2vec introduction (15 mins)
4. Word2vec objective function gradients (25 mins)
5. Optimization basics (5 mins)
6. Looking at word vectors (10 mins or less)

