Lecture 01
[Figure: a quantity growing from 2012 to 2016; y-axis 0 to 600]
• Why AI?
• Course logistics
• Optimization
Example applications:
• Reading comprehension
• Image generation
• Predicting poverty
• Open-domain dialogue
A: How old are you?
B: I’m 16. Why are you asking?
A: I thought you were 12.
B: What made you think so?
A: I don’t know what you are talking about.
B: You don’t know what you are saying.
Adversarial examples: AlexNet predicts correctly on the original image, but a small, carefully chosen perturbation flips the prediction.
Bias: with 33% men in the training set, the model predicts men only 16% of the time at test time.
Modeling, inference, learning:

• Modeling: take the real world and write down a formal model of it (in the slides, a graph whose edges are labeled with numbers).
• Inference: ask questions of the model; running inference on the model produces predictions.
• Learning: start from a model without parameters (the numbers replaced by "?"), add data, and learn the parameters, yielding a model with parameters.
Machine learning turns data into a model, and it underlies all of the models in the course. The models span a spectrum from "low-level intelligence" (reflex-based models) to "high-level intelligence" (state-based models: search problems, Markov decision processes, adversarial games).
State-based models. Example: a chess position ("White to move"), which can be cast either as a search problem or as an adversarial game.
Applications:
• Games: Chess, Go, Pac-Man, Starcraft, etc.
• Robotics: motion planning
• Natural language generation: machine translation, image captioning
[demo]
Next layer: constraint satisfaction problems and Bayesian networks (alongside search problems, Markov decision processes, and adversarial games, all built on machine learning).

Constraint satisfaction problems, e.g. Sudoku. Goal: put digits in blank squares so each row, column, and 3x3 sub-block has digits 1–9. (The slide shows variables X1, X2, X3, X4.)

Bayesian networks, e.g. a chain with hidden variables H1, ..., H5 and observed variables E1, ..., E5.
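The Sudoku goal can be phrased as a constraint checker. Here is a minimal sketch (the function name and grid representation are my own, not from the lecture; 0 marks a blank square):

```python
def sudoku_consistent(grid):
    """Check that no row, column, or 3x3 sub-block of a 9x9 grid
    repeats a digit (0 = blank square, ignored)."""
    def no_repeats(cells):
        digits = [d for d in cells if d != 0]
        return len(digits) == len(set(digits))

    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    blocks = [[grid[3 * br + i][3 * bc + j]
               for i in range(3) for j in range(3)]
              for br in range(3) for bc in range(3)]
    return all(no_repeats(g) for g in rows + cols + blocks)
```

A completely blank grid is trivially consistent; placing the same digit twice in any row, column, or sub-block makes the check fail.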
Course logistics
• Exam (20%)
• Project (20%)
Optimization
min_{p ∈ Paths} Distance(p)   (a discrete optimization problem over paths)

min_{w ∈ R^d} TrainingError(w)   (a continuous optimization problem over weight vectors)
Problem: compute the edit distance between two strings (the minimum number of character insertions, deletions, and substitutions).
Examples:
"cat", "cat" ⇒ 0
"cat", "dog" ⇒ 3
"cat", "at" ⇒ 1
"cat", "cats" ⇒ 1
"a cat!", "the cats!" ⇒ 4
[live solution]
• Once you have the recurrence, you can code it up. The straightforward implementation takes exponential time, but memoizing the results brings it down to O(n^2) time. The end result is the dynamic programming solution: recurrence + memoization.
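A sketch of what the live solution might look like, recurrence plus memoization (the function and variable names here are my own, not from the lecture):

```python
from functools import lru_cache

def edit_distance(s, t):
    """Minimum number of character insertions, deletions, and
    substitutions turning s into t."""
    @lru_cache(maxsize=None)
    def dist(m, n):
        # dist(m, n) = edit distance between s[:m] and t[:n]
        if m == 0:
            return n  # insert the remaining n characters of t
        if n == 0:
            return m  # delete the remaining m characters of s
        if s[m - 1] == t[n - 1]:
            return dist(m - 1, n - 1)  # last characters match: free
        return 1 + min(
            dist(m - 1, n),      # delete s[m-1]
            dist(m, n - 1),      # insert t[n-1]
            dist(m - 1, n - 1),  # substitute s[m-1] -> t[n-1]
        )
    return dist(len(s), len(t))

print(edit_distance("cat", "dog"))           # 3
print(edit_distance("a cat!", "the cats!"))  # 4
```

The `lru_cache` decorator does the memoization: without it, the recursion takes exponential time; with it, each (m, n) pair is computed once.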
Problem: finding the least squares line y = w·x, i.e. minimize F(w) = Σ_i (w·x_i - y_i)^2 over the given points (x_i, y_i).
Examples:
{(2, 4)} ⇒ 2
{(2, 4), (4, 2)} ⇒ ?
[live solution]
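The live solution here is typically a few lines of gradient descent. A sketch, assuming the model y = w·x and objective F(w) = Σ (w·x - y)^2 as above (the function name and step-size choice are mine):

```python
def least_squares_slope(points, steps=1000, eta=0.01):
    """Minimize F(w) = sum((w*x - y)^2) by gradient descent.
    The gradient is F'(w) = sum(2 * (w*x - y) * x)."""
    w = 0.0
    for _ in range(steps):
        gradient = sum(2 * (w * x - y) * x for x, y in points)
        w -= eta * gradient  # step against the gradient
    return w

print(round(least_squares_slope([(2, 4)]), 3))          # 2.0
print(round(least_squares_slope([(2, 4), (4, 2)]), 3))  # 0.8
```

For the single point (2, 4) the best slope is exactly 2; for the pair, gradient descent converges to w = 16/20 = 0.8, which also answers the "?" in the second example.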