
CS 188: Artificial Intelligence

Reinforcement Learning II

Instructor: Pieter Abbeel, University of California, Berkeley


[These slides were created by Dan Klein, Pieter Abbeel, and Anca Dragan. http://ai.berkeley.edu.]
Reinforcement Learning
o We still assume an MDP:
o A set of states s ∈ S
o A set of actions (per state) A
o A model T(s, a, s’) = P(s’ | s, a)
o A reward function R(s,a,s’)
o Still looking for a policy π(s)

o New twist: don’t know T or R, so must try out actions

o Big idea: Compute all averages over T using sample outcomes


Reinforcement Learning -- Overview
o Passive Reinforcement Learning (= how to learn from experiences)
o Model-based Passive RL
o Learn the MDP model from experiences, then solve the MDP
o Model-free Passive RL
o Forego learning the MDP model, directly learn V or Q:
o Value learning – learns value of a fixed policy; 2 approaches: Direct Evaluation & TD Learning
o Q learning – learns Q values of the optimal policy (uses a Q version of TD Learning)

o Active Reinforcement Learning (= agent also needs to decide how to collect experiences)
o Key challenges:
o How to efficiently explore?
o How to trade off exploration vs. exploitation
o Applies to both model-based and model-free. In CS188 we’ll cover only in the context of Q-learning
Model-Free Learning
o Model-free (temporal difference) learning
o Experience world through episodes: (s, a, r, s', a', r', s'', ...)
o Update estimates each transition (s, a, r, s')
o Over time, updates will mimic Bellman updates
Q-Learning
o Q-value iteration

o Q-Learning: learn Q(s,a) values as you go


o Receive a sample (s,a,s’,r)
o Consider your old estimate:
o Consider your new sample estimate:

o Incorporate the new estimate into a running average (a minimal sketch follows the demo pointers below):

[Demo: Q-learning – gridworld (L10D2)]


[Demo: Q-learning – crawler (L10D3)]
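A minimal Python sketch of this tabular update; the Q table, the legal_actions helper, alpha, and gamma are assumed names, not anything defined in these slides:

    from collections import defaultdict

    Q = defaultdict(float)   # Q[(state, action)], defaults to 0
    alpha = 0.1              # learning rate (assumed value)
    gamma = 0.9              # discount (assumed value)

    def q_update(s, a, s_prime, r, legal_actions):
        """Fold one sample (s, a, s', r) into the running average for Q(s, a)."""
        # New sample estimate: reward plus discounted value of the best next action.
        # (Terminal-state handling is omitted to keep the sketch short.)
        sample = r + gamma * max(Q[(s_prime, ap)] for ap in legal_actions(s_prime))
        # Incorporate the new estimate into the old one as a running average.
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample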
Video of Demo Q-Learning -- Gridworld
Video of Demo Q-Learning -- Crawler
Q-Learning Properties
o Amazing result: Q-learning converges to optimal policy --
even if you’re acting suboptimally!

o This is called off-policy learning

o Caveats:
o You have to explore enough
o You have to eventually make the learning rate
small enough
o … but not decrease it too quickly
o Basically, in the limit, it doesn’t matter how you select actions (!)

[Demo: Q-learning – cliff grid (L11D2)]


Video of Demo Q-learning – Manual Exploration – Bridge Grid
Active Reinforcement Learning
Active Reinforcement Learning
o Full reinforcement learning: optimal policies (like value
iteration)
o You don’t know the transitions T(s,a,s’)
o You don’t know the rewards R(s,a,s’)
o You choose the actions now
o Goal: learn the optimal policy / values

o In this case:
o Learner makes choices!
o Fundamental tradeoff: exploration vs. exploitation
o This is NOT offline planning! You actually take actions in the world
and find out what happens…
Exploration vs. Exploitation
How to Explore?
o Several schemes for forcing exploration
o Simplest: random actions (ε-greedy; see the sketch below)
o Every time step, flip a coin
o With (small) probability ε, act randomly
o With (large) probability 1-ε, act on the current policy

o Problems with random actions?
o You do eventually explore the space, but keep thrashing around once learning is done
o One solution: lower ε over time
o Another solution: exploration functions

[Demo: Q-learning – epsilon-greedy -- crawler (L10D3)]
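A minimal sketch of ε-greedy action selection, reusing the assumed Q table and legal_actions helper from the earlier sketch (the epsilon value here is illustrative):

    import random

    def epsilon_greedy_action(s, legal_actions, Q, epsilon=0.05):
        """With probability epsilon act randomly; otherwise act on the current greedy policy."""
        actions = legal_actions(s)
        if random.random() < epsilon:
            return random.choice(actions)                 # explore
        return max(actions, key=lambda a: Q[(s, a)])      # exploit current Q-values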


Video of Demo Q-learning – Epsilon-Greedy – Crawler
Exploration Functions
o When to explore?
o Random actions: explore a fixed amount
o Better idea: explore areas whose badness is not
(yet) established, eventually stop exploring

o Exploration function
o Takes a value estimate u and a visit count n, and returns an optimistic utility, e.g. f(u, n) = u + k/n for some constant k (see the sketch below)

Regular Q-update: Q(s,a) ← (1-α) Q(s,a) + α [R(s,a,s') + γ max_a' Q(s',a')]
Modified Q-update: Q(s,a) ← (1-α) Q(s,a) + α [R(s,a,s') + γ max_a' f(Q(s',a'), N(s',a'))]

o Note: this propagates the “bonus” back to states that lead to unknown states as well!

[Demo: exploration – Q-learning – crawler – exploration function (L10D4)]
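A minimal sketch of the modified update, reusing Q, alpha, gamma, and legal_actions from the earlier sketches; the visit-count table N, the constant k, and the "+ 1" in the bonus are assumptions made for the sketch:

    N = defaultdict(int)   # N[(state, action)] = number of times this pair was tried
    k = 2.0                # exploration bonus weight (assumed value)

    def f(u, n):
        """Exploration function: a value estimate plus a bonus that shrinks with visits."""
        return u + k / (n + 1)   # + 1 keeps the bonus finite for unvisited pairs

    def q_update_with_exploration(s, a, s_prime, r, legal_actions):
        """Modified Q-update: score the successor's actions by their optimistic utility."""
        N[(s, a)] += 1
        sample = r + gamma * max(
            f(Q[(s_prime, ap)], N[(s_prime, ap)]) for ap in legal_actions(s_prime)
        )
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample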
Video of Demo Q-learning – Exploration Function – Crawler
Regret
o Even if you learn the optimal
policy, you still make mistakes
along the way!
o Regret is a measure of your total
mistake cost: the difference
between your (expected) rewards,
including youthful suboptimality,
and optimal (expected) rewards
o Minimizing regret goes beyond
learning to be optimal – it requires
optimally learning to be optimal
o Example: random exploration and
exploration functions both end up
optimal, but random exploration
has higher regret
Reinforcement Learning -- Overview
o Passive Reinforcement Learning (= how to learn from experiences)
o Model-based Passive RL
o Learn the MDP model from experiences, then solve the MDP
o Model-free Passive RL
o Forego learning the MDP model, directly learn V or Q:
o Value learning – learns value of a fixed policy; 2 approaches: Direct Evaluation & TD Learning
o Q learning – learns Q values of the optimal policy (uses a Q version of TD Learning)
o Active Reinforcement Learning (= agent also needs to decide how to collect experiences)
o Key challenges:
o How to efficiently explore?
o How to trade off exploration vs. exploitation
o Applies to both model-based and model-free. In CS188 we’ll cover only in the context of Q-learning
o Approximate Reinforcement Learning (= to handle large state spaces)
o Approximate Q-Learning
o Policy Search
Approximate Q-Learning
Generalizing Across States
o Basic Q-Learning keeps a table of all q-values

o In realistic situations, we cannot possibly learn about every single state!
o Too many states to visit them all in training
o Too many states to hold the q-tables in memory

o Instead, we want to generalize:


o Learn about some small number of training states
from experience
o Generalize that experience to new, similar situations
o This is a fundamental idea in machine learning, and
we’ll see it over and over again
Example: Pacman
Let’s say we discover through experience that this state is bad. In naïve Q-learning, we know nothing about this state, or even this one!

[Demo: Q-learning – pacman – tiny – watch all (L11D4)]


[Demo: Q-learning – pacman – tiny – silent train (L11D6)]
[Demo: Q-learning – pacman – tricky – watch all (L11D5)]
Video of Demo Q-Learning Pacman – Tiny – Watch All
Video of Demo Q-Learning Pacman – Tiny – Silent Train
Video of Demo Q-Learning Pacman – Tricky – Watch All
Feature-Based Representations

o Solution: describe a state using a vector of features (properties)
o Features are functions from states to real numbers
(often 0/1) that capture important properties of the
state
o Example features:
o Distance to closest ghost
o Distance to closest dot
o Number of ghosts
o 1 / (dist to dot)²
o Is Pacman in a tunnel? (0/1)
o … etc.
o Is it the exact state on this slide?
o Can also describe a q-state (s, a) with features (e.g. action moves closer to food); see the sketch below
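To make the idea concrete, here is a toy sketch of a q-state feature extractor; the state representation (a dict of grid positions) and the feature names are invented for this illustration:

    def features(state, action):
        """Toy feature extractor for a Pacman-like q-state (s, a).

        Assumes state = {"pacman": (x, y), "dots": [(x, y), ...], "ghosts": [(x, y), ...]}.
        """
        x, y = state["pacman"]
        dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[action]
        nx, ny = x + dx, y + dy                  # position after taking the action

        def dist(p):
            return abs(p[0] - nx) + abs(p[1] - ny)   # Manhattan distance from the new position

        closest_dot = min(map(dist, state["dots"]), default=0)
        return {
            "bias": 1.0,
            "closest-dot": 1.0 / (closest_dot + 1),   # closer dots -> larger feature value
            "ghosts-1-step-away": float(sum(dist(g) <= 1 for g in state["ghosts"])),
        }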
Linear Value Functions

o Using a feature representation, we can write a q function (or value function) for any state using a few weights:
V(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
Q(s,a) = w1 f1(s,a) + w2 f2(s,a) + … + wn fn(s,a)

o Advantage: our experience is summed up in a few powerful numbers

o Disadvantage: states may share features but actually be very different in value!
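A minimal sketch of such a linear Q-function, assuming weights are stored in a dict keyed by feature name:

    def q_value(w, features, s, a):
        """Linear Q-function: a weighted sum of feature values, one weight per feature."""
        return sum(w.get(name, 0.0) * value for name, value in features(s, a).items())

    # Illustrative use (these weights are made up):
    # w = {"bias": 1.0, "closest-dot": 4.0, "ghosts-1-step-away": -10.0}
    # q_value(w, features, state, "N")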
Approximate Q-Learning

o Q-learning with linear Q-functions:

transition = (s, a, r, s'), difference = [r + γ max_a' Q(s',a')] − Q(s,a)
Exact Q's: Q(s,a) ← Q(s,a) + α · difference
Approximate Q's: w_i ← w_i + α · difference · f_i(s,a)

o Intuitive interpretation:
o Adjust weights of active features
o E.g., if something unexpectedly bad happens, blame the features that were on: disprefer all states with that state’s features (see the sketch below)

o Formal justification: online least squares
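A minimal sketch of this weight update, reusing q_value, alpha, gamma, and legal_actions from the earlier sketches:

    def approx_q_update(w, features, s, a, s_prime, r, legal_actions):
        """Adjust the weights of the features that were active in (s, a)."""
        # difference = target - prediction
        q_next = max(q_value(w, features, s_prime, ap) for ap in legal_actions(s_prime))
        difference = (r + gamma * q_next) - q_value(w, features, s, a)
        # Each active feature's weight moves in proportion to its value and the error.
        for name, value in features(s, a).items():
            w[name] = w.get(name, 0.0) + alpha * difference * value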


Example: Q-Pacman

[Demo: approximate Q-learning pacman (L11D8)]
Video of Demo Approximate Q-Learning -- Pacman
DeepMind Atari (©Two Minute Lectures)
approximate Q-learning with neural nets

Q-Learning and Least Squares
Linear Approximation: Regression
[Figure: linear regression; two panels of data points with their linear predictions, one using a single feature and one using two features]
Optimization: Least Squares

[Figure: least squares; the error or “residual” is the gap between each observation and the prediction]
Minimizing Error
Imagine we had only one point x, with features f(x), target value y, and weights w; the error and its gradient step are worked out below.

Approximate Q update explained: the “target” is the sampled value r + γ max_a' Q(s',a') and the “prediction” is the current Q(s,a).
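Written out in standard notation (a reconstruction of the usual online least-squares step, not copied verbatim from the slides):

    \mathrm{error}(w) = \tfrac{1}{2}\Big(y - \sum_k w_k f_k(x)\Big)^2

    \frac{\partial\,\mathrm{error}(w)}{\partial w_m} = -\Big(y - \sum_k w_k f_k(x)\Big) f_m(x)

    w_m \leftarrow w_m + \alpha \Big(y - \sum_k w_k f_k(x)\Big) f_m(x)

Substituting the Q-learning target and prediction gives the approximate Q update from the previous slides:

    w_m \leftarrow w_m + \alpha \Big[\big(r + \gamma \max_{a'} Q(s',a')\big) - Q(s,a)\Big] f_m(s,a)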
Overfitting: Why Limiting Capacity Can Help
[Figure: a degree-15 polynomial fit to a small set of data points, illustrating overfitting]
Policy Search
Policy Search
o Problem: often the feature-based policies that work well (win games, maximize
utilities) aren’t the ones that approximate V / Q best
o E.g. your value functions from project 2 were probably horrible estimates of future rewards,
but they still produced good decisions
o Q-learning’s priority: get Q-values close (modeling)
o Action selection priority: get ordering of Q-values right (prediction)
o We’ll see this distinction between modeling and prediction again later in the course

o Solution: learn policies that maximize rewards, not the values that predict them

o Policy search: start with an ok solution (e.g. Q-learning) then fine-tune by hill
climbing on feature weights
Policy Search
o Simplest policy search:
o Start with an initial linear value function or Q-function
o Nudge each feature weight up and down and see if your policy is better than before (a minimal sketch of this appears below)

o Problems:
o How do we tell the policy got better?
o Need to run many sample episodes!
o If there are a lot of features, this can be impractical

o Better methods exploit lookahead structure, sample wisely, change multiple parameters…
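A minimal sketch of that simplest scheme, assuming a hypothetical evaluate_policy helper (it would run a batch of sample episodes with the greedy policy induced by the weights and return the average return, which is exactly the expensive part noted above):

    def hill_climb(w, evaluate_policy, step=0.1, sweeps=10):
        """Nudge each feature weight up and down; keep only the nudges that improve the policy."""
        best = evaluate_policy(w)
        for _ in range(sweeps):
            for name in list(w):
                for delta in (step, -step):
                    w[name] += delta
                    score = evaluate_policy(w)
                    if score > best:
                        best = score          # keep the improvement
                    else:
                        w[name] -= delta      # revert the nudge
        return w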
RL: Helicopter Flight

[Andrew Ng] [Video: HELICOPTER]


RL: Learning Locomotion

[Schulman, Moritz, Levine, Jordan, Abbeel, ICLR 2016] [Video: GAE]


Conclusion of CS188 Part I

o We’re done with Part I: Search and Planning!

o We’ve seen how AI methods can solve problems in:
o Search
o Constraint Satisfaction Problems
o Games
o Markov Decision Problems
o Reinforcement Learning

o Next up: Part II: Uncertainty and Learning!
