
REINFORCEMENT LEARNING

Unit 03: The Reinforcement Learning Problem

Syllabus: The Agent–Environment Interface, Goals and Rewards, Returns, Unified Notation for Episodic and Continuing Tasks, Value Functions, Optimal Value Functions, Optimality and Approximation

3.1 The Agent–Environment Interface


Expected Question: What is the Markov property? Explain the concept of an MDP with its elements. (04 Marks, Oct/Nov 2022)
Markov Decision Processes (MDPs) are meant to be a straightforward framing of the problem of learning from interaction to achieve a goal. The learner and decision maker is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment.
These interact continually, the agent selecting actions and the environment responding to these actions and presenting new situations to the agent. The environment also gives rise to rewards, special numerical values that the agent seeks to maximize over time through its choice of actions.

More specifically, the agent and environment interact at each of a sequence of discrete time steps, t = 0, 1, 2, 3, . . . . At each time step t, the agent receives some representation of the environment's state, St ∊ S, and on that basis selects an action, At ∊ A(s). One time step later, in part as a consequence of its action, the agent receives a numerical reward, Rt+1 ∊ R ⊂ ℝ, and finds itself in a new state, St+1. The MDP and agent together thereby give rise to a sequence or trajectory that begins like this:

S0, A0, R1, S1, A1, R2, S2, A2, R3, . . .

In a finite MDP, the sets of states, actions, and rewards (S, A, and R) all have a finite number of elements. In this case, the random variables Rt and St have well-defined discrete probability distributions dependent only on the preceding state and action. That is, there is a probability of those values occurring at time t, given particular values of the preceding state and action:

p(s', r | s, a) ≐ Pr{St = s', Rt = r | St−1 = s, At−1 = a}

The function p defines the dynamics of the MDP. The dot over the equals sign reminds us that it is a definition (in this case of the function p). The dynamics function p : S x R x S x A → [0, 1] is an ordinary deterministic function of four arguments. The '|' in the middle of it comes from the notation for conditional probability, but here it just reminds us that p specifies a probability distribution for each choice of s and a, that is, that

Σ_s' Σ_r p(s', r | s, a) = 1, for all s ∊ S, a ∊ A(s).
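As a concrete illustration (not part of the original notes), a finite MDP's dynamics can be stored as a table keyed by (s, a), with each entry listing (s', r, probability) triples. The state and action names below are invented for the sketch; a minimal check that p sums to 1 for every (s, a) might look like:

```python
# A toy two-state MDP, stored as p[(s, a)] = [(s_next, reward, prob), ...].
# State/action names are illustrative, not from the notes.
p = {
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s0", "go"):   [("s1", 1.0, 0.8), ("s0", 0.0, 0.2)],
    ("s1", "stay"): [("s1", 0.0, 1.0)],
    ("s1", "go"):   [("s0", -1.0, 0.9), ("s1", 0.0, 0.1)],
}

def check_dynamics(p, tol=1e-9):
    """Verify that p(., . | s, a) is a probability distribution for every (s, a)."""
    for (s, a), outcomes in p.items():
        total = sum(prob for _, _, prob in outcomes)
        assert abs(total - 1.0) < tol, f"p does not sum to 1 for ({s}, {a}): {total}"

check_dynamics(p)
print("All (s, a) pairs define valid distributions.")
```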


In a Markov decision process, the probabilities given by p completely characterize the environment's dynamics. That is, the probability of each possible value for St and Rt depends on the immediately preceding state and action, St−1 and At−1, and, given them, not at all on earlier states and actions.

The state must include information about all aspects of the past agent–environment interaction that make a difference for the future. If it does, then the state is said to have the Markov property.
From the four-argument dynamics function p, one can compute anything else one might want to know about the environment, such as the state-transition probabilities (which we denote, with a slight abuse of notation, as a three-argument function p : S x S x A → [0, 1]):

p(s' | s, a) ≐ Pr{St = s' | St−1 = s, At−1 = a} = Σ_r p(s', r | s, a)

We can also compute the expected rewards for state–action pairs as a two-argument function r : S x A → ℝ:

r(s, a) ≐ E[Rt | St−1 = s, At−1 = a] = Σ_r r Σ_s' p(s', r | s, a)

and the expected rewards for state–action–next-state triples as a three-argument function r : S x A x S → ℝ:

r(s, a, s') ≐ E[Rt | St−1 = s, At−1 = a, St = s'] = Σ_r r · p(s', r | s, a) / p(s' | s, a)
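A sketch of these derived quantities, reusing the nested-dict layout from the previous snippet (all names and numbers are illustrative assumptions, not from the notes):

```python
# p[(s, a)] = [(s_next, reward, prob), ...] -- same toy layout as above.
p = {
    ("s0", "go"): [("s1", 1.0, 0.8), ("s0", 0.0, 0.2)],
    ("s1", "go"): [("s0", -1.0, 0.9), ("s1", 0.0, 0.1)],
}

def transition_prob(p, s, a, s_next):
    """p(s'|s,a): marginalize the four-argument dynamics over rewards."""
    return sum(prob for sn, _, prob in p[(s, a)] if sn == s_next)

def expected_reward(p, s, a):
    """r(s,a): expected immediate reward for a state-action pair."""
    return sum(r * prob for _, r, prob in p[(s, a)])

def expected_reward_triple(p, s, a, s_next):
    """r(s,a,s'): expected reward given that the next state is s'."""
    num = sum(r * prob for sn, r, prob in p[(s, a)] if sn == s_next)
    den = transition_prob(p, s, a, s_next)
    return num / den if den > 0 else 0.0

print(transition_prob(p, "s0", "go", "s1"))         # 0.8
print(expected_reward(p, "s0", "go"))               # 0.8
print(expected_reward_triple(p, "s0", "go", "s1"))  # 1.0
```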

The MDP framework is abstract and flexible and can be applied to many different problems in many different ways. For example, the time steps need not refer to fixed intervals of real time; they can refer to arbitrary successive stages of decision making and acting.
Actions can be any decisions we want to learn how to make, and states can be anything we can know that might be useful in making them.
In particular, the boundary between agent and environment is typically not the same as the physical boundary of a robot's or an animal's body. Usually, the boundary is drawn closer to the agent than that.
The general rule we follow is that anything that cannot be changed arbitrarily by the agent is considered to be outside of it and thus part of its environment. We do not assume that everything in the environment is unknown to the agent.
In some cases the agent may know everything about how its environment works and still face a difficult reinforcement learning task, just as we may know exactly how a puzzle like Rubik's cube works but still be unable to solve it. The agent–environment boundary represents the limit of the agent's absolute control, not of its knowledge.
The agent–environment boundary can be located at different places for different purposes. In a complicated robot, many different agents may be operating at once, each with its own boundary.
The MDP framework is a considerable abstraction of the problem of goal-directed learning from interaction. It proposes that whatever the details of the sensory, memory, and control apparatus, and whatever objective one is trying to achieve, any problem of learning goal-directed behavior can be reduced to three signals passing back and forth between an agent and its environment: one signal to represent the choices made by the agent (the actions), one signal to represent the basis on which the choices are made (the states), and one signal to define the agent's goal (the rewards). This framework may not be sufficient to represent all decision-learning problems usefully, but it has proved to be widely useful and applicable.
Grid world Example:
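The worked grid-world figure from the original notes is not reproduced here. As a purely illustrative stand-in, the sketch below encodes a small grid world as an MDP in the four-argument p(s', r | s, a) form described above; the grid size, reward values, and layout are assumptions, not the figure's actual numbers:

```python
# A hypothetical 3x3 grid world: states are (row, col), actions move one cell.
# Moving off the grid leaves the state unchanged and costs -1; reaching the
# goal cell (2, 2) yields +10. All numbers here are made up for illustration,
# and the goal would normally be treated as a terminal state.
ROWS, COLS, GOAL = 3, 3, (2, 2)
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def dynamics(s, a):
    """Return [(s_next, reward, prob)] -- deterministic, so a single outcome."""
    r, c = s
    dr, dc = MOVES[a]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < ROWS and 0 <= nc < COLS):
        return [((r, c), -1.0, 1.0)]      # bumped into the wall
    if (nr, nc) == GOAL:
        return [((nr, nc), 10.0, 1.0)]    # reached the goal
    return [((nr, nc), 0.0, 1.0)]         # ordinary move

print(dynamics((0, 0), "right"))  # [((0, 1), 0.0, 1.0)]
print(dynamics((0, 0), "up"))     # [((0, 0), -1.0, 1.0)]
print(dynamics((2, 1), "right"))  # [((2, 2), 10.0, 1.0)]
```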

3.2 Goals and Rewards


In reinforcement learning, the purpose or goal of the agent is formalized in terms of a special signal, called the reward, passing from the environment to the agent. At each time step, the reward is a simple number, Rt ∊ ℝ. Informally, the agent's goal is to maximize the total amount of reward it receives. This means maximizing not immediate reward, but cumulative reward in the long run. This idea is known as the reward hypothesis:
That all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward).
Although formulating goals in terms of reward signals might at first appear limiting, in practice it has proved to be flexible and widely applicable.
For example, to make a robot learn to walk, researchers have provided reward on each time step proportional to the robot's forward motion. In making a robot learn how to escape from a maze, the reward is often −1 for every time step that passes prior to escape; this encourages the agent to escape as quickly as possible.
For an agent to learn to play checkers or chess, the natural rewards are +1 for winning, −1 for losing, and 0 for drawing and for all nonterminal positions.
In the above examples, the agent always learns to maximize its reward. If we want it to do something for us, we must provide rewards to it in such a way that in maximizing them the agent will also achieve our goals.
It is thus critical that the rewards we set up truly indicate what we want accomplished. In particular, the reward signal is not the place to impart to the agent prior knowledge about how to achieve what we want it to do.
For example, a chess-playing agent should be rewarded only for actually winning, not for achieving subgoals such as taking its opponent's pieces or gaining control of the center of the board. If achieving these sorts of subgoals were rewarded, then the agent might find a way to achieve them without achieving the real goal.
The reward signal is your way of communicating to the agent what you want achieved, not how you want it achieved.

3.3 Returns and Episodes


The agent's goal is to maximize the cumulative reward it receives in the long run. In general, we seek to maximize the expected return, where the return, denoted Gt, is defined as some specific function of the reward sequence. In the simplest case the return is the sum of the rewards:

Gt ≐ Rt+1 + Rt+2 + Rt+3 + ... + RT,     (3.7)

where T is a final time step. This approach makes sense in applications in which there is a natural notion of final time step, that is, when the agent–environment interaction breaks naturally into subsequences, which we call episodes. Each episode ends in a special state called the terminal state, followed by a reset to a standard starting state or to a sample from a standard distribution of starting states. Even if you think of episodes as ending in different ways, such as winning and losing a game, the next episode begins independently of how the previous one ended. Thus, the episodes can all be considered to end in the same terminal state, with different rewards for the different outcomes. Tasks with episodes of this kind are called episodic tasks.
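A minimal sketch of the undiscounted episodic return (3.7): given the rewards observed after time t up to the terminal step T, Gt is simply their sum. The reward values below are invented for the example:

```python
def episodic_return(rewards):
    """G_t for an episodic task with no discounting: R_{t+1} + ... + R_T."""
    return sum(rewards)

# Rewards R_1, ..., R_T of one hypothetical episode.
episode_rewards = [0.0, 0.0, -1.0, 0.0, 10.0]
print(episodic_return(episode_rewards))  # 9.0
```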
On the other hand, in many cases the agent–environment interaction does not break naturally into identifiable episodes, but goes on continually without limit.
We call these continuing tasks. The return formulation (3.7) is problematic for continuing tasks because the final time step would be T = ∞, and the return, which is what we are trying to maximize, could easily be infinite.
The additional concept that we need is that of discounting. According to this approach, the agent tries to select actions so that the sum of the discounted rewards it receives over the future is maximized. In particular, it chooses At to maximize the expected discounted return:

Gt ≐ Rt+1 + γ Rt+2 + γ^2 Rt+3 + ... = Σ_{k=0}^{∞} γ^k Rt+k+1,     (3.8)

where γ is a parameter, 0 ≤ γ ≤ 1, called the discount rate.

The discount rate determines the present value of future rewards: a reward received k time steps in the future is worth only γ^(k−1) times what it would be worth if it were received immediately. If γ < 1, the infinite sum in (3.8) has a finite value as long as the reward sequence {Rk} is bounded.
If γ = 0, the agent is "myopic" in being concerned only with maximizing immediate rewards: its objective in this case is to learn how to choose At so as to maximize only Rt+1. If each of the agent's actions happened to influence only the immediate reward, not future rewards as well, then a myopic agent could maximize (3.8) by separately maximizing each immediate reward.
As γ approaches 1, the return objective takes future rewards into account more strongly; the agent becomes more farsighted.
Returns at successive time steps are related to each other in a way that is important for the theory and algorithms of reinforcement learning:

Gt ≐ Rt+1 + γ Rt+2 + γ^2 Rt+3 + ... = Rt+1 + γ Gt+1.     (3.9)

Note that although the return (3.8) is a sum of an infinite number of terms, it is still finite if the reward is nonzero and constant, provided γ < 1. For example, if the reward is a constant +1, then the return is

Gt = Σ_{k=0}^{∞} γ^k = 1 / (1 − γ).
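A small sketch, with made-up reward values, that computes the discounted return (3.8) directly and also via the recursion (3.9), confirming the two agree; it also checks the constant-reward case against 1/(1 − γ) using a long truncated sum:

```python
def discounted_return(rewards, gamma):
    """G_t = sum_k gamma^k * R_{t+k+1}, computed directly."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

def discounted_return_recursive(rewards, gamma):
    """Same quantity via G_t = R_{t+1} + gamma * G_{t+1}, working backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1.0, 0.0, -2.0, 3.0]        # illustrative values
gamma = 0.9
assert abs(discounted_return(rewards, gamma)
           - discounted_return_recursive(rewards, gamma)) < 1e-12

# Constant reward +1: a long truncated sum approaches 1 / (1 - gamma).
approx = discounted_return([1.0] * 1000, gamma)
print(approx, 1.0 / (1.0 - gamma))     # both close to 10.0
```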

3.4 Unified Notation for Episodic and Continuing Tasks


We have two kinds of reinforcement learning tasks, one in which the agent–environment interaction naturally breaks down into a sequence of separate episodes (episodic tasks), and one in which it does not (continuing tasks). The former case is mathematically easier because each action affects only the finite number of rewards subsequently received during the episode.
It is therefore useful to establish one notation that enables us to talk precisely about both cases simultaneously.
To be precise about episodic tasks requires some additional notation. Rather than one long sequence of time steps, we need to consider a series of episodes, each of which consists of a finite sequence of time steps.

We number the time steps of each episode starting anew from zero. Therefore, we have to refer not just to St, the state representation at time t, but to St,i, the state representation at time t of episode i.

We are almost always considering a particular episode, or stating something that is true for all episodes. Accordingly, in practice we almost always abuse notation slightly by dropping the explicit reference to episode number. That is, we write St to refer to St,i, and so on.
We need one other convention to obtain a single notation that covers both episodic and continuing tasks. We have defined the return as a sum over a finite number of terms in one case (3.7) and as a sum over an infinite number of terms in the other (3.8). These two can be unified by considering episode termination to be the entering of a special absorbing state that transitions only to itself and that generates only rewards of zero. For example, consider the state transition diagram:

Here the solid square represents the special absorbing state corresponding to the end of an episode. Starting from S0, we get the reward sequence +1, +1, +1, 0, 0, 0, . . . . Summing these, we get the same return whether we sum over the first T rewards (here T = 3) or over the full infinite sequence. This remains true even if we introduce discounting.
Thus, we can define the return, in general, using the convention of omitting episode numbers when they are not needed, and including the possibility that γ = 1 if the sum remains defined (e.g., because all episodes terminate). Alternatively, we can write

Gt ≐ Σ_{k=t+1}^{T} γ^(k−t−1) Rk,

including the possibility that T = ∞ or γ = 1 (but not both). We use these conventions throughout the rest of the book to simplify notation and to express the close parallels between episodic and continuing tasks.
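A short sketch of the absorbing-state convention: padding an episode's reward list with zeros (the absorbing state's only rewards) leaves the return unchanged, whether or not discounting is used. Values are made up for illustration:

```python
def discounted_return(rewards, gamma=1.0):
    """G_0 = sum_k gamma^k * R_{k+1}; gamma = 1.0 gives the plain episodic sum."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

episode = [1.0, 1.0, 1.0]            # rewards until the terminal state (T = 3)
padded = episode + [0.0] * 100       # "continuing" inside the absorbing state

for gamma in (1.0, 0.9):
    assert abs(discounted_return(episode, gamma)
               - discounted_return(padded, gamma)) < 1e-12
print("Return is unchanged by the absorbing-state padding.")
```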

3.5 Policies and Value Functions


Almost all reinforcement learning algorithms involve estimating value functions: functions of states (or of state–action pairs) that estimate how good it is for the agent to be in a given state (or how good it is to perform a given action in a given state).
The notion of "how good" here is defined in terms of future rewards that can be expected, or, to be precise, in terms of expected return. Accordingly, value functions are defined with respect to particular ways of acting, called policies.
Formally, a policy is a mapping from states to probabilities of selecting each possible action. If the agent is following policy π at time t, then π(a|s) is the probability that At = a if St = s. Like p, π is an ordinary function; the "|" in the middle of π(a|s) merely reminds us that it defines a probability distribution over a ∊ A(s) for each s ∊ S.
Reinforcement learning methods specify how the agent's policy is changed as a result of its experience.
The value function of state s under a policy π, denoted vπ(s), is the expected return when starting in s and following π thereafter. For MDPs, we can define vπ formally by

vπ(s) ≐ Eπ[Gt | St = s] = Eπ[ Σ_{k=0}^{∞} γ^k Rt+k+1 | St = s ], for all s ∊ S,

where Eπ[·] denotes the expected value of a random variable given that the agent follows policy π, and t is any time step. Note that the value of the terminal state, if any, is always zero. We call the function vπ the state-value function for policy π.
Similarly, we define the value of taking action a in state s under a policy π, denoted qπ(s, a), as the expected return starting from s, taking the action a, and thereafter following policy π:

qπ(s, a) ≐ Eπ[Gt | St = s, At = a] = Eπ[ Σ_{k=0}^{∞} γ^k Rt+k+1 | St = s, At = a ].

We call qπ the action-value function for policy π.
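One way to make these definitions concrete (a sketch, not a method prescribed by the notes) is to estimate vπ(s) by Monte Carlo: simulate many episodes that start in s, follow π, and average the observed discounted returns. The environment and policy below are illustrative stand-ins:

```python
import random

GAMMA = 0.9

def policy(s):
    """pi(a|s): a uniform random policy over two illustrative actions."""
    return random.choice(["left", "right"])

def step(s, a):
    """A stand-in environment: from 'start', action 'right' reaches the goal
    with probability 0.5 (reward +1, episode ends); otherwise stay at 'start'."""
    if a == "right" and random.random() < 0.5:
        return "goal", 1.0, True
    return "start", 0.0, False

def estimate_v(s, episodes=10_000, max_steps=50):
    """Monte Carlo estimate of v_pi(s): average discounted return from s under pi."""
    total = 0.0
    for _ in range(episodes):
        state, g, discount = s, 0.0, 1.0
        for _ in range(max_steps):
            a = policy(state)
            state, r, done = step(state, a)
            g += discount * r
            discount *= GAMMA
            if done:
                break
        total += g
    return total / episodes

print(estimate_v("start"))
```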


A fundamental property of value functions used throughout reinforcement learning and dynamic programming is that they satisfy recursive relationships similar to that which we have already established for the return (3.9). For any policy π and any state s, the following consistency condition holds between the value of s and the values of its possible successor states:

vπ(s) = Σ_a π(a|s) Σ_{s',r} p(s', r | s, a) [ r + γ vπ(s') ], for all s ∊ S.     (3.14)

Equation (3.14) is the Bellman equation for vπ. It expresses a relationship between the value of a state and the values of its successor states.
The Bellman equation averages over all the possibilities, weighting each by its probability of occurring. It states that the value of the start state must equal the (discounted) value of the expected next state, plus the reward expected along the way.
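The Bellman equation can be turned directly into an algorithm: iterative policy evaluation sweeps the states, replacing each vπ(s) by the right-hand side of (3.14) until the values stop changing. This sketch assumes the tabular dynamics layout used earlier and an equiprobable policy; none of it is prescribed by the notes:

```python
GAMMA = 0.9

# p[(s, a)] = [(s_next, reward, prob), ...]; a tiny illustrative MDP.
p = {
    ("s0", "go"):   [("s1", 1.0, 1.0)],
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s1", "go"):   [("s0", 0.0, 1.0)],
    ("s1", "stay"): [("s1", 2.0, 1.0)],
}
states = ["s0", "s1"]
actions = ["go", "stay"]

def pi(a, s):
    """An equiprobable policy: pi(a|s) = 1 / |A(s)|."""
    return 1.0 / len(actions)

def policy_evaluation(p, theta=1e-8):
    """Iteratively apply the Bellman equation (3.14) until the values converge."""
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new_v = sum(
                pi(a, s) * sum(prob * (r + GAMMA * v[s_next])
                               for s_next, r, prob in p[(s, a)])
                for a in actions)
            delta = max(delta, abs(new_v - v[s]))
            v[s] = new_v
        if delta < theta:
            return v

print(policy_evaluation(p))
```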

3.6 Optimal Policies and Optimal Value Functions


Solving a reinforcement learning task means, roughly, finding a policy that achieves a lot of reward over the long run. For finite MDPs, we can precisely define an optimal policy in the following way. Value functions define a partial ordering over policies.

A policy π is defined to be better than or equal to a policy π' if its expected return is greater than or equal to that of π' for all states. That is, π ≥ π' if and only if vπ(s) ≥ vπ'(s) for all s ∊ S.

There is always at least one policy that is better than or equal to all other policies. This is an optimal policy. Although there may be more than one, we denote all the optimal policies by π*. They share the same state-value function, called the optimal state-value function, denoted v*, and defined as

v*(s) ≐ max_π vπ(s), for all s ∊ S.

Optimal policies also share the same optimal action-value function, denoted q*, and defined as

q*(s, a) ≐ max_π qπ(s, a),

for all s ∊ S and a ∊ A(s). For the state–action pair (s, a), this function gives the expected return for taking action a in state s and thereafter following an optimal policy. Thus, we can write q* in terms of v* as follows:

q*(s, a) = E[ Rt+1 + γ v*(St+1) | St = s, At = a ].
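Given the tabular dynamics p and a table of v* values, the last relation can be computed directly; a sketch with assumed names and numbers:

```python
GAMMA = 0.9

# Same nested-dict dynamics layout as earlier; values are illustrative.
p = {
    ("s0", "go"):   [("s1", 1.0, 0.8), ("s0", 0.0, 0.2)],
    ("s0", "stay"): [("s0", 0.0, 1.0)],
}
v_star = {"s0": 4.0, "s1": 6.0}   # pretend these optimal values are known

def q_from_v(p, v, s, a):
    """q*(s,a) = E[R_{t+1} + gamma * v*(S_{t+1}) | S_t = s, A_t = a]."""
    return sum(prob * (r + GAMMA * v[s_next]) for s_next, r, prob in p[(s, a)])

print(q_from_v(p, v_star, "s0", "go"))    # 0.8*(1 + 0.9*6) + 0.2*(0 + 0.9*4) = 5.84
print(q_from_v(p, v_star, "s0", "stay"))  # 1.0*(0 + 0.9*4) = 3.6
```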

Because v* is the value function for a policy, it must satisfy the self-consistency condition given by the Bellman equation for state values. Because it is the optimal value function, however, v*'s consistency condition can be written in a special form without reference to any specific policy. This is the Bellman equation for v*, or the Bellman optimality equation. Intuitively, the Bellman optimality equation expresses the fact that the value of a state under an optimal policy must equal the expected return for the best action from that state:

v*(s) = max_a E[ Rt+1 + γ v*(St+1) | St = s, At = a ]
      = max_a Σ_{s',r} p(s', r | s, a) [ r + γ v*(s') ].

The last two equations are two forms of the Bellman optimality equation for v*. The Bellman optimality equation for q* is

q*(s, a) = E[ Rt+1 + γ max_a' q*(St+1, a') | St = s, At = a ]
         = Σ_{s',r} p(s', r | s, a) [ r + γ max_a' q*(s', a') ].

The backup diagrams below show graphically the spans of future states and actions considered in the Bellman optimality equations for v* and q*. The backup diagram on the left graphically represents the Bellman optimality equation for v*, and the backup diagram on the right graphically represents the Bellman optimality equation for q*.

[Figure: backup diagrams for the Bellman optimality equations for v* (left) and q* (right)]


The Bellman optimality equation is actually a system of equations, one for each state, so if there are n states, then there are n equations in n unknowns. If the dynamics p of the environment are known, then in principle one can solve this system of equations for v* using any one of a variety of methods for solving systems of nonlinear equations.
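One such method (named here for illustration; the notes do not single it out) is value iteration, which repeatedly applies the Bellman optimality backup v(s) ← max_a Σ_{s',r} p(s', r | s, a)[r + γ v(s')] until the values stop changing. A sketch using the same tabular dynamics layout as above:

```python
GAMMA = 0.9

# p[(s, a)] = [(s_next, reward, prob), ...]; a tiny illustrative MDP.
p = {
    ("s0", "go"):   [("s1", 1.0, 1.0)],
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s1", "go"):   [("s0", 0.0, 1.0)],
    ("s1", "stay"): [("s1", 2.0, 1.0)],
}
states = ["s0", "s1"]
actions = ["go", "stay"]

def value_iteration(p, theta=1e-8):
    """Solve the Bellman optimality equation for v* by repeated backups."""
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(prob * (r + GAMMA * v[s_next])
                           for s_next, r, prob in p[(s, a)])
                       for a in actions)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < theta:
            return v

v_star = value_iteration(p)
print(v_star)   # v*("s1") ≈ 2 / (1 - 0.9) = 20; v*("s0") ≈ 1 + 0.9 * 20 = 19
```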
Once one has v*, it is relatively easy to determine an optimal policy. For each state s, there will be one or more actions at which the maximum is obtained in the Bellman optimality equation. Any policy that assigns nonzero probability only to these actions is an optimal policy.
Another way of saying this is that any policy that is greedy with respect to the optimal evaluation function v* is an optimal policy.
Having q* makes choosing optimal actions even easier. With q*, the agent does not even have to do a one-step-ahead search: for any state s, it can simply find any action that maximizes q*(s, a). The action-value function effectively caches the results of all one-step-ahead searches. It provides the optimal expected long-term return as a value that is locally and immediately available for each state–action pair.
Hence, at the cost of representing a function of state–action pairs, instead of just of states, the optimal action-value function allows optimal actions to be selected without having to know anything about possible successor states and their values, that is, without having to know anything about the environment's dynamics.
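A one-line sketch of this idea: with a q* table in hand (values assumed for illustration), acting optimally in a state is just an argmax over the actions available there, with no reference to the dynamics p:

```python
# Hypothetical optimal action values q*[(s, a)]; the numbers are made up.
q_star = {
    ("s0", "go"): 19.0, ("s0", "stay"): 17.1,
    ("s1", "go"): 18.0, ("s1", "stay"): 20.0,
}

def greedy_action(q, s, available_actions):
    """Pick any action maximizing q*(s, a) -- no model of the environment needed."""
    return max(available_actions, key=lambda a: q[(s, a)])

print(greedy_action(q_star, "s0", ["go", "stay"]))  # "go"
print(greedy_action(q_star, "s1", ["go", "stay"]))  # "stay"
```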

3.7 Optimality and Approximation


A well-defined notion of optimality organizes the approach to learning and provides a way to understand the theoretical properties of various learning algorithms, but it is an ideal that agents can only approximate. Even if we have a complete and accurate model of the environment's dynamics, it is usually not possible to simply compute an optimal policy by solving the Bellman optimality equation.
For example, board games such as chess are a tiny fraction of human experience, yet large, custom-designed computers still cannot compute the optimal moves.
A critical aspect of the problem facing the agent is always the computational power available to it, in particular, the amount of computation it can perform in a single time step.
The memory available is also an important constraint. A large amount of memory is often required to build up approximations of value functions, policies, and models. In tasks with small, finite state sets, it is possible to form these approximations using arrays or tables with one entry for each state (or state–action pair).
This we call the tabular case, and the corresponding methods we call tabular methods. In many cases of practical interest, however, there are far more states than could possibly be entries in a table. In these cases the functions must be approximated, using some sort of more compact parameterized function representation.
Our framing of the reinforcement learning problem forces us to settle for approximations. However, it also presents us with some unique opportunities for achieving useful approximations.
For example, in approximating optimal behavior, there may be many states that the agent faces with such a low probability that selecting suboptimal actions for them has little impact on the amount of reward the agent receives.
The online nature of reinforcement learning makes it possible to approximate optimal policies in ways that put more effort into learning to make good decisions for frequently encountered states, at the expense of less effort for infrequently encountered states. This is one key property that distinguishes reinforcement learning from other approaches to approximately solving MDPs.
