Reinforcement Learning
With Open AI, TensorFlow and Keras Using Python

Abhishek Nandy
Manisha Biswas
Reinforcement Learning
Abhishek Nandy, Kolkata, West Bengal, India
Manisha Biswas, North 24 Parganas, West Bengal, India
ISBN-13 (pbk): 978-1-4842-3284-2
ISBN-13 (electronic): 978-1-4842-3285-9
https://doi.org/10.1007/978-1-4842-3285-9
Library of Congress Control Number: 2017962867
Copyright © 2018 by Abhishek Nandy and Manisha Biswas
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole
or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical
way, and transmission or information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark
symbol with every occurrence of a trademarked name, logo, or image we use the names, logos,
and images only in an editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the
date of publication, neither the authors nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Cover image by Freepik (www.freepik.com)
Managing Director: Welmoed Spahr
Editorial Director: Todd Green
Acquisitions Editor: Celestin Suresh John
Development Editor: Matthew Moodie
Technical Reviewer: Avirup Basu
Coordinating Editor: Sanchita Mandal
Copy Editor: Kezia Endsley
Compositor: SPi Global
Indexer: SPi Global
Artist: SPi Global
Distributed to the book trade worldwide by Springer Science+Business Media New York,
233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505,
e-mail [email protected], or visit www.springeronline.com. Apress Media,
LLC is a California LLC and the sole member (owner) is Springer Science + Business Media
Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail [email protected], or visit
http://www.apress.com/rights-permissions.
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our
Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is
available to readers on GitHub via the book's product page, located at
www.apress.com/978-1-4842-3284-2. For more detailed information, please visit
http://www.apress.com/source-code.
Printed on acid-free paper
Contents

About the Authors  vii
About the Technical Reviewer  ix
Acknowledgments  xi
Introduction  xiii

Chapter 1: Reinforcement Learning Basics  1
  What Is Reinforcement Learning?  1
  Faces of Reinforcement Learning  6
  The Flow of Reinforcement Learning  7
  Different Terms in Reinforcement Learning  9
    Gamma  10
    Lambda  10
  Interactions with Reinforcement Learning  10
    RL Characteristics  11
    How Reward Works  12
    Agents  13
    RL Environments  14
  Conclusion  18

Chapter 2: RL Theory and Algorithms  19
  Theoretical Basis of Reinforcement Learning  19
  Where Reinforcement Learning Is Used  21
    Manufacturing  22
    Inventory Management  22
    Delivery Management  22
    Finance Sector  23
  Why Is Reinforcement Learning Difficult?  23
  Preparing the Machine  24
  Installing Docker  36
  An Example of Reinforcement Learning with Python  39
    What Are Hyperparameters?  41
    Writing the Code  41
  What Is MDP?  47
    The Markov Property  48
    The Markov Chain  49
    MDPs  53
  SARSA  54
    Temporal Difference Learning  54
    How SARSA Works  56
  Q Learning  56
    What Is Q?  57
    How to Use Q  57
    SARSA Implementation in Python  58
    The Entire Reinforcement Logic in Python  64
  Dynamic Programming in Reinforcement Learning  68
  Conclusion  69

Chapter 3: OpenAI Basics  71
  Getting to Know OpenAI  71
  Installing OpenAI Gym and OpenAI Universe  73
  Working with OpenAI Gym and OpenAI  75
  More Simulations  81
  OpenAI Universe  84
  Conclusion  87

Chapter 4: Applying Python to Reinforcement Learning  89
  Q Learning with Python  89
    The Maze Environment Python File  91
    The RL_Brain Python File  94
    Updating the Function  95
  Using the MDP Toolbox in Python  97
  Understanding Swarm Intelligence  109
    Applications of Swarm Intelligence  109
    Swarm Grammars  111
    The Rastrigin Function  111
    Swarm Intelligence in Python  116
  Building a Game AI  119
    The Entire TFLearn Code  124
  Conclusion  128

Chapter 5: Reinforcement Learning with Keras, TensorFlow, and ChainerRL  129
  What Is Keras?  129
  Using Keras for Reinforcement Learning  130
  Using ChainerRL  134
    Installing ChainerRL  134
    Pipeline for Using ChainerRL  137
  Deep Q Learning: Using Keras and TensorFlow  145
    Installing Keras-rl  146
    Training with Keras-rl  148
  Conclusion  153

Chapter 6: Google's DeepMind and the Future of Reinforcement Learning  155
  Google DeepMind  155
  Google AlphaGo  156
    What Is AlphaGo?  157
    Monte Carlo Search  159
  Man vs. Machines  161
    Positive Aspects of AI  161
    Negative Aspects of AI  161
  Conclusion  163

Index  165
About the Authors

Abhishek Nandy has a B.Tech. in information technology and considers himself a constant
learner. He is a Microsoft MVP for the Windows platform, an Intel Black Belt Developer,
and an Intel Software Innovator. Abhishek has a keen interest in artificial intelligence,
IoT, and game development. He is currently serving as an application architect at an IT
firm, consults in AI and IoT, and does projects in AI, Machine Learning, and deep learning.
He is also an AI trainer and drives the technical part of the Intel AI student developer
program. He was involved in the first Make in India initiative, where he was among the top
50 innovators and was trained at IIMA.

Manisha Biswas has a B.Tech. in information technology and currently works as a software
developer at InSync Tech-Fin Solutions Ltd in Kolkata, India. She is involved in several
areas of technology, including web development, IoT, soft computing, and artificial
intelligence. She is an Intel Software Innovator and was awarded the Shri Dewang Mehta IT
Awards 2016 by NASSCOM, a certificate of excellence for top academic scores. She very
recently formed a "Women in Technology" community in Kolkata, India to empower women to
learn and explore new technologies. She likes to invent things, create something new, and
give old things a new look. When not in front of her terminal, she is an explorer, a foodie,
a doodler, and a dreamer. She is always passionate about sharing her knowledge and ideas
with others, and she follows that passion by sharing her experiences with the community so
that others can learn, which led her to become the Google Women Techmakers Kolkata
Chapter Lead.

About the Technical Reviewer

Avirup Basu is an IoT application developer at Prescriber360 Solutions. He is a
researcher in robotics and has published papers through the IEEE.

Acknowledgments

I want to dedicate this book to my parents.


—Abhishek Nandy

I want to dedicate this book to my mom and dad. Thank you to my teachers and my
co-author, Abhishek Nandy. Thanks also to Abhishek Sur, who mentors me at work
and helps me adapt to new technologies. I would also like to dedicate this book to my
company, InSync Tech-Fin Solutions Ltd., where I started my career and have grown
professionally.

—Manisha Biswas

Introduction

This book is primarily based on a Machine Learning subset known as Reinforcement
Learning. We cover the basics of Reinforcement Learning with the help of the Python
programming language and touch on several aspects, such as Q learning, MDP, RL with
Keras, and OpenAI Gym and the OpenAI Environment, and also cover algorithms related
to RL.
Users need a basic understanding of programming in Python to benefit from this
book.
The book is meant for people who want to get into Machine Learning and learn more
about Reinforcement Learning.

CHAPTER 1

Reinforcement Learning
Basics

This chapter is a brief introduction to Reinforcement Learning (RL) and includes some
key concepts associated with it.
In this chapter, we talk about Reinforcement Learning as a core concept and then
define it further. We show a complete flow of how Reinforcement Learning works. We
discuss exactly where Reinforcement Learning fits into artificial intelligence (AI). After
that we define key terms related to Reinforcement Learning. We start with agents and
then touch on environments and then finally talk about the connection between agents
and environments.

What Is Reinforcement Learning?


We use Machine Learning to constantly improve the performance of machines or
programs over time. A simplified way of implementing a process that improves
machine performance over time is Reinforcement Learning (RL). Reinforcement
Learning is an approach through which intelligent programs, known as agents, work
in a known or unknown environment and constantly adapt and learn based on the
points they are given. The feedback might be positive, also known as rewards, or negative, also
called punishments. Considering the interaction between the agents and the environment, we then
determine which action to take.
In a nutshell, Reinforcement Learning is based on rewards and punishments.
Some important points about Reinforcement Learning:
• It differs from normal Machine Learning, as we do not look at
training datasets.
• Interaction happens not with data but with environments,
through which we depict real-world scenarios.


• As Reinforcement Learning is based on environments, many
parameters come into play. It takes lots of information to learn
and act accordingly.
• Environments in Reinforcement Learning are real-world
scenarios that might be 2D or 3D simulated worlds or
game-based scenarios.
• Reinforcement Learning is broader in a sense because the
environments can be large in scale and there might be a lot of
factors associated with them.
• The objective of Reinforcement Learning is to reach a goal.
• Rewards in Reinforcement Learning are obtained from the
environment.
The Reinforcement Learning cycle is depicted in Figure 1-1 with the help of a robot.

Figure 1-1. Reinforcement Learning cycle


A maze is a good example that can be studied using Reinforcement Learning, in
order to determine the exact right moves to complete the maze (see Figure 1-2).

Figure 1-2. Reinforcement Learning can be applied to mazes

In Figure 1-3, we apply Reinforcement Learning and call this the Reinforcement
Learning box, because the RL process works within its boundaries. RL starts
with intelligent programs, known as agents, and when they interact with environments,
there are rewards and punishments associated. An environment can be either known
or unknown to the agents. The agents take actions to move to the next state in order to
maximize rewards.


Figure 1-3. Reinforcement Learning flow

In the maze, the central concept is to keep moving. The goal is to clear the maze
and reach the end as quickly as possible.
The following concepts of Reinforcement Learning and the working scenario are
discussed later in this chapter.
• The agent is the intelligent program
• The environment is the maze
• The state is the place in the maze where the agent is
• The action is the move we take to move to the next state
• The reward is the points associated with reaching a particular
state. It can be positive, negative, or zero
We use the maze example to apply concepts of Reinforcement Learning. We will be
describing the following steps:

1. The concept of the maze is given to the agent.
2. There is a task associated with the agent, and Reinforcement
Learning is applied to it.
3. The agent receives a -1 reinforcement for every move it
makes from one state to another.
4. There is a reward system in place for the agent when it moves
from one state to another.


The reward predictions are made iteratively: we update the value of each
state in the maze based on the value of the best subsequent state and the immediate reward
obtained. This is called the update rule.
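The following is a minimal sketch (not taken from the book's own code) of how such an
update rule can look in Python for a tiny, made-up maze; the state names, rewards, and
gamma value are all hypothetical and are there only to show values propagating back from
the goal.

# Minimal sketch of the iterative update rule described above.
GAMMA = 0.9  # discount factor (a hypothetical value)

# For each state, the actions available and the state each action leads to.
transitions = {
    "S0": {"right": "S1", "down": "S2"},
    "S1": {"right": "GOAL"},
    "S2": {"right": "GOAL"},
    "GOAL": {},
}

# Immediate reward for entering each state.
rewards = {"S0": 0.0, "S1": 0.0, "S2": 0.0, "GOAL": 1.0}

# Start every state's value at zero and sweep the update rule a few times.
values = {state: 0.0 for state in transitions}
for _ in range(10):
    for state, actions in transitions.items():
        if not actions:  # the terminal state keeps its value
            continue
        values[state] = max(
            rewards[next_state] + GAMMA * values[next_state]
            for next_state in actions.values()
        )

print(values)  # values propagate back from the goal toward the start

After a few sweeps, S1 and S2 settle at 1.0 and S0 at 0.9, which is exactly the idea of
taking the value of the best subsequent state plus the immediate reward.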
The constant movement of the Reinforcement Learning process is based on
decision-making.
Reinforcement Learning works on a trial-and-error basis because it is very difficult to
predict which action to take when the agent is in a given state. From the maze problem itself, you can
see that in order to get the optimal path for the next move, you have to weigh a lot of factors.
It is always on the basis of states, actions, and rewards. For the maze, we have to compute
and account for the probability of taking each step.
The maze also does not consider the reward of the previous step; it specifically
considers the move to the next state. The concept is the same for all Reinforcement
Learning processes.
Here are the steps of this process:
1. We have a problem.
2. We have to apply Reinforcement Learning.
3. We consider applying Reinforcement Learning as a
Reinforcement Learning box.
4. The Reinforcement Learning box contains all essential
components needed for applying the Reinforcement Learning
process.
5. The Reinforcement Learning box contains agents,
environments, rewards, punishments, and actions.
Reinforcement Learning works well with intelligent program agents that receive rewards
and punishments when interacting with an environment.
The interaction happens between the agents and the environments, as shown in
Figure 1-4.

Figure 1-4. Interaction between agents and environments

From Figure 1-4, you can see that there is a direct interaction between the agent and
its environment. This interaction is very important because through these exchanges,
the agent adapts to the environments. When a Machine Learning program, robot, or
Reinforcement Learning program starts working, the agents are exposed to known or
unknown environments and the Reinforcement Learning technique allows the agents to
interact and adapt according to the environment’s features.
Accordingly, the agents work and the Reinforcement Learning robot learns. In order
to get to a desired position, we assign rewards and punishments.


Now, the program has to work out the optimal path to get maximum rewards; if
it fails, it takes punishments (that is, it receives negative points). In order to reach a new
position, which is also known as a state, it must perform what we call an action.
To perform an action, we implement a function, also known as a policy. A policy is
therefore a function that maps the current state to the action to take.
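To make the agent-environment-policy loop concrete, here is a minimal sketch that uses
the CartPole environment from OpenAI Gym (which is installed in a later chapter). This is
an assumed illustration, not the book's own listing: the environment choice is arbitrary,
the policy simply picks random actions, and the env.step() call follows the Gym API as it
existed around the time the book was written.

import gym

env = gym.make("CartPole-v0")

def policy(observation):
    # A real policy would map the observation (state) to an action;
    # this illustrative one ignores it and samples a random action.
    return env.action_space.sample()

observation = env.reset()  # the initial state supplied by the environment
total_reward = 0.0
done = False
while not done:
    action = policy(observation)                         # the agent chooses an action
    observation, reward, done, info = env.step(action)   # the environment responds
    total_reward += reward                               # rewards accumulate over the episode

print("Episode finished with total reward:", total_reward)

Everything in this chapter maps onto these few lines: the agent is the loop plus the
policy, the environment is the object returned by gym.make(), states are the observations,
and a reward is returned on every transition.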

Faces of Reinforcement Learning


As you see from the Venn diagram in Figure 1-5, Reinforcement Learning sits at the
intersection of many different fields of science.

Figure 1-5. All the faces of Reinforcement Learning


The intersection points reveal a very strong feature of Reinforcement Learning: it
is the science of decision-making. If we have two paths and have to decide which
path to take so that some objective is met, a scientific decision-making process can be
designed.
Reinforcement Learning is the fundamental science of optimal decision-making.
If we focus on the computer science part of the Venn diagram in Figure 1-5, we
see that learning falls under the category of Machine Learning, which is where
Reinforcement Learning is specifically mapped.
Reinforcement Learning can be applied to many different fields of science. In
engineering, we have devices that focus mostly on optimal control. In neuroscience, we
are concerned with how the brain makes decisions, and we study the reward system
that operates in the brain (the dopamine system).
Psychologists can apply Reinforcement Learning to determine how animals make
decisions. In mathematics, Reinforcement Learning is applied extensively in
operations research.

The Flow of Reinforcement Learning


Figure 1-6 connects agents and environments.

Figure 1-6. RL structure

The interaction happens from one state to another. The connection starts
between an agent and the environment, and rewards are given on a regular basis.
We take appropriate actions to move from one state to another.
The key points of consideration after going through the details are the following:
• The Reinforcement Learning cycle works in an interconnected
manner.
• There is distinct communication between the agent and the
environment.
• The distinct communication happens with rewards in mind.
• The object or robot moves from one state to another.
• An action is taken to move from one state to another.


Figure 1-7 simplifies the interaction process.

Figure 1-7. The entire interaction process

An agent is always learning and finally makes a decision. An agent is a learner, which
means there might be different paths. When the agent starts training, it starts to adapt and
intelligently learns from its surroundings.
The agent is also a decision maker because it tries to take an action that will get it the
maximum reward.
When the agent starts interacting with the environment, it can choose an action and
respond accordingly.
From then on, new scenes are created. When the agent changes from one place to
another in an environment, every change results in some kind of modification. These
changes are depicted as scenes. The transition that happens in each step helps the agent
solve the Reinforcement Learning problem more effectively.


Let’s look at another scenario of state transitioning, as shown in Figures 1-8 and 1-9.

Figure 1-8. Scenario of state changes

Figure 1-9. The state transition process

Learn to choose actions that maximize the following:

r₀ + γr₁ + γ²r₂ + …, where 0 < γ < 1

At each state transition, the reward is a different value, hence we describe the reward
with varying values at each step, such as r₀, r₁, r₂, and so on. Gamma (γ) is called the discount
factor and it determines how future rewards are weighted (a short numeric sketch follows the list below):
• A gamma value of 0 means the reward is associated with the
current state only
• A gamma value of 1 means that the reward is long-term
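As a quick numeric illustration (assumed, not taken from the book), the discounted sum
above can be computed directly from a list of per-step rewards; the reward values here
are made up.

# Discounted return r0 + γ·r1 + γ²·r2 + ... for a made-up reward sequence.
def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

rewards = [1.0, 0.0, 0.0, 5.0]            # hypothetical rewards r0..r3
print(discounted_return(rewards, 0.0))    # 1.0   -> only the current reward counts
print(discounted_return(rewards, 0.9))    # 4.645 -> future rewards are partly counted
print(discounted_return(rewards, 1.0))    # 6.0   -> all future rewards count fully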

Different Terms in Reinforcement Learning


Now we cover some common terms associated with Reinforcement Learning.
There are two constants that are important in this case—gamma (γ) and lambda (λ),
as shown in Figure 1-10.


Figure 1-10. Showing values of constants

Gamma is common in Reinforcement Learning problems, but lambda is generally used
in temporal difference problems.

Gamma
Gamma is used in each state transition and is a constant value at each state change.
Gamma allows you to give information about the type of reward you will be getting in
every state. Generally, the values determine whether we are looking for reward values in
each state only (in which case, it’s 0) or if we are looking for long-term reward values (in
which case it’s 1).

Lambda
Lambda is generally used when we are dealing with temporal difference problems. It is
more involved with predictions in successive states.
Increasing values of lambda in each state show that our algorithm is learning quickly.
A faster-learning algorithm yields better results when using Reinforcement Learning techniques.
As you’ll learn later, temporal differences can be generalized to what we call
TD(Lambda). We discuss it in greater depth later.

Interactions with Reinforcement Learning


Let’s now talk about Reinforcement Learning and its interactions. As shown in
Figure 1-11, the interactions between the agent and the environment occur with a reward.
We need to take an action to move from one state to another.


Figure 1-11. Reinforcement Learning interactions

Reinforcement Learning is a way of implementing how to map situations to actions
so as to maximize rewards and find a way to get the highest total reward.
The machine or robot is not told which actions to take, as with other forms of
Machine Learning, but instead the machine must discover which actions yield the
maximum reward by trying them.
In the most interesting and challenging cases, actions affect not only the immediate
reward but also the next situation and all subsequent rewards.

RL Characteristics
We talk about characteristics next. The characteristics are generally what the agent does
to move to the next state. The agent considers which approach works best to make the
next move.
The two characteristics are
• Trial and error search.
• Delayed reward.
As you probably have gathered, Reinforcement Learning works on three things
combined:

(S,A,R)

Where S represents state, A represents action, and R represents reward.


If you are in a state S, you perform an action A so that you get a reward R at time
frame t+1. Now, the most important part is when you move to the next state. In this case,
we do not use the reward we just earned to decide where to move next. Each transition
has a unique reward and no reward from any previous state is used to determine the next
move. See Figure 1-12.
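As a small assumed illustration (not from the book), these (state, action, reward) steps
can be held in code as simple named tuples; the state names, actions, and reward values
below are hypothetical, in the spirit of the maze example.

# A toy trajectory stored as (state, action, reward) steps.
from collections import namedtuple

Step = namedtuple("Step", ["state", "action", "reward"])

trajectory = [
    Step(state="S0", action="right", reward=-1),
    Step(state="S1", action="down", reward=-1),
    Step(state="S2", action="right", reward=10),  # the goal is reached here
]

# Each transition carries its own reward; deciding the next move does not
# reuse the reward earned on a previous step.
for t, step in enumerate(trajectory):
    print("t=%d: in %s, took %s, got reward %d" % (t, step.state, step.action, step.reward))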


Figure 1-12. State change with time

The T change (the time frame) is important in terms of Reinforcement Learning.


Every occurrence of what we do is always a combination of what we perform in terms
of states, actions, and rewards. See Figure 1-13.

Figure 1-13. Another way of representing the state transition

How Reward Works


A reward is some motivator we receive when we transition from one state to another. It
can be points, as in a video game. The more we train, the more accurate we become, and
the greater our reward.


Agents
In terms of Reinforcement Learning, agents are the software programs that make
intelligent decisions. Agents should be able to perceive what is happening in the
environment. Here are the basic steps of the agents:
1. When the agent can perceive the environment, it can make
better decisions.
2. The decision the agents take results in an action.
3. The action that the agents perform must be the best, the
optimal, one.
Software agents might be autonomous or they might work together with other agents
or with people. Figure 1-14 shows how the agent works.

Figure 1-14. The flow of the environment


RL Environments
The environments in the Reinforcement Learning space are composed of certain factors
that determine the impact on the Reinforcement Learning agent. The agent must adapt
to the environment accordingly. These environments can be 2D worlds or grids or even a
3D world.
Here are some important features of environments:
• Deterministic
• Observable
• Discrete or continuous
• Single or multiagent.

Deterministic
If we can infer and predict what will happen with a certain scenario in the future, we say
the scenario is deterministic.
It is easier for RL problems to be deterministic because we don't rely on the
decision-making process to change state; the transition is an immediate effect that happens
when we move from one state to another. This makes the life of a Reinforcement
Learning problem easier.
When we are dealing with RL, the state model we get will be either deterministic or
non-deterministic. That means we need to understand the mechanisms behind how DFA
and NDFA work.

DFA (Deterministic Finite Automata)


DFA goes through a finite number of steps. It can only perform one action for a state. See
Figure 1-15.

Figure 1-15. Showing DFA


We are showing a state transition from a start state to a final state with the help of
a diagram. It is a simple depiction where, with input values assumed to be 1 and 0, the
state transition occurs. A self-loop is created when the automaton receives a value and
stays in the same state.

NDFA (Nondeterministic Finite Automaton)


If we are working in a scenario where we don’t know exactly which state a machine will
move into, this is a case of NDFA. See Figure 1-16.

Figure 1-16. NDFA

The working principle of the state diagram in Figure 1-16 can be explained as
follows. In an NDFA, the issue is that when we are transitioning from one state to another, there is
more than one option available, as we can see in Figure 1-16. From state S0, after getting
an input such as 0, the machine can stay in state S0 or move to state S1. There is decision-making
involved here, so it becomes difficult to know which action to take.
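The difference can also be sketched in Python (an assumed illustration, not from the
book): a deterministic transition table maps each state and input to exactly one next
state, while a nondeterministic one maps to a set of possible next states. The state
names and inputs below mirror the S0/S1 diagrams only loosely.

# DFA: each (state, input) pair maps to exactly one next state.
dfa = {
    ("S0", 0): "S0",   # self-loop on input 0
    ("S0", 1): "S1",
    ("S1", 0): "S1",
    ("S1", 1): "S1",
}

# NDFA: a (state, input) pair may map to several possible next states.
ndfa = {
    ("S0", 0): {"S0", "S1"},   # on input 0 the machine could stay or move on
    ("S0", 1): {"S1"},
    ("S1", 0): {"S1"},
}

print(dfa[("S0", 0)])    # exactly one outcome: 'S0'
print(ndfa[("S0", 0)])   # several possible outcomes: {'S0', 'S1'}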

Observable
If we can say that the environment around us is fully observable, we have a perfect
scenario for implementing Reinforcement Learning.
An example of perfect observability is a chess game. An example of partial
observability is a poker game, where some of the cards are unknown to any one player.


Discrete or Continuous
If there is an unlimited number of choices for transitioning to the next state, that is a continuous
scenario. When there are a limited number of choices, that's called a discrete scenario.

Single Agent and Multiagent Environments


Solutions in Reinforcement Learning can be of single agent types or multiagent types.
Let’s take a look at multiagent Reinforcement Learning first.
When we are dealing with complex problems, we use multiagent Reinforcement
Learning. Complex problems might have different environments where each agent is doing
a different job to get involved in RL, and the agents also need to interact with one another.
This introduces additional complications in determining state transitions.
Multiagent solutions are based on the non-deterministic approach.
They are non-deterministic because when the multiagents interact, there might be
more than one option to change or move to the next state and we have to make decisions
based on that ambiguity.
In multiagent solutions, the agent interactions across different environments are
enormous, because the amount of activity involving the environments is very large.
This is because the environments might be of different types and the multiple agents
might have different tasks to do in each state transition.
The difference between single-agent and multiagent solutions are as follows:
• Single-agent scenarios involve intelligent software in which the
interaction happens in one environment only. If there is another
environment simultaneously, it cannot interact with the first
environment.
• Convergence scenarios require multiagents. Convergence is when
the agent needs to interact far more often with different
environments in order to make a decision. Single agents cannot
tackle convergence because a single agent cannot connect to other
environments where there might be different scenarios involving
simultaneous decision-making.
• Multiagents have dynamic environments, compared to
single agents. Dynamic environments mean that the places the
agents interact with can change over time.


Figure 1-17 shows the single-agent scenario.

Figure 1-17. Single agent

Figure 1-18 shows how multiagents work. There is an interaction between two agents
in order to make the decision.


Figure 1-18. Multiagent scenario

Conclusion
This chapter touched on the basics of Reinforcement Learning and covered some key
concepts. We covered states and environments and how the structure of Reinforcement
Learning looks.
We also touched on the different kinds of interactions and learned about
single-agent and multiagent solutions.
The next chapter covers algorithms and discusses the building blocks of
Reinforcement Learning.
