Michael Hu

The Art of Reinforcement Learning


Fundamentals, Mathematics, and Implementations
with Python
Michael Hu
Shanghai, Shanghai, China

ISBN 978-1-4842-9605-9 e-ISBN 978-1-4842-9606-6


https://doi.org/10.1007/978-1-4842-9606-6

© Michael Hu 2023

Apress Standard

The use of general descriptive names, registered names, trademarks,


service marks, etc. in this publication does not imply, even in the
absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general
use.

The publisher, the authors, and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Apress imprint is published by the registered company APress


Media, LLC, part of Springer Nature.
The registered company address is: 1 New York Plaza, New York, NY
10004, U.S.A.
To my beloved family,
This book is dedicated to each of you, who have been a constant source of
love and support throughout my writing journey.
To my hardworking parents, whose tireless efforts in raising us have been
truly remarkable. Thank you for nurturing my dreams and instilling in
me a love for knowledge. Your unwavering dedication has played a
pivotal role in my accomplishments.
To my sisters and their children, your presence and love have brought
immense joy and inspiration to my life. I am grateful for the laughter and
shared moments that have sparked my creativity.
And to my loving wife, your consistent support and understanding have
been my guiding light. Thank you for standing by me through the highs
and lows, and for being my biggest cheerleader.
—Michael Hu
Preface
Reinforcement learning (RL) is a highly promising yet challenging
subfield of artificial intelligence (AI) that plays a crucial role in shaping
the future of intelligent systems. From robotics and autonomous agents
to recommendation systems and strategic decision-making, RL enables
machines to learn and adapt through interactions with their
environment. Its remarkable success stories include RL agents
achieving human-level performance in video games and even
surpassing world champions in strategic board games like Go. These
achievements highlight the immense potential of RL in solving complex
problems and pushing the boundaries of AI.
What sets RL apart from other AI subfields is its fundamental
approach: agents learn by interacting with the environment, mirroring
how humans acquire knowledge. However, RL poses challenges that
distinguish it from other AI disciplines. Unlike methods that rely on
precollected training data, RL agents generate their own training
samples. These agents are not explicitly instructed on how to achieve a
goal; instead, they receive state representations of the environment and
a reward signal, forcing them to explore and discover optimal strategies
on their own. Moreover, RL involves complex mathematics that
underpin the formulation and solution of RL problems.
While numerous books on RL exist, they typically fall into two
categories. The first category emphasizes the fundamentals and
mathematics of RL, serving as reference material for researchers and
university students. However, these books often lack implementation
details. The second category focuses on practical hands-on coding of RL
algorithms, neglecting the underlying theory and mathematics. This
apparent gap between theory and implementation prompted us to
create this book, aiming to strike a balance by equally emphasizing
fundamentals, mathematics, and the implementation of successful RL
algorithms.
This book is designed to be accessible and informative for a diverse
audience. It is targeted toward researchers, university students, and
practitioners seeking a comprehensive understanding of RL. By
following a structured approach, the book equips readers with the
necessary knowledge and tools to apply RL techniques effectively in
various domains.
The book is divided into four parts, each building upon the previous
one. Part I focuses on the fundamentals and mathematics of RL, which
form the foundation for almost all discussed algorithms. We begin by
solving simple RL problems using tabular methods. Chapter 2, the
cornerstone of this part, explores Markov decision processes (MDPs)
and the associated value functions, which are recurring concepts
throughout the book. Chapters 3 to 5 delve deeper into these
fundamental concepts by discussing how to use dynamic programming
(DP), Monte Carlo methods, and temporal difference (TD) learning
methods to solve small MDPs.
Part II tackles the challenge of solving large-scale RL problems that
render tabular methods infeasible due to their complexity (e.g., large or
infinite state spaces). Here, we shift our focus to value function
approximation, with particular emphasis on leveraging (deep) neural
networks. Chapter 6 provides a brief introduction to linear value
function approximation, while Chap. 7 delves into the renowned Deep
Q-Network (DQN) algorithm. In Chap. 8, we discuss enhancements to
the DQN algorithm.
Part III explores policy-based methods as an alternative approach to
solving RL problems. While Parts I and II primarily focus on value-
based methods (learning the value function), Part III concentrates on
learning the policy directly. We delve into the theory behind policy
gradient methods and the REINFORCE algorithm in Chap. 9.
Additionally, we explore Actor-Critic algorithms, which combine policy-
based and value-based approaches, in Chap. 10. Furthermore, Chap. 11
covers advanced policy-based algorithms, including surrogate objective
functions and the renowned Proximal Policy Optimization (PPO)
algorithm.
The final part of the book addresses advanced RL topics. Chapter 12
discusses how distributed RL can enhance agent performance, while
Chap. 13 explores the challenges of hard-to-explore RL problems and
presents curiosity-driven exploration as a potential solution. In the
concluding chapter, Chap. 14, we delve into model-based RL by
providing a comprehensive examination of the famous AlphaZero
algorithm.
Unlike a typical hands-on coding handbook, this book does not
primarily focus on coding exercises. Instead, we dedicate our resources
and time to explaining the fundamentals and core ideas behind each
algorithm. Nevertheless, we provide complete source code for all
examples and algorithms discussed in the book. Our code
implementations are done from scratch, without relying on third-party
RL libraries, except for essential tools like Python, OpenAI Gym, NumPy,
and the PyTorch deep learning framework. While third-party RL
libraries expedite the implementation process in real-world scenarios,
we believe coding each algorithm independently is the best approach
for learning RL fundamentals and mastering the various RL algorithms.
Throughout the book, we employ mathematical notations and
equations, which some readers may perceive as heavy. However, we
prioritize intuition over rigorous proofs, making the material accessible
to a broader audience. A foundational understanding of calculus at a
basic college level, minimal familiarity with linear algebra, and
elementary knowledge of probability and statistics are sufficient to
embark on this journey. We strive to ensure that interested readers
from diverse backgrounds can benefit from the book’s content.
We assume that readers have programming experience in Python
since all the source code is written in this language. While we briefly
cover the basics of deep learning in Chap. 7, including neural networks
and their workings, we recommend some prior familiarity with
machine learning, specifically deep learning concepts such as training a
deep neural network. However, beyond the introductory coverage,
readers can explore additional resources and materials to expand their
knowledge of deep learning.
This book draws inspiration from Reinforcement Learning: An
Introduction by Richard S. Sutton and Andrew G. Barto, a renowned RL
publication. Additionally, it is influenced by prestigious university RL
courses, particularly the mathematical style and notation derived from
Professor Emma Brunskill’s RL course at Stanford University. Although
our approach may differ slightly from Sutton and Barto’s work, we
strive to provide simpler explanations. We have also derived
some examples from Professor David Silver’s RL course at University
College London, which offers a comprehensive resource for
understanding the fundamentals presented in Part I. We would like to
express our gratitude to Professor Dimitri P. Bertsekas for his
invaluable guidance and inspiration in the field of optimal control and
reinforcement learning. Furthermore, the content of this book
incorporates valuable insights from research papers published by
various organizations and individual researchers.
In conclusion, this book aims to bridge the gap between the
fundamental concepts, mathematics, and practical implementation of
RL algorithms. By striking a balance between theory and
implementation, we provide readers with a comprehensive
understanding of RL, empowering them to apply these techniques in
various domains. We present the necessary mathematics and offer
complete source code for implementation to help readers gain a deep
understanding of RL principles. We hope this book serves as a valuable
resource for readers seeking to explore the fundamentals, mathematics,
and practical aspects of RL algorithms. We must acknowledge that
despite careful editing from our editors and multiple rounds of review,
we cannot guarantee the book's content is error-free. Your feedback and
corrections are invaluable to us. Please do not hesitate to contact us
with any concerns or suggestions for improvement.

Source Code
You can download the source code used in this book from
github.com/apress/art-of-reinforcement-learning.
Michael Hu
Any source code or other supplementary material referenced by the
author in this book is available to readers on GitHub
(https://github.com/Apress). For more detailed information, please
visit https://www.apress.com/gp/services/source-code.
Contents
Part I Foundation
1 Introduction
1.1 AI Breakthrough in Games
1.2 What Is Reinforcement Learning
1.3 Agent-Environment in Reinforcement Learning
1.4 Examples of Reinforcement Learning
1.5 Common Terms in Reinforcement Learning
1.6 Why Study Reinforcement Learning
1.7 The Challenges in Reinforcement Learning
1.8 Summary
References
2 Markov Decision Processes
2.1 Overview of MDP
2.2 Model Reinforcement Learning Problem Using MDP
2.3 Markov Process or Markov Chain
2.4 Markov Reward Process
2.5 Markov Decision Process
2.6 Alternative Bellman Equations for Value Functions
2.7 Optimal Policy and Optimal Value Functions
2.8 Summary
References
3 Dynamic Programming
3.1 Use DP to Solve MRP Problem
3.2 Policy Evaluation
3.3 Policy Improvement
3.4 Policy Iteration
3.5 General Policy Iteration
3.6 Value Iteration
3.7 Summary
References
4 Monte Carlo Methods
4.1 Monte Carlo Policy Evaluation
4.2 Incremental Update
4.3 Exploration vs. Exploitation
4.4 Monte Carlo Control (Policy Improvement)
4.5 Summary
References
5 Temporal Difference Learning
5.1 Temporal Difference Learning
5.2 Temporal Difference Policy Evaluation
5.3 Simplified 𝜖-Greedy Policy for Exploration
5.4 TD Control—SARSA
5.5 On-Policy vs. Off-Policy
5.6 Q-Learning
5.7 Double Q-Learning
5.8 N-Step Bootstrapping
5.9 Summary
References
Part II Value Function Approximation
6 Linear Value Function Approximation
6.1 The Challenge of Large-Scale MDPs
6.2 Value Function Approximation
6.3 Stochastic Gradient Descent
6.4 Linear Value Function Approximation
6.5 Summary
References
7 Nonlinear Value Function Approximation
7.1 Neural Networks
7.2 Training Neural Networks
7.3 Policy Evaluation with Neural Networks
7.4 Naive Deep Q-Learning
7.5 Deep Q-Learning with Experience Replay and Target Network
7.6 DQN for Atari Games
7.7 Summary
References
8 Improvements to DQN
8.1 DQN with Double Q-Learning
8.2 Prioritized Experience Replay
8.3 Advantage Function and Dueling Network Architecture
8.4 Summary
References
Part III Policy Approximation
9 Policy Gradient Methods
9.1 Policy-Based Methods
9.2 Policy Gradient
9.3 REINFORCE
9.4 REINFORCE with Baseline
9.5 Actor-Critic
9.6 Using Entropy to Encourage Exploration
9.7 Summary
References
10 Problems with Continuous Action Space
10.1 The Challenges of Problems with Continuous Action Space
10.2 MuJoCo Environments
10.3 Policy Gradient for Problems with Continuous Action Space
10.4 Summary
References
11 Advanced Policy Gradient Methods
11.1 Problems with the Standard Policy Gradient Methods
11.2 Policy Performance Bounds
11.3 Proximal Policy Optimization
11.4 Summary
References
Part IV Advanced Topics
12 Distributed Reinforcement Learning
12.1 Why Use Distributed Reinforcement Learning
12.2 General Distributed Reinforcement Learning Architecture
12.3 Data Parallelism for Distributed Reinforcement Learning
12.4 Summary
References
13 Curiosity-Driven Exploration
13.1 Hard-to-Explore Problems vs. Sparse Reward Problems
13.2 Curiosity-Driven Exploration
13.3 Random Network Distillation
13.4 Summary
References
14 Planning with a Model: AlphaZero
14.1 Why We Need to Plan in Reinforcement Learning
14.2 Monte Carlo Tree Search
14.3 AlphaZero
14.4 Training AlphaZero on a 9 × 9 Go Board
14.5 Training AlphaZero on a 13 × 13 Gomoku Board
14.6 Summary
References
Index
About the Author
Michael Hu
is an exceptional software engineer with a wealth of
expertise spanning over a decade, specializing in the
design and implementation of enterprise-level
applications. His current focus revolves around leveraging
the power of machine learning (ML) and artificial
intelligence (AI) to revolutionize operational systems
within enterprises. A true coding enthusiast, Michael finds
solace in the realms of mathematics and continuously
explores cutting-edge technologies, particularly machine learning and
deep learning. His unwavering passion lies in the realm of deep
reinforcement learning, where he constantly seeks to push the
boundaries of knowledge. He has built numerous open source
projects on GitHub that
closely emulate state-of-the-art reinforcement learning algorithms
pioneered by DeepMind, including notable examples like AlphaZero,
MuZero, and Agent57. Through these projects, Michael demonstrates
his commitment to advancing the field and sharing his knowledge with
fellow enthusiasts. He currently resides in the city of Shanghai, China.
About the Technical Reviewer
Shovon Sengupta
has over 14 years of expertise and a deep
understanding of advanced predictive analytics, machine
learning, deep learning, and reinforcement learning. He
has established a place for himself by creating innovative
financial solutions that have won numerous awards. He is
currently working for one of the leading multinational
financial services corporations in the United States as the
Principal Data Scientist at the AI Center of Excellence. His job entails
leading innovative initiatives that rely on artificial intelligence to
address challenging business problems. He has a US patent (United
States Patent: Sengupta et al.: Automated Predictive Call Routing Using
Reinforcement Learning [US 10,356,244 B1]) to his credit. He is also a
Ph.D. scholar at BITS Pilani. He has reviewed quite a few popular titles
from leading publishers like Packt and Apress and has also authored a
few courses for Packt and CodeRed (EC-Council) in the realm of
machine learning. Apart from that, he has presented at various
international conferences on machine learning, time series forecasting,
and building trustworthy AI. His primary research is concentrated on
deep reinforcement learning, deep learning, natural language
processing (NLP), knowledge graph, causality analysis, and time series
analysis. For more details about Shovon’s work, please check out his
LinkedIn page: www.linkedin.com/in/shovon-sengupta-272aa917.
Part I
Foundation
© The Author(s), under exclusive license to APress Media, LLC, part of Springer
Nature 2023
M. Hu, The Art of Reinforcement Learning
https://doi.org/10.1007/978-1-4842-9606-6_1

1. Introduction
Michael Hu1
(1) Shanghai, Shanghai, China

Artificial intelligence has made impressive progress in recent years,


with breakthroughs achieved in areas such as image recognition,
natural language processing, and playing games. In particular,
reinforcement learning, a type of machine learning that focuses on
learning by interacting with an environment, has led to remarkable
achievements in the field.
In this book, we focus on the combination of reinforcement learning
and deep neural networks, which have become central to the success of
agents that can master complex games such as the board game Go and Atari
video games.
This first chapter provides an overview of reinforcement learning,
including key concepts such as states, rewards, and policies, as well as
common terms used in reinforcement learning, like the difference
between episodic and continuing reinforcement learning problems and
model-free vs. model-based methods.
Despite the impressive progress in the field, reinforcement learning
still faces significant challenges. For example, it can be difficult to learn
from sparse rewards, and the methods can suffer from instability.
Additionally, scaling to large state and action spaces can be a challenge.
Throughout this book, we will explore these concepts in greater
detail and discuss state-of-the-art techniques used to address these
challenges. By the end of this book, you will have a comprehensive
understanding of the principles of reinforcement learning and how they
can be applied to real-world problems.
We hope this introduction has sparked your curiosity about the
potential of reinforcement learning, and we invite you to join us on this
journey of discovery.

Fig. 1.1 A DQN agent learning to play Atari’s Breakout. The goal of the game is to
use a paddle to bounce a ball up and break through a wall of bricks. The agent only
takes in the raw pixels from the screen, and it has to figure out the right action
to take in order to maximize the score. Idea adapted from Mnih et al. [1]. Game
owned by Atari Interactive, Inc.

1.1 AI Breakthrough in Games


Atari
The Atari 2600 is a home video game console developed by Atari, Inc.
in the 1970s. It features a collection of iconic video
games. These games, such as Pong, Breakout, Space Invaders, and Pac-
Man, have become classic examples of early video gaming culture. In
this platform, players can interact with these classic games using a
joystick controller.
The breakthrough in Atari games came in 2015 when Mnih et al. [1]
from DeepMind developed an AI agent called DQN that learned to play a
list of Atari video games, some of them even better than humans.
What makes the DQN agent so influential is how it was trained to
play the game. Similar to a human player, the agent was only given the
raw pixel image of the screen as inputs, as illustrated in Fig. 1.1, and it
has to figure out the rules of the game all by itself and decide what to do
during the game to maximize the score. No human expert knowledge,
such as predefined rules or sample games of human play, was given to
the agent.
The DQN agent is a type of reinforcement learning agent that learns
by interacting with an environment and receiving a reward signal. In
the case of Atari games, the DQN agent receives a score for each action
it takes.
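To make this interaction loop concrete, the following minimal sketch
shows the agent-environment cycle using OpenAI Gym, one of the
libraries this book relies on. It is an illustration only, not the
book's DQN implementation: the agent picks random actions, CartPole-v1
stands in for an Atari game (Atari environments require a separate ROM
setup), and the reset/step signatures assume the classic Gym API, which
changed slightly in newer Gym/Gymnasium releases.

import gym

# Create the environment; CartPole-v1 stands in for an Atari game here.
env = gym.make("CartPole-v1")

obs = env.reset()  # initial state representation from the environment
episode_return = 0.0
done = False

while not done:
    # A trained agent would map obs to an action using its policy
    # (e.g., a Q-network); here we simply sample a random action.
    action = env.action_space.sample()

    # The environment returns the next state, a scalar reward signal,
    # and a flag marking the end of the episode.
    obs, reward, done, info = env.step(action)
    episode_return += reward

print("Episode return:", episode_return)
env.close()

In an Atari game, obs would be the raw screen pixels and reward the
change in game score; everything the agent learns comes from this loop
of states, actions, and rewards.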
Mnih et al. [1] trained and tested their DQN agents on 57 Atari video
games. They trained one DQN agent for one Atari game, with each agent
playing only the game it was trained on; the training was over millions
of frames. The DQN agents could play more than half of the games (30 of
57) at or better than a human player, as shown by Mnih et al. [1]. This means
that the agent was able to learn and develop strategies that were better
than what a human player could come up with.
Since then, various organizations and researchers have made
improvements to the DQN agent, incorporating several new techniques.
The Atari video games have become one of the most used test beds for
evaluating the performance of reinforcement learning agents and
algorithms. The Arcade Learning Environment (ALE) [2], which
provides an interface to hundreds of Atari 2600 game environments, is
commonly used by researchers for training and testing reinforcement
learning agents.
In summary, the Atari video games have become a classic example
of early video gaming culture, and the Atari 2600 platform provides a
rich environment for training agents in the field of reinforcement
learning. The breakthrough of DeepMind’s DQN agent, trained and
tested on 57 Atari video games, demonstrated the capability of an AI
agent to learn and make decisions through trial-and-error interactions
with classic games. This breakthrough has spurred many
improvements and advancements in the field of reinforcement learning,
and the Atari games have become a popular test bed for evaluating the
performance of reinforcement learning algorithms.

Go
Go is an ancient Chinese strategy board game played by two players,
who take turns placing stones on a 19×19 board with the goal
of surrounding more territory than the opponent. Each player has a set
of black or white stones, and the game begins with an empty board.