Michael Paluszek, Stephanie Thomas and Eric Ham

Practical MATLAB Deep Learning
A Projects-Based Approach
2nd ed.
Michael Paluszek
Plainsboro, NJ, USA

Stephanie Thomas
Princeton, NJ, USA

Eric Ham
Princeton, NJ, USA

ISBN 978-1-4842-7911-3 e-ISBN 978-1-4842-7912-0


https://doi.org/10.1007/978-1-4842-7912-0

© Michael Paluszek, Stephanie Thomas, Eric Ham 2022

This work is subject to copyright. All rights are solely and exclusively
licensed by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in
any other physical way, and transmission or information storage and
retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the
absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general
use.

The publisher, the authors, and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Apress imprint is published by the registered company APress Media, LLC, part of Springer Nature.
The registered company address is: 1 New York Plaza, New York, NY
10004, U.S.A.
Preface to the Second Edition
Practical MATLAB Deep Learning, Second Edition, is an extension of the
first edition of this book. We have added three new chapters. One
shows how deep learning can be applied to the problem of processing
static Earth sensor data in low Earth orbit. Many new satellites use
Earth sensors. This work shows how you can process data and also use
deep learning to evaluate sensors.
The second new chapter is on generative deep learning. This shows
how neural networks can be used to generate new data. When a neural
network can recognize objects, it has, in its neurons, a model of the
subject that allows it to recognize objects it has not seen before. Given
this information, a neural network can also create new data. This
chapter shows you how, letting you create a generative deep
learning network that generates music.
The final new chapter is on reinforcement learning. Reinforcement
learning is a machine learning approach in which an intelligent agent
learns to take actions to maximize a reward. We will apply this to the
design of a Titan landing control system. Reinforcement learning is a
tool to approximate solutions that could have been obtained by
dynamic programming, but whose exact solutions are computationally
intractable. In this chapter, we derive a model for a Titan lander. We use
optimization to come up with a trajectory and then show how
reinforcement learning can achieve similar results.
This book was written using several different releases of MATLAB
and its toolboxes. When you replicate the demonstrations, you may
notice that some of the GUIs are different. This should not pose a
problem with the code that is supplied with the book. This code was
tested with R2022a.
Any source code or other supplementary material referenced by the
author in this book is available to readers on GitHub. For more detailed
information, please visit http://www.apress.com/source-code.
Acknowledgments
Thanks to Shannen Prindle for helping with the Chapter 7 experiment
and doing all of the photography for Chapter 7. Shannen is a Princeton
University student who worked as an intern at Princeton Satellite
Systems in the summer of 2019. We would also like to thank Dr. Charles
Swanson for reviewing Chapter 6 on Tokamak control. Thanks to
Kestras Subacius of the MathWorks for tech support on the Bluetooth
device. We would also like to thank Matt Halpin for reading the book
from front to back. We would like to thank Zaid Zada for his
contributions to the chapter on generative deep learning. In particular,
we would like to thank Julia Hoerner of the MathWorks for her detailed
review of the entire book. She made many excellent suggestions!
Thanks also to Dr. Christopher Galea for his help on the Tokamak
chapter. We would also like to thank Sam Lehman, Sidhant Shenoy,
Emmanouil Tzorako for their support.
We would like to thank dancers Shaye Firer, Emily Parker,
Ryoko Tanaka, and Matanya Solomon for being our experimental
subjects in this book. We would also like to thank the American
Repertory Ballet and Executive Director Julie Hench for hosting our
Chapter 7 experiment.
Contents
1 What Is Deep Learning?
1.1 Deep Learning
1.2 History of Deep Learning
1.3 Neural Nets
1.3.1 Daylight Detector
1.3.2 XOR Neural Net
1.4 Deep Learning and Data
1.5 Types of Deep Learning
1.5.1 Multi-layer Neural Network
1.5.2 Convolutional Neural Network (CNN)
1.5.3 Recurrent Neural Network (RNN)
1.5.4 Long Short-Term Memory Network (LSTM)
1.5.5 Recursive Neural Network
1.5.6 Temporal Convolutional Machine (TCM)
1.5.7 Stacked Autoencoders
1.5.8 Extreme Learning Machine (ELM)
1.5.9 Recursive Deep Learning
1.5.10 Generative Deep Learning
1.5.11 Reinforcement Learning
1.6 Applications of Deep Learning
1.7 Organization of the Book
2 MATLAB Toolboxes
2.1 Commercial MATLAB Software
2.1.1 MathWorks Products
2.2 MATLAB Open Source
2.3 XOR Example
2.4 Training
2.5 Zermelo's Problem
3 Finding Circles
3.1 Introduction
3.2 Structure
3.2.1 imageInputLayer
3.2.2 convolution2dLayer
3.2.3 batchNormalizationLayer
3.2.4 reluLayer
3.2.5 maxPooling2dLayer
3.2.6 fullyConnectedLayer
3.2.7 softmaxLayer
3.2.8 classificationLayer
3.2.9 Structuring the Layers
3.3 Generating Data
3.3.1 Problem
3.3.2 Solution
3.3.3 How It Works
3.4 Training and Testing
3.4.1 Problem
3.4.2 Solution
3.4.3 How It Works
4 Classifying Movies
4.1 Introduction
4.2 Generating a Movie Database
4.2.1 Problem
4.2.2 Solution
4.2.3 How It Works
4.3 Generating a Viewer Database
4.3.1 Problem
4.3.2 Solution
4.3.3 How It Works
4.4 Training and Testing
4.4.1 Problem
4.4.2 Solution
4.4.3 How It Works
5 Algorithmic Deep Learning
5.1 Building the Filter
5.1.1 Problem
5.1.2 Solution
5.1.3 How It Works
5.2 Simulating
5.2.1 Problem
5.2.2 Solution
5.2.3 How It Works
5.3 Testing and Training
5.3.1 Problem
5.3.2 Solution
5.3.3 How It Works
6 Tokamak Disruption Detection
6.1 Introduction
6.2 Numerical Model
6.2.1 Dynamics
6.2.2 Sensors
6.2.3 Disturbances
6.2.4 Controller
6.3 Dynamical Model
6.3.1 Problem
6.3.2 Solution
6.3.3 How It Works
6.4 Simulate the Plasma
6.4.1 Problem
6.4.2 Solution
6.4.3 How It Works
6.5 Control the Plasma
6.5.1 Problem
6.5.2 Solution
6.5.3 How It Works
6.6 Training and Testing
6.6.1 Problem
6.6.2 Solution
6.6.3 How It Works
7 Classifying a Pirouette
7.1 Introduction
7.1.1 Inertial Measurement Unit
7.1.2 Physics
7.2 Data Acquisition
7.2.1 Problem
7.2.2 Solution
7.2.3 How It Works
7.3 Orientation
7.3.1 Problem
7.3.2 Solution
7.3.3 How It Works
7.4 Dancer Simulation
7.4.1 Problem
7.4.2 Solution
7.4.3 How It Works
7.5 Real-Time Plotting
7.5.1 Problem
7.5.2 Solution
7.5.3 How It Works
7.6 Quaternion Display
7.6.1 Problem
7.6.2 Solution
7.6.3 How It Works
7.7 Making the IMU Belt
7.7.1 Problem
7.7.2 Solution
7.7.3 How It Works
7.8 Testing the System
7.8.1 Problem
7.8.2 Solution
7.8.3 How It Works
7.9 Classifying the Pirouette
7.9.1 Problem
7.9.2 Solution
7.9.3 How It Works
7.10 Data Acquisition GUI
7.10.1 Problem
7.10.2 Solution
7.10.3 How It Works
7.11 Hardware Sources
8 Completing Sentences
8.1 Introduction
8.1.1 Sentence Completion
8.1.2 Grammar
8.1.3 Sentence Completion by Pattern Recognition
8.1.4 Sentence Generation
8.2 Generating a Database
8.2.1 Problem
8.2.2 Solution
8.2.3 How It Works
8.3 Creating a Numeric Dictionary
8.3.1 Problem
8.3.2 Solution
8.3.3 How It Works
8.4 Mapping Sentences to Numbers
8.4.1 Problem
8.4.2 Solution
8.4.3 How It Works
8.5 Converting the Sentences
8.5.1 Problem
8.5.2 Solution
8.5.3 How It Works
8.6 Training and Testing
8.6.1 Problem
8.6.2 Solution
8.6.3 How It Works
9 Terrain-Based Navigation
9.1 Introduction
9.2 Modeling Our Aircraft
9.2.1 Problem
9.2.2 Solution
9.2.3 How It Works
9.3 Generating Terrain
9.3.1 Problem
9.3.2 Solution
9.3.3 How It Works
9.4 Close-Up Terrain
9.4.1 Problem
9.4.2 Solution
9.4.3 How It Works
9.5 Building the Camera Model
9.5.1 Problem
9.5.2 Solution
9.5.3 How It Works
9.6 Plotting the Trajectory
9.6.1 Problem
9.6.2 Solution
9.6.3 How It Works
9.7 Creating the Training Images
9.7.1 Problem
9.7.2 Solution
9.7.3 How It Works
9.8 Training and Testing
9.8.1 Problem
9.8.2 Solution
9.8.3 How It Works
9.9 Simulation
9.9.1 Problem
9.9.2 Solution
9.9.3 How It Works
10 Stock Prediction
10.1 Introduction
10.2 Generating a Stock Market
10.2.1 Problem
10.2.2 Solution
10.2.3 How It Works
10.3 Creating a Stock Market
10.3.1 Problem
10.3.2 Solution
10.3.3 How It Works
10.4 Training and Testing
10.4.1 Problem
10.4.2 Solution
10.4.3 How It Works
11 Image Classification
11.1 Introduction
11.2 Using AlexNet
11.2.1 Problem
11.2.2 Solution
11.2.3 How It Works
11.3 Using GoogLeNet
11.3.1 Problem
11.3.2 Solution
11.3.3 How It Works
12 Orbit Determination
12.1 Introduction
12.2 Generating the Orbits
12.2.1 Problem
12.2.2 Solution
12.2.3 How It Works
12.3 Training and Testing
12.3.1 Problem
12.3.2 Solution
12.3.3 How It Works
12.4 Implementing an LSTM
12.4.1 Problem
12.4.2 Solution
12.4.3 How It Works
13 Earth Sensors
13.1 Introduction
13.2 Linear Output Earth Sensor
13.2.1 Problem
13.2.2 Solution
13.2.3 How It Works
13.3 Segmented Earth Sensor
13.3.1 Problem
13.3.2 Solution
13.3.3 How It Works
13.4 Linear Output Sensor Neural Network
13.4.1 Problem
13.4.2 Solution
13.4.3 How It Works
13.5 Segmented Sensor Neural Network
13.5.1 Problem
13.5.2 Solution
13.5.3 How It Works
14 Generative Modeling of Music
14.1 Introduction
14.2 Generative Modeling Description
14.3 Problem: Music Generation
14.4 Solution
14.5 Implementation
14.6 Alternative Methods
15 Reinforcement Learning
15.1 Introduction
15.2 Titan Lander
15.3 Titan Atmosphere
15.3.1 Problem
15.3.2 Solution
15.3.3 How It Works
15.4 Simulating the Aircraft
15.4.1 Problem
15.4.2 Solution
15.4.3 How It Works
15.5 Simulating Level Flight
15.5.1 Problem
15.5.2 Solution
15.5.3 How It Works
15.6 Optimal Trajectory
15.6.1 Problem
15.6.2 Solution
15.6.3 How It Works
15.7 Reinforcement Example
15.7.1 Problem
15.7.2 Solution
15.7.3 How It Works
Bibliography
Index
About the Authors
Michael Paluszek
is President of Princeton Satellite
Systems, Inc. (PSS) in Plainsboro, New
Jersey. Mr. Paluszek founded PSS in 1992
to provide aerospace consulting services.
He used MATLAB to develop the control
system and simulations for the IndoStar-
1 geosynchronous communications
satellite. This led to the launch of
Princeton Satellite Systems’ first
commercial MATLAB toolbox, the
Spacecraft Control Toolbox, in 1995.
Since then, he has developed toolboxes
and software packages for aircraft,
submarines, robotics, and nuclear fusion propulsion, resulting in
Princeton Satellite Systems’ current extensive product line. He is
working with the Princeton Plasma Physics Laboratory on a compact
nuclear fusion reactor for energy generation and space propulsion.
Before founding PSS, Mr. Paluszek was an engineer at GE Astro
Space in East Windsor, NJ. At GE, he designed the Global Geospace
Sciences Polar despun platform control system and led the design of the
GPS IIR attitude control system, the Inmarsat-3 attitude control system,
and the Mars Observer delta-V control system, leveraging MATLAB for
control design. Mr. Paluszek also worked on the attitude determination
system for the DMSP meteorological satellites. Mr. Paluszek flew
communication satellites on over 12 satellite launches, including the
GSTAR III recovery, the first transfer of a satellite to an operational orbit
using electric thrusters. At Draper Laboratory, Mr. Paluszek worked on
the Space Shuttle, early space station, and submarine navigation. His
space station work included designing Control Moment Gyro–based
control systems for attitude control.
Mr. Paluszek received his bachelor’s degree in Electrical Engineering
and master’s and engineer’s degrees in Aeronautics and Astronautics
from the Massachusetts Institute of Technology. He is the author of
numerous papers and has over a dozen US patents. Mr. Paluszek is the
coauthor of MATLAB Recipes, MATLAB Machine Learning, and MATLAB
Machine Learning Recipes: A Problem-Solution Approach, all published
by Apress.

Stephanie Thomas
is Vice President of Princeton Satellite
Systems, Inc. in Plainsboro, New Jersey.
She received her bachelor’s and master’s
degrees in Aeronautics and Astronautics
from the Massachusetts Institute of
Technology in 1999 and 2001. Ms.
Thomas was introduced to the PSS
Spacecraft Control Toolbox for MATLAB
during a summer internship in 1996 and
has been using MATLAB for aerospace
analysis ever since. In her 20 years of
MATLAB experience, she has developed
many software tools including the Solar
Sail Module for the Spacecraft Control
Toolbox; a proximity satellite operations
toolbox for the Air Force; collision
monitoring Simulink blocks for the Prisma satellite mission; and launch
vehicle analysis tools in MATLAB and Java. She has developed novel
methods for space situation assessment such as a numeric approach to
assessing the general rendezvous problem between any two satellites
implemented in both MATLAB and C++. Ms. Thomas has contributed to
PSS’ Spacecraft Attitude and Orbit Control textbook, featuring examples
using the Spacecraft Control Toolbox, and written many software users’
guides. She has conducted SCT training for engineers from diverse
locales such as Australia, Canada, Brazil, and Thailand and has
performed MATLAB consulting for NASA, the Air Force, and the
European Space Agency. Ms. Thomas is the coauthor of MATLAB
Recipes, MATLAB Machine Learning, and MATLAB Machine Learning
Recipes: A Problem-Solution Approach, published by Apress. In 2016,
Ms. Thomas was named a NASA NIAC Fellow for the project “Fusion-
Enabled Pluto Orbiter and Lander.”

Eric Ham
is an Electrical Engineer and Computer
Scientist at Princeton Satellite Systems in
Plainsboro, New Jersey. He has a BS in
Electrical Engineering with certificates
in Applications of Computing and
Robotics and Intelligent Systems from
Princeton University, 2019. At PSS, Mr.
Ham is working on developing neural
networks for terrain relative navigation
for a lunar lander under a NASA contract.
He is simultaneously working as a
research specialist with the Hasson Lab
at Princeton University’s Princeton
Neuroscience Institute. He is involved in
the design and testing of temporal
convolutional neural networks (TCNs) to
model semantic processing in the brain.
He developed a pipeline for automatic transcription of audio data using
Google’s speech-to-text API. He assisted in the development of a
method for sequence prediction that was inspired by Viterbi’s
algorithm.
His undergraduate research was on implementing an SDRAM for a
novel neuro-computing chip. Mr. Ham did a summer internship at
Princeton University in 2018, in which he worked on a novel path
selection algorithm to improve the security of the Tor onion router. He
worked at the Princeton Plasma Physics Laboratory in 2017 on a high-
efficiency Class-E RF amplifier for nuclear fusion plasma heating.
About the Technical Reviewers
Dr. Joseph Mueller
specializes in control systems and
trajectory optimization. For his doctoral
thesis, he developed optimal ascent
trajectories for stratospheric airships.
His active research interests include
robust optimal control, adaptive control,
applied optimization and planning for
decision support systems, and intelligent
systems to enable autonomous
operations of robotic vehicles. Prior to
joining SIFT in early 2014, Dr. Mueller
worked at Princeton Satellite Systems for
13 years. In that time, he served as the
principal investigator for eight Small Business Innovation Research
contracts for NASA, Air Force, Navy, and MDA. He has developed
algorithms for optimal guidance and control of both formation flying
spacecraft and high-altitude airships and developed a course of action
planning tool for DoD communication satellites. In support of a
research study for NASA Goddard Space Flight Center in 2005, Dr.
Mueller developed the Formation Flying Toolbox for MATLAB, a
commercial product that is now used at NASA, ESA, and several
universities and aerospace companies around the world. In 2006, he
developed the safe orbit guidance mode algorithms and software for
the Swedish Prisma mission, which has successfully flown a two-
spacecraft formation flying mission since its launch in 2010. Dr. Mueller
also serves as an adjunct professor in the Aerospace Engineering and
Mechanics Department at the University of Minnesota, Twin Cities
campus.
© The Author(s), under exclusive license to APress Media, LLC, part of Springer
Nature 2022
M. Paluszek et al., Practical MATLAB Deep Learning
https://doi.org/10.1007/978-1-4842-7912-0_1

1. What Is Deep Learning?


Michael Paluszek (1), Stephanie Thomas (2) and Eric Ham (2)
(1) Plainsboro, NJ, USA
(2) Princeton, NJ, USA

Abstract
Deep learning is a subset of machine learning, which is itself a subset of
artificial intelligence and statistics. Artificial intelligence research began
shortly after World War II [35]. Early work was based on the knowledge of
the structure of the brain, propositional logic, and Turing’s theory of
computation. Warren McCulloch and Walter Pitts created a mathematical
formulation for neural networks based on threshold logic. This allowed
neural network research to split into two approaches: one centered on
biological processes in the brain and the other on the application of neural
networks to artificial intelligence. It was demonstrated that any function
could be implemented through a set of such neurons and that a neural net
could learn to recognize patterns.

1.1 Deep Learning


Deep learning is a subset of machine learning, which is itself a subset of
artificial intelligence and statistics. Artificial intelligence research began
shortly after World War II [35]. Early work was based on the knowledge of
the structure of the brain, propositional logic, and Turing’s theory of
computation. Warren McCulloch and Walter Pitts created a mathematical
formulation for neural networks based on threshold logic. This allowed
neural network research to split into two approaches: one centered on
biological processes in the brain and the other on the application of neural
networks to artificial intelligence. It was demonstrated that any function
could be implemented through a set of such neurons and that a neural net
could learn to recognize patterns. In 1948, Norbert Wiener’s book
Cybernetics was published, which described concepts in control,
communications, and statistical signal processing. The next major step in
neural networks was Donald Hebb's book in 1949, The Organization of
Behavior, connecting connectivity with learning in the brain. His book
became a source of inspiration for learning and adaptive systems. Marvin Minsky and Dean
Edmonds built the first neural computer at Harvard in 1950.
The first computer programs, and the vast majority now, have
knowledge built into the code by the programmer. The programmer may
make use of vast databases. For example, a model of an aircraft may use
multidimensional tables of aerodynamic coefficients. The resulting
software, therefore, knows a lot about aircraft, and running simulations of
the models may present surprises to the programmer and the users since
they may not fully understand the simulation, or may have entered
erroneous inputs. Nonetheless, the programmatic relationships between
data and algorithms are predetermined by the code.
In machine learning, the relationships between the data are formed by
the learning system. Data is input along with the results related to the data.
This is the system training. The machine learning system relates the data to
the results and comes up with rules that become part of the system. When
new data is introduced, it can come up with new results that were not part
of the training set.
Deep learning refers to neural networks with more than one layer of
neurons. The name “deep learning” implies something more profound, and
in the popular literature, it is taken to imply that the learning system is a
“deep thinker.” Figure 1.1 shows a single-layer and multi-layer network. It
turns out that multi-layer networks can learn things that single-layer
networks cannot. The elements of a network are nodes, where weighted
signals are combined and biases added. In a single layer, the inputs are
multiplied by weights and then added together at the end, after passing
through a threshold function. In a multi-layer or “deep learning” network,
the inputs are combined in the second layer before being output. There are
more weights and the added connections allow the network to learn and
solve more complex problems.
Figure 1.1 Two neural networks. The one on the right is a deep learning network.
There are many types of machine learning. Any computer algorithm that
can adapt based on inputs from the environment is a learning system. Here
is a partial list:
1. Neural nets (deep learning or otherwise)
2. Support-vector machines
3. Adaptive control
4. System identification
5. Parameter identification (may be the same as the previous one)
6. Adaptive expert systems
7. Control algorithms (a proportional integral derivative control stores information about constant inputs in its integrator)

Some systems use a predefined algorithm and learn by fitting the
parameters of the algorithm. Others create a model entirely from data. Deep
learning systems are usually in the latter category. We'll next give a brief
history of deep learning and then move on to two examples.

1.2 History of Deep Learning


Minsky wrote the book Perceptrons with Seymour Papert in 1969, which
was an early analysis of artificial neural networks. The book contributed to
the movement toward symbolic processing in AI. The book noted that
single-layer neurons could not implement some logical functions such as
exclusive or (XOR) and implied that multi-layer networks would have the
same issue. It was later found that three-layer networks could implement
such functions. We give the XOR solution in this book.
Multi-layer neural networks were discovered in the 1960s but not
studied until the 1980s. In the 1970s, self-organizing maps using
competitive learning were introduced [15]. A resurgence in neural
networks happened in the 1980s. Knowledge-based, or “expert,” systems
were also introduced in the 1980s. From Jackson [18]:

An expert system is a computer program that represents and reasons
with knowledge of some specialized subject to solve problems or give
advice.
—Peter Jackson, Introduction to Expert Systems

Backpropagation for neural networks, a learning method using gradient
descent, was reinvented in the 1980s, leading to renewed progress in this
field. Studies began with both human neural networks (i.e., the human
brain) and the creation of algorithms for effective computational neural
networks. This eventually led to deep learning networks in machine
learning applications.
Advances were made in the 1980s as AI researchers began to apply
rigorous mathematical and statistical analysis to develop algorithms.
Hidden Markov Models were applied to speech. A Hidden Markov Model is a
model with unobserved (i.e., hidden) states. Combined with massive
databases, they have resulted in vastly more robust speech recognition.
Machine translation has also improved. Data mining, the first form of
machine learning as it is known today, was developed.
In the early 1990s, Vladimir Vapnik and coworkers invented a
computationally powerful class of supervised learning networks known as
support-vector machines (SVM). These networks could solve problems of
pattern recognition, regression, and other machine learning problems.
There has been an explosion in deep learning in the past few years. New
tools have been developed that make deep learning easier to implement.
TensorFlow, Google's deep learning framework, is available on Amazon Web Services (AWS). It makes it easy
to deploy deep learning on the cloud. It includes powerful visualization
tools. TensorFlow allows you to deploy deep learning on machines that are
only intermittently connected to the Web. IBM Watson is another. It allows
you to use TensorFlow, Keras, PyTorch, Caffe, and other frameworks. Keras
is a popular deep learning framework that can be used in Python. All of
these frameworks have allowed deep learning to be deployed just about
everywhere.
In this book, we will present MATLAB-based deep learning tools. These
powerful tools let you create deep learning systems to solve many different
problems. In our book, we will apply MATLAB deep learning to a wide range
of problems ranging from nuclear fusion to classical ballet.
Before getting into our examples, we will give some fundamentals on
neural nets. We will first give background on neurons and how an artificial
neuron represents a real neuron. We will then design a daylight detector. We
will follow this with the famous XOR problem that stopped neural net
development for some time. Finally, we will discuss the examples in this
book.

1.3 Neural Nets


Neural networks, or neural nets, are a popular way of implementing
machine “intelligence.” The idea is that they behave like the neurons in a
brain. In this section, we will explore how neural nets work, starting with
the most fundamental idea with a single neuron and working our way up to
a multi-layer neural net. Our example for this will be a pendulum. We will
show how a neural net can be used to solve the prediction problem. This is
one of the two uses of a neural net, prediction and classification. We’ll start
with a simple classification example.
Let’s first look at a single neuron with two inputs. This is shown in
Figure 1.2. This neuron has inputs x1 and x2, a bias b, weights w1 and w2, and
a single output z. The activation function σ takes the weighted input and
produces the output. In this diagram, we explicitly add icons for the
multiplication and addition steps within the neuron, but in typical neural
net diagrams such as Figure 1.1, they are omitted.
$z = \sigma(w_1 x_1 + w_2 x_2 + b)$ (1.1)
Let’s compare this with a real neuron as shown in Figure 1.3. A real
neuron has multiple inputs via the dendrites. The dendrites branch, so
multiple inputs can connect to the cell body through the same dendrite.
The output is via the axon. Each neuron has one output. The axon connects
to a dendrite through the synapse.
There are numerous commonly used activation functions. We show
three:
$\sigma(y) = \tanh(y)$ (1.2)
$\sigma(y) = \frac{2}{1 + e^{-y}} - 1$ (1.3)
$\sigma(y) = y$ (1.4)
The exponential one is normalized and offset from zero so it ranges from
−1 to 1. The last one, which simply passes through the value of y, is called
the linear activation function. The following code in the script
OneNeuron.m computes and plots these three activation functions for an
input q. Figure 1.4 shows the three activation functions on one plot.
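The listing itself did not survive extraction; a minimal sketch of what OneNeuron.m might contain, assuming the three activation functions above, is:

% OneNeuron.m (sketch): compute and plot three activation functions
q    = linspace(-4,4);        % input to the activation functions
sigT = tanh(q);               % hyperbolic tangent
sigE = 2./(1 + exp(-q)) - 1;  % normalized, offset exponential
sigL = q;                     % linear (pass-through)
plot(q,sigT,q,sigE,q,sigL)
grid on
xlabel('q'), ylabel('\sigma(q)')
legend('tanh','exp','linear')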

Figure 1.2 A two-input neuron.


Figure 1.3 A neuron connected to a second neuron. A real neuron can have 10,000 inputs!

Figure 1.4 The three activation functions from OneNeuron.


Figure 1.5 A one-input neural net. The weight w is 2 and the bias b is 3.
Activation functions that saturate, or reach a value of input after which
the output is constant or changes very slowly, model a biological neuron
that has a maximum firing rate. These particular functions also have good
numerical properties that are helpful in learning.
Let’s look at a single input neural net shown in Figure 1.5. This neuron is
$z = \sigma(wx + b)$ (1.5)
where the weight w on the single input x is 2 and the bias b is 3. If the
activation function is linear, the neuron is just a linear function of x:
$z = 2x + 3$ (1.6)
Neural nets do make use of linear activation functions, often in the output
layer. It is the nonlinear activation functions that give neural nets their
unique capabilities.
Let’s look at the output with the preceding activation functions plus the
threshold function from the script LinearNeuron.m. The results are in
Figure 1.6.
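LinearNeuron.m is likewise not reproduced in this excerpt; a sketch, assuming the w = 2, b = 3 neuron of Figure 1.5, might be:

% LinearNeuron.m (sketch): one neuron with several activation functions
x  = linspace(-4,2);
y  = 2*x + 3;               % weighted input with w = 2, b = 3
zL = y;                     % linear
zT = tanh(y);               % hyperbolic tangent
zE = 2./(1 + exp(-y)) - 1;  % exponential
zS = double(y > 0);         % threshold (binary step)
plot(x,zL,x,zT,x,zE,x,zS)
grid on
xlabel('x'), ylabel('z')
legend('linear','tanh','exp','threshold')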
Figure 1.6 The “linear” neuron compared to other activation functions from
LinearNeuron.
The tanh and exp are very similar. They put bounds on the output.
Within the range −3 ≤ x < 1, they return the function of the input. Outside
those bounds, they return the sign of the input, that is, they saturate. The
threshold function returns 0 when the weighted input 2x + 3 is negative,
that is, for x less than −1.5, and 1 otherwise.
activated, if the input exceeds a given value. The other nonlinear activation
functions are saying that we care about the value of the linear equation only
within the bounds. The nonlinear functions (but not steps) make it easier
for the learning algorithms since the functions have derivatives. The binary
step has a discontinuity at an input of zero so that its derivative is infinite at
that point. Aside from the linear function (which is usually used on output
neurons), the neurons are just telling us that the sign of the linear equation
is all we care about. The activation function is what makes a neuron a
neuron.
We now show two brief examples of neural nets: first, a daylight
detector, and second, the exclusive or problem.

1.3.1 Daylight Detector


Problem
We want to use a simple neural net to detect daylight. This will provide an
example of using a neural net for classification.

Solution
Historically, the first neuron was the perceptron. This is a neuron with an
activation function that is a threshold. Its output is either 0 or 1. This is not
useful for many real-world problems. However, it is well suited for simple
classification problems. We will use a single perceptron in this example.

How It Works
Suppose our input is a light level measured by a photocell. If you weight the
input so that 1 is the value defining the brightness level at noon, you get a
sunny day detector.
This is shown in the script SunnyDay.m. The solar flux is modeled
using cosine and scaled so that it is 1 at noon. Any value greater than 0 is
daylight.
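SunnyDay.m is not reproduced in this excerpt; a sketch under the assumptions just described (a cosine flux model and a threshold at zero) is:

% SunnyDay.m (sketch): a one-perceptron daylight detector
t    = linspace(0,24,200);          % hour of the day
flux = max(cos((t - 12)*pi/12),0);  % solar flux, scaled to 1 at noon
day  = double(flux > 0);            % perceptron with threshold activation
subplot(2,1,1), plot(t,flux), ylabel('Solar Flux')
subplot(2,1,2), plot(t,day),  ylabel('Daylight'), xlabel('Hour')
set(gca,'xtick',0:6:24,'xlim',[0 24])  % x-axis ticks end at 24 hours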
Figure 1.7 shows the detector results. The set(gca,...) code sets
the x-axis ticks to end at exactly 24 hours. This is a really trivial example but
does show how classification works.
If we had multiple neurons with thresholds set to detect sunlight levels
within bands of solar flux, we would have a neural net sun clock.

1.3.2 XOR Neural Net


Problem
We want to implement the exclusive or (XOR) problem with a neural
network.

Solution
The XOR problem impeded the development of neural networks for a long
time before “deep learning” was developed. Look at Figure 1.8. The table on
the left gives all possible inputs A and B and the desired outputs C.
“Exclusive or” just means that if the inputs A and B are different, the output
C is 1. The figure shows a single-layer network and a multi-layer network, as
in Figure 1.1, but with the weights labeled as they will be in the code. You
can implement this in MATLAB easily, in just seven lines:
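The seven lines themselves are missing from this excerpt; one plausible version, which simply evaluates the truth table, is:

% XOR truth table (sketch)
a = [0 1 0 1];           % input A
b = [0 0 1 1];           % input B
c = zeros(1,4);          % output C
for k = 1:4
  c(k) = xor(a(k),b(k));
end
disp([a' b' c'])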
Figure 1.7 The daylight detector. The top plot shows the input data, and the bottom plot
shows the perceptron output detecting daylight.
Figure 1.8 Exclusive or (XOR) truth table and possible solution networks.
This type of logic was embodied in medium-scale integrated circuits in
the early days of digital systems and vacuum tube–based computers even
earlier than that. Try as you might, you cannot pick two weights and a bias
on the single-layer network to reproduce the XOR. Minsky proved
that it is impossible.
The second neural net, the deep neural net, can reproduce the XOR. We
will implement and train this network.

How It Works
What we will do is explicitly write out the backpropagation algorithm that
trains the neural net from the four training sets given in Figure 1.8, that is,
(0,0), (1,0), (0,1), or (1,1). We’ll write it in the script XORDemo.m. The point
is to show you explicitly how backpropagation works. We will use the tanh
as the activation function in this example. The XOR function is given in
XOR.m as follows:
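The listing of XOR.m is missing here. A sketch of the forward pass it implements, with the input-to-weight pairing assumed from Figure 1.8 (w(1:6) are the network weights and w(7:9) the biases), is:

function [y3,y] = XOR(w,a,b)
% XOR (sketch): forward pass of the two-layer XOR network
% w is a 9-element weight vector; a and b are the inputs
y1 = tanh(w(1)*a + w(3)*b + w(7));  % hidden neuron 1
y2 = tanh(w(2)*a + w(4)*b + w(8));  % hidden neuron 2
y3 = w(5)*y1 + w(6)*y2 + w(9);      % linear output neuron
y  = [y1;y2];                       % hidden outputs, for backpropagation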
There are three neurons, y1, y2, and y3. The activation function for the
hidden layer with neurons y1 and y2 is the hyperbolic tangent. The
activation function for the output layer y3 is linear. In addition to the
weights depicted in Figure 1.8, each neuron also has a bias input, numbered
w7, w8, and w9:
$y_1 = \tanh(w_1 A + w_3 B + w_7)$ (1.7)
$y_2 = \tanh(w_2 A + w_4 B + w_8)$ (1.8)
$y_3 = w_5 y_1 + w_6 y_2 + w_9$ (1.9)
Now we will derive the backpropagation routine. The hyperbolic
activation function is
$\sigma(y) = \tanh(y) = \frac{e^{y} - e^{-y}}{e^{y} + e^{-y}}$ (1.10)
Its derivative is
$\frac{d\sigma}{dy} = 1 - \tanh^2(y)$ (1.11)

In this derivation, we are going to use the chain rule. Assume that F is a
function of y which is a function of x. Then

$\frac{dF}{dx} = \frac{dF}{dy}\frac{dy}{dx}$ (1.12)

The error is the square of the difference between the desired output and the
output. This is known as a quadratic error. It is easy to use because the
derivative is simple and the error is always positive, making the lowest
error the one closest to zero.

$E = (c - y_3)^2$ (1.13)
where c is the desired output from the truth table.

The derivative of the error with respect to wj for the output node is

$\frac{\partial E}{\partial w_j} = -2(c - y_3)\frac{\partial y_3}{\partial w_j}$ (1.14)

For the hidden nodes, it is

$\frac{\partial E}{\partial w_j} = -2(c - y_3)\frac{\partial y_3}{\partial y_i}\frac{\partial y_i}{\partial w_j}$ (1.15)

Expanding for all the weights


(1.16)

(1.17)

(1.18)

(1.19)

(1.20)

(1.21)
(1.22)
(1.23)

(1.24)
where
(1.25)

(1.26)

(1.27)

(1.28)

(1.29)

(1.30)
You can see from the derivation how this could be made recursive and
applied to any number of outputs or layers. Our weight adjustment at each
step will be

$\Delta w_j = -\eta\frac{\partial E}{\partial w_j}$ (1.31)

where η is the update gain. It should be a small number. We only have four
sets of inputs. We will apply them multiple times to get the XOR weights.
Our backpropagation trainer needs to find the nine elements of w. The
training function XORTraining.m is as follows:
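The listing is missing from this excerpt. A sketch of the training loop it describes, using the gradients of the preceding derivation and the weight indexing assumed above, is below; the PlotSet call follows the description in the next paragraph, and PlotSet itself is the authors' plotting utility.

function w = XORTraining(w,eta,nIter)
% XORTraining (sketch): train the XOR network by backpropagation
% w: initial weight vector (9-by-1), eta: update gain, nIter: passes
a  = [0 1 0 1]; b = [0 0 1 1]; c = [0 1 1 0];  % XOR truth table
wP = zeros(10,nIter);                          % weight and error history
for j = 1:nIter
  e = zeros(1,4);
  for k = 1:4
    [y3,y] = XOR(w,a(k),b(k));                 % forward pass
    d      = -2*(c(k) - y3);                   % dE/dy3, from Eq. (1.14)
    g      = 1 - y.^2;                         % tanh derivatives
    dEdw   = d*[w(5)*g(1)*a(k); w(6)*g(2)*a(k); ...
                w(5)*g(1)*b(k); w(6)*g(2)*b(k); ...
                y(1); y(2); w(5)*g(1); w(6)*g(2); 1];
    w      = w - eta*dEdw;                     % gradient step, Eq. (1.31)
    e(k)   = (c(k) - y3)^2;
  end
  wP(:,j) = [w; mean(e)];
end
PlotSet(1:nIter,wP,'leg',{string(1:9),"mean error"},'plot set',{1:9,10});

XORDemo.m might then call this as w = XORTraining(randn(9,1),0.001,25000), matching the 25,000 iterations and gain of 0.001 mentioned below.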
The first two arguments to PlotSet are the data and are the minimum
required. The remainder is parameter pairs. The leg value has legends for
the two plots, as defined by ’plot set’. The first plot uses the first nine
data points, in this case, the weights. The second plot uses the last data
point, the mean of the error. leg is a cell array with two strings or string
arrays. The ’plot set’ is two arrays in a cell. A plot with only one value
will not generate a legend.
The demo script XORDemo.m starts with the training data, which is the
complete truth data for this simple function, and randomly generated
weights. It iterates through the inputs 25,000 times, with a training weight
of 0.001.

The results of the neural network with random weights and biases, as
expected, are not good. After training, the neural network reproduces the
XOR problem very well, as shown in the following demo output. Now, if you
change the initial weights and biases, you may find that you get bad results.
This is because the simple gradient method implemented here can fall into
local minima from which it can’t escape. This is an important point about
finding the best answer. There may be many good answers, which are
locally optimal, but there will be only one best answer. There is a vast body
of research on how to guarantee that a solution is globally optimal.
Figure 1.9 shows the weights and biases converging and also shows the
mean output error over all four inputs in the truth table going to zero. If you
try other starting weights and biases, this may not be the case. Other
solution methods, such as Genetic Algorithms [14], Electromagnetism-based
[4], and Simulated Annealing [32], are less susceptible to falling into local
minima but can be slow. A good overview of optimization specifically for
machine learning is given by Bottou [5].
In the next chapter, we will use the MATLAB Deep Learning Toolbox to
solve this problem.
You can see how this compares to a set of linear equations. If we remove
the activation functions, we get
$y_3 = w_5(w_1 A + w_3 B + w_7) + w_6(w_2 A + w_4 B + w_8) + w_9$ (1.32)
This reduces to just three independent coefficients:
$y_3 = aA + bB + c$ (1.33)
where $a = w_1 w_5 + w_2 w_6$, $b = w_3 w_5 + w_4 w_6$, and $c = w_5 w_7 + w_6 w_8 + w_9$.
Figure 1.9 Evolution of weights during exclusive or (XOR) training.

One is a constant, and the other two multiply the inputs. Writing the four
possible cases in matrix notation, we get

$\begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}$ (1.34)

We can get close to a working XOR if we choose

$a = 1,\; b = 1,\; c = 0$ (1.35)
This makes three out of four equations correct. There is no way to make all
four correct with just three coefficients. The activation functions separate
the coefficients and allow us to reproduce the XOR. This is not surprising
because the XOR is not a linear problem.

1.4 Deep Learning and Data


Deep learning systems operate on data but, unlike other systems, have
multiple layers that learn the input/output relationships. Data may be
organized in many ways. For example, we may want a deep learning system
to identify an image. A color image that is 2 pixels by 2 pixels by 3 colors
could be represented with random data using rand:
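The snippet itself is missing; a minimal stand-in (the variable names here and below are illustrative) might be:

x = rand(2,2,3);  % a 2-by-2 pixel image with 3 color channels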

The array form implies a structure for the data. The same number of
points could be organized into a single vector using reshape:
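Again, a stand-in for the missing line:

xv = reshape(x,12,1);  % the same 12 numbers as a single column vector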
The numbers are the same; they are just organized differently.
Convolutional neural networks, described in the next section, are often used
for image structured data. We might also have a vector:
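The missing vector example might be:

v = rand(3,1);  % three measurements at a single time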

for which we wish to learn a temporal or time sequence. In this case, if
each column is a time sample, we might have
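As a stand-in for the missing matrix, with assumed dimensions:

V = rand(3,8);  % each of 8 columns is a time sample of the 3 values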

For example, we might want to look at an ongoing sequence of samples and determine if a set of k samples matches a predetermined sequence.
Other documents randomly have
different content
wood of a shape that resisted being dragged through the water, and
with a string tied to it. The block of wood was called the log, and the
string had knots in it. The knots were so arranged that when one of
them ran through one’s fingers in a half-minute measured by a sand-
glass it indicated that the vessel was going at the speed of one
nautical mile in an hour. The nautical mile was taken so that sixty of
them constituted one degree, that is one three hundred and sixtieth
part of a great circle of the earth. Each nautical mile has, therefore,
6,080 feet. This is bigger than an ordinary mile on land, which has
only 5,280 feet. The knots, therefore, have to be arranged so that
when the ship is going one nautical mile—that is to say, 6,080 feet—
in an hour, a knot shall run out during the half-minute run of the
minute glass. This is attained by putting the knots 1/120 × 6,080 =
50 feet 7 inches apart. As one sailor heaved the log over he gave a
stamp on the deck and allowed the cord to run out through his
fingers. Another sailor then turned the sand-glass. When the sand
had all run out, showing that half a minute had passed, the man
who was letting the cord run through his fingers gripped it fast, and
observed how many knots or parts of knots of string had run out,
and thus was able to tell how many “knots” per half-minute the
vessel was going, that is to say, how many nautical miles an hour.
The modern plan of observing the speed of vessels is different.
Now we use a patent log, consisting of a miniature screw propeller
tied to a string and dragged through the water after the vessel. As it
is pulled through the water it revolves, and the number of
revolutions it makes shows how much water it has passed through,
and thus what distance it has gone. The number of revolutions is
measured by a counting mechanism, and can be read off when the
log is pulled in. Or sometimes the screw is attached to a stiff wire,
and the counting mechanism is kept on board the ship.
We use the expression “knots an hour” quite incorrectly. It should
be “knots per half-minute,” or “nautical miles an hour.”
It is easy to use the flow of sand for all sorts of purposes to
measure time. Thus, if sand be allowed to flow from a hopper
through a fine hole into a bucket, the bucket may be arranged so
that when a given time has elapsed, and a given weight of sand has
therefore fallen, the bucket shall tip over, and release a catch, which
shall then allow a weight to fall and any mechanical operation to be
done that is required. Thus, for example, we might put an egg in a
small holder tied to a string and lower it into a saucepan of boiling
water. The string might have a counter-weight attached to it, acting
over a pulley and thus always trying to pull it up out of the water.
But this might be prevented by a pin passing through a loop in the
string and preventing it moving. A hopper or funnel might be filled
with sand which was allowed gradually to escape into a small tip-
waggon or other similar device, so that when a given amount of
sand had entered the tip-waggon would tip over, lurch the pin out of
the loop, and thus release the weight, which in its turn would pull
the egg up out of the water in three minutes or any desired time
after it had been put in, or a hole could be made in the saucepan,
furnished with a little tap, and the water that ran out might be made
to fall into a tip-waggon and tip it over, and thus when it had run out
to put an extinguisher on to the spirit lamp that was heating the
saucepan, and at the same time make a contact and ring an electric
bell. By this means the egg would be always exactly cooked to the
right amount, would be kept warm after it was cooked, and a signal
given when it was ready.
Fig. 15.

The sketch shows such an arrangement. The saucepan is about


three inches in diameter and two inches high. When filled with water
it will hold an egg comfortably. The extinguisher E, mounted on a
hinge Q, is turned back, and the spirit lamp L is lit. As soon as the
water boils, the tap T is turned, and the water gradually trickles
away into the tip-waggon. As soon as it is full it tips over and strikes
the arm X of the extinguisher, and turns the lamp out. The little hot
water left in the saucepan will keep the egg warm for some time.
The waggon W must have a weight P at one end of it, and the
fulcrum must be nearer to that end, so that when empty it rests with
the end P down, but when full it tips over on the fulcrum, when the
waggon has received the right quantity of water. I leave to the
ingenious reader the task of working out the details of such a
machine, which, if made properly, will act very well and may be
made for a number of eggs and worked with very little trouble.
Mercury has been used also as an hour-glass. The orifice must be
exceedingly fine. Or a bubble of mercury may be put into a tube
which contains air, and made gradually as it falls to drive the air out
through a minute hole. The difficulty is to get the hole fine enough.
All that can be done is to draw out a fine tube in the
blow-lamp, break it off, and put the broken point in the
blow-lamp until it is almost completely closed up. A tube
may thus be made about twelve inches long that will
take twelve hours for a bubble of mercury to descend in
it. But the trouble of making so small a hole is
considerable.
King Alfred is said to have
used candles made of wax to
mark the time. As they blew
about with the draughts, he
Fig. 16.
put them in lanterns of horn.
They had no glass windows
in those days, but only openings closed
with heavy wooden shutters. These large
shutters were for use in fine weather.
Smaller shutters were made in them, so
as to let a little light in in rainy weather
without letting in too much wind and rain.
Rooms must then have been very
draughty, so that people required to wear
caps and gowns, and beds had thick
Fig. 17. curtains drawn round them. When glass
was first invented it was only used by
kings and princes, and glass casements were carried about with
them to be fixed into the windows of the houses to which they
came, and removed at their departure.
Oil lamps were also used to mark the time. Some of them
certainly as early as the fifteenth century were made like bird-
bottles; that is to say, they consisted of a reservoir closed at the top
with a pipe leading out of the bottom. When full, the pressure of the
external atmosphere keeps the oil in the bottle, and the oil stands in
the neck and feeds the wick. As the oil is consumed bubbles of air
pass back along the neck and rise up to the top of the oil, the level
of which gradually sinks. Of course the time shown by the lamp
varies with the rate of burning of the oil, and hence with the size of
the wick, so that the method of measuring time is a very rough one.

Appendix.
To make a sun-dial, procure a circular piece of zinc, about ⅛ inch
thick, and say twelve inches in diameter. Have a “style” or “gnomon”
cast such that the angle of its edge equals the latitude of the place
where the sun-dial is to be set up. This for London will be equal to
51° 30´´. A pattern may be made for this in wood; it should then be
cast in gun-metal, which is much better for out-of-door exposure
than brass. On a sheet of paper draw a circle A B C with centre O.
Make the angle B O D equal to the latitude of the place for London =
51° 30´´. From A draw A E parallel to O B to meet O D in E, and
with radius O E describe another circle about O. Divide the inner
circle A B C into twenty-four parts, and draw radii through them from
O to meet the larger circle. Through any divisions (say that
corresponding to two o’clock) draw lines parallel to O B, O C,
respectively to meet in a. Then the line O a is the shadow line of the
gnomon at two o’clock. The lines thus drawn on paper may be
transferred to the dial and engraved on it, or else eaten in with acid
in the manner in which etchings are done.
Fig. 18.

The centre O need not be in the centre of the zinc disc, but may
be on one side of it, so as to give better room for the hours, etc. A
motto may be etched upon the dial, such as “Horas non numero nisi
serenas,” or “Qual ’hom senza Dio, son senza sol io,” or any suitable
inscription, and the dial is ready for use. It is best put up by turning
it till the hour is shown truly as compared with a correctly timed
watch. It must be levelled with a spirit level. It must be remembered
that the sun does not move quite uniformly in his yearly path among
the fixed stars. This is because he moves not in a circle, but in an
ellipse of which the earth is in one of the foci. Hence the hours
shown on the dial are slightly irregular, the sun being sometimes in
advance of the clock, sometimes behind it. The difference is never
more than a quarter of an hour. There is no difference at
midsummer and midwinter.
Fig. 19.

Civil time is solar time averaged, so as to make the hours and


days all equal. The difference between civil time and apparent solar
time is called the equation of time, and is the amount by which the
sun-dial is in advance of or in retard of the clock. In setting a dial by
means of a watch, of course allowance must be made for the
equation of time.
CHAPTER II.
In the last chapter a short description has been given of the ideas
of the ancients as to the nature of the earth and heavens. Before we
pass to the changes introduced by modern science, it will be well to
devote a short space to an examination of ancient scientific ideas.
All science is really based upon a combination of two methods,
called respectively inductive and deductive reasoning. The first of
these consists in gathering together the results of observation and
experiment, and, having put them all together, in the formulation of
universal laws. Having, for example, long observed that all heavy
things tended to go towards the centre of the earth, we might
conclude that, since the stars remain up in the sky, they can have no
weight. The conclusion would be wrong in this case, not because the
method is wrong, but because it is wrongly applied. It is true that all
heavy things tend to go to the centre of the earth, but if they are
being whirled round like a stone in a sling the centrifugal force will
counteract this tendency. The first part of the reasoning would be
inductive, the second deductive. All this reasoning consists,
therefore, in forming as complete an idea as possible respecting the
nature of a thing, and then concluding from that idea what the thing
will do or what its other properties will be. In fact, you form correct
ideas, or “concepts,” as they are called, and reason from them.
But the danger arises when you begin to reason before you are
sure of the nature of your concepts, and this has been the great
source of error, and it was this error that all men of science so
commonly fell into all through ancient and modern times up to the
seventeenth century.
Of course, if it were possible by mere observation to derive a
complete knowledge of any objects, it would be the simplest
method. All that would be necessary to do would be to reason
correctly from this knowledge. Unfortunately, however, it is not
possible to obtain knowledge of this kind in any branch of science.
The ancient method resembled the action of one who should
contend that by observing and talking to a man you could acquire
such a knowledge of his character as would infallibly enable you to
understand and predict all his actions, and to take little trouble to
see whether what he did verified your predictions.
The only difference between the old methods and the new is that
in modern times men have learned to give far more care to the
formation of correct ideas to start with, are much more cautious in
arguing from them, and keep testing them again and again on every
possible opportunity.
The constant insistence on the formation of clear ideas and the
practice of, as Lord Bacon called it, “putting nature to the torture,” is
the main cause of the advance of physical science in modern times,
and the want of application of these principles explains why so little
progress is being made in the so-called “humanitarian” studies, such
as philosophy, ethics, and politics.
The works of Aristotle are full of the fallacious method of the old
system. In his work on the heavens he repeatedly argues that the
heavenly bodies must move in circles, because the circle is the most
perfect figure. He affects a perplexity as to how a circle can at the
same time be convex and also its opposite, concave, and repeatedly
entangles his readers in similar mere word confusion.
Regarded as a man of science, he must be placed, I think, in spite
of his great genius, below Archimedes, Hipparchus, and several
other ancient astronomers and physicists.
His errors lived after him and dominated the thought of the
middle ages, and for a long time delayed the progress of science.
The other great writer on astronomy of ancient times was
Ptolemy of Alexandria.
His work was called the “Great Collection,” and was what we
should now term a compendium of astronomy. Although based on a
fundamental error, it is a thoroughly scientific work. There is none of
the false philosophy in it that so much disfigures the work of
Aristotle. The reasons for believing that the earth is at rest are
interesting. Ptolemy argues that if the earth were moving round on
its axis once in twenty-four hours a bird that flew up from it would
be left behind. At first sight this argument seems very convincing,
for it appears impossible to conceive a body spinning at the rate at
which the earth is alleged to move, and yet not leaving behind any
bodies that become detached from it.
On the other hand, the system which taught that the sun and
planets moved round the earth, and which had been adopted largely
on account of its supposed simplicity, proved, on further
examination, to be exceedingly complicated. Each planet, instead of
moving simply and uniformly round the earth in a circle, had to be
supposed to move uniformly in a circle round another point that
moved round the earth in a circle. This secondary circle, in which the
planet moved, was called an epicycle. And even this more
complicated view failed to explain the facts.
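In modern coordinates (a gloss of my own, not Ptolemy's notation),
the motion just described is easy to write down. If the centre of the
epicycle rides round the earth on a circle of radius R at one rate of
revolution, while the planet rides round that centre on a circle of
radius r at another, the planet's path is

\[
x(t) = R\cos(\omega t) + r\cos(\Omega t), \qquad
y(t) = R\sin(\omega t) + r\sin(\Omega t),
\]

where \(\omega\) and \(\Omega\) are the two rates of revolution. Every
fresh discrepancy with observation had to be met by adding another
such circle, and so another pair of terms, which is why the scheme
grew so exceedingly complicated.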
A system which, like that of Aristotle and Ptolemy, was based on
deductions from concepts, and which consisted rather of drawing
conclusions than of examining premises, was very well adapted to
mediæval thought, and formed the foundation of astronomy and
geography as taught by the schoolmen.
The poem of Dante accurately represents the best scientific
knowledge of his day. According to his views, the centre of the earth
was a fixed point, such that all things of a heavy nature tended
towards it. Thus the earth and water collected round it in the form of
a ball. He had no idea of the attraction of one particle of matter for
another particle. The only conception he had of gravity was of a
force drawing all heavy things to a certain point, which thus became
the point round which the world was formed.
The habitable part of the earth was an island, with Jerusalem in
the middle of it at J (Fig. 20). Round this island was an ocean O.
Under the island, in the form of a hollow cone, was hell, with its
seven circles of torment, each circle becoming smaller and smaller,
till it got down into the centre C. Heaven was at the opposite side
H of the earth to Jerusalem, and was beyond the circles of the
planets, in the primum mobile. When Lucifer was expelled from heaven
after his rebellion against God, having become of a nature to be
attracted to the centre of the earth, and no longer drawn
heavenwards, he fell from heaven, and impinged upon the earth just
at the antipodes of Jerusalem, with such violence that he plunged
right through it to the centre, throwing up behind him a hill. On the
summit of this hill was the Garden of Eden, where our first parents
lived, and down the sides of the hill was a spiral winding way which
constituted purgatory. Dante, having descended into hell, and
passed the centre, found his head immediately turned round so as to
point the other way up, and, having ascended a tortuous path, came
out upon the hill of Purgatory. Having seen this, he was conducted
to the various spheres of the planets, and in each sphere he became
put into spiritual communion with the spirits of the blessed who
were of the character represented by that sphere, and he supposes
that he was thus allowed to proceed from sphere to sphere until he
was permitted to come into the presence of the Almighty, who in the
primum mobile presided over the celestial hosts.
The astronomical descriptions given by Dante of the rising and
setting of the sun and moon and planets are quite accurate,
according to the system of the world as conceived by him, and show
not only that he was a competent astronomer, but that he probably
possessed an astrolabe and some tables of the motions of the
heavenly bodies.
Our own poet Chaucer may also be credited with accurate
knowledge of the astronomy of his day. His poems often mention the
constellations, and one of them is devoted to a description of the
astrolabe, an instrument somewhat like the celestial globe which
used to be employed in schools.
But with the revival of learning in Europe and the rise of freedom
of thought, the old theories were questioned in more than one
quarter.
It occurred to Copernicus, an ecclesiastic who lived in the
sixteenth century, to re-examine the theory that had been started in
ancient times, and to consider what explanation of the appearance
of the heavenly bodies could be given on the hypothesis put forward
by Pythagoras, that the earth moved round on its own axis, and also
round the sun.
It may appear rather curious that two theories so different, one
that the sun goes round the earth and the other that the earth goes
round the sun, should each be capable of explaining the observed
appearances of those bodies. But it must be remembered that
motion is relative. If in a waltz the gentleman goes round the lady,
the lady also goes round the gentleman. If you take away the room
in which they are turning, and consider them as spinning round like
two insects in space, who is to say which of them is at rest and
which in motion? For motion is relative. I can consider motion in a
train from London to York. As I leave London I get nearer to York,
and I move with respect to London and York. But if both London and
York were annihilated how should I know that I was in motion at all?
Or, again, if, while I was at rest in the train at a station on the way,
instead of the train moving the whole earth began to move in a
southward direction, and the train in some way were left stationary,
then, though the earth was moving, and the train was at rest, yet,
so far as I was concerned, the train would appear to have started
again on its journey to York, at which place it would appear to arrive
in due time. The trees and hedges would fly by at the proper rate,
and who was to say whether the train was in motion or the earth?
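What is here argued by example may be stated compactly in modern
symbols (the notation is mine, not the author's). If one frame of
reference slides past another with uniform velocity v, a position x
measured in the first appears in the second as

\[
x' = x - vt,
\]

and no experiment on motion carried out wholly within either frame
can decide which of the two is "really" at rest; only the relative
velocity v has any meaning.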
The theory of Copernicus, however, remained but a theory. It was
opposed to the evidence of the senses, which certainly leads us to
think that the earth is at rest, and it was opposed also to the ideas
of some among the theologians who thought that the Bible taught
us that the earth was set so fast that it could not be moved. Therefore
the theory found but little favour. It was in fact necessary before the
question could be properly considered on its merits that more should
be known about the laws of motion, and this was the principal work
of Galileo.
The merit of Galileo is not only to have placed on a firm basis the
study of mechanics, but to have set himself definitely and
consciously to reverse the ancient methods of learning.
He discarded authority, basing all knowledge upon reason, and
protested against the theory that the study of words could be any
substitute for the study of things.
Alluding to the mathematicians of his day, “This sort of men,” says
Galileo in a letter to the astronomer Kepler, “fancied that philosophy
was to be studied like the ‘Æneid’ or ‘Odyssey,’ and that the true
reading of nature was to be detected by the collating of texts.” And
most of his life was spent in fighting against preconceived ideas. It
was maintained that there could only be seven planets, because God
had ordered all things in nature by sevens (“Dianoia Astronomica,”
1610); and even the discoveries of the spots on the sun and the
mountains in the moon were discredited on the ground that celestial
bodies could have no blemishes. “How great and common an error,”
writes Galileo, “appears to me the mistake of those who persist in
making their knowledge and apprehension the measure of the
knowledge and apprehension of God, as if that alone were perfect
which they understand to be so. But ... nature has other scales of
perfection, which we, being unable to comprehend, class among
imperfections.
“If one of our most celebrated architects had had to distribute the
vast multitude of fixed stars over the great vault of heaven, I believe
he would have disposed them with beautiful arrangements of
squares, hexagons, and octagons; he would have dispersed the
larger ones among the middle-sized or lesser, so as to correspond
exactly with each other; and then he would think he had contrived
admirable proportions; but God, on the contrary, has shaken them
out from His hand as if by chance, and we, forsooth, must think that
He has scattered them up yonder without any regularity, symmetry,
or elegance.”
In one of Galileo’s “Dialogues” Simplicio says, “That the cause
that the parts of the earth move downwards is notorious, and
everyone knows that it is gravity.” Salviati replies, “You are out,
Master Simplicio: you should say that everyone knows that it is
called gravity; I do not ask you for the name, but for the nature
of the thing, of which nature neither you nor I know anything.”
Too often are we still inclined to put the name for the thing, and
to think when we use big words such as art, empire, liberty, and the
rights of man, that we explain matters instead of obscuring them.
Not one man in a thousand who uses them knows what he means;
no two men agree as to their signification.
The relativity of motion mentioned above was very elegantly
illustrated by Galileo. He called attention to the fact that if an artist
were making a drawing with a pen while in a ship that was in rapid
passage through the water, the true line drawn by the pen with
regard to the surface of the earth would be a long straight line with
some small dents or variations in it. Yet the very same line traced by
the pen upon a paper carried along in the ship made up a drawing.
Whether you saw a long uneven line or a drawing in the path that
the pen had traced depended altogether on the point of view with
which you regarded its motion.
Fig. 21.
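The two appearances of the pen's path can be put in a line of modern
notation (mine, not Galileo's). If the ship moves forward with speed
V, and the pen traces a small drawing (x(t), y(t)) upon the paper,
then relative to the earth the pen's track is

\[
X(t) = Vt + x(t), \qquad Y(t) = y(t):
\]

the large term Vt draws the small excursions of the drawing out into
one long, nearly straight line with slight dents in it.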
But the first great step in science which Galileo made when quite
a young professor at Pisa was the refutation of Aristotle’s opinion
that heavy bodies fell to the earth faster than light ones. In the
presence of a number of professors he dropped two balls, a large
and a small one, from the parapet of the leaning tower of Pisa. They
fell to the ground almost exactly in the same time. This experiment
is quite an easy one to try. One of the simplest ways is as follows:
Into any beam (the lintel of a door will do), and about four inches
apart, drive three smooth pins so as to project each about a quarter
of an inch; they must not have any heads. Take two unequal
weights, say of 1 lb. and 3 lbs. Anything will do, say a boot for one
and a pocket-knife for the other; fasten loops of fine string to them,
put the loops over the centre peg of the three, and pass the strings
one over each of the side pegs. Now of course if you hitch the loops
off the centre peg P the objects will be released together. This can
be done by making a loop at the end of another piece of string, A,
and putting it on to the centre peg behind the other loops. If the
string be pulled of course the loop on it pulls the other two loops off
the central peg, and allows the boot and the knife to drop. The boot
and the knife should be hung so as to be at the same height. They
will then fall to the ground together. The same experiment can be
tried by dropping two objects from an upper window, holding one in
each hand, and taking care to let them go together.
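The expected result of either experiment can be checked with a few
lines of code. The sketch below is mine, with a rounded modern value
for the strength of gravity and an assumed height for the tower of
Pisa: it computes the time of fall from the familiar formula
t = sqrt(2h/g), in which no mass appears at all.

```python
# Time to fall a height h from rest, neglecting the air: t = sqrt(2*h/g).
# Note that the mass of the falling body appears nowhere in the formula.
from math import sqrt

g = 9.81  # strength of gravity, m/s^2 (modern value)
for h in (2.0, 5.0, 56.0):  # a door lintel, an upper window, roughly the tower of Pisa
    print(f"h = {h:5.1f} m -> t = {sqrt(2 * h / g):.2f} s, whatever the mass")
```

The boot and the pocket-knife, or the two balls at Pisa, therefore
strike the ground together, so far as the air allows.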
This result is very puzzling; one does not understand it. It
appears as though two unequal forces produced the same effect. It is
as though a strong horse could run no faster than a weaker one.
Fig. 22.
The professors were so irritated at the result of this experiment,
and indeed at the general character of young Professor Galileo’s
attacks on the time-honoured ideas of Aristotle, that they never
rested till they worried him out of his very poorly paid chair at
Pisa. He then took a professorship at Padua.
Let us now examine this result and see why it is that the ideas we
should at first naturally form are wrong, and that the heavy body will
fall in exactly the same time as the light one.
We may reason the matter in this way. The heavy body has more
force pulling on it; that is true, but then, on the other hand there is
more matter which has got to be moved. If a crowd of persons are
rushing out of a building, the total force of the crowd will be greater
than the force of one man, but the speed at which they can get out
will not be greater than the speed of one man; in fact, each man in
the crowd has only force enough to move his own mass. And so it is
with the weights: each part of the body is occupied in moving itself.
If you add more to the body you only add another part which has
itself to move. A hundred men by taking hands cannot run faster
than one man.
But, you will say, cannot a man run faster than a child? Yes,
because his impelling power is greater in proportion to his weight
than that of a child.
If it were the fact that the attraction of gravity due to the earth
acted on some bodies with forces greater in proportion to their
masses than the forces that acted on other bodies, then it is true
that those different bodies would fall in unequal times. But it is an
experimental fact that the attractive force of gravity is always exactly
proportional to the mass of a body, and the resistance to motion is
also proportional to mass, hence the force with which a body is
moved by the earth’s attraction is always proportional to the
difficulty of moving the body. This would not be the case with other
methods of setting a body in motion. If I kick a small ball with all my
might, I shall send it further than a kick of equal strength would
send a heavier ball. Why? Because the impulse is the same in each
case, but the masses are different. But if those balls are pulled by
gravity, then, by the very nature of the earth’s attraction (the reason
of which we cannot explain), the small ball receives a little pull, and
the big ball receives a big pull, the earth exactly apportioning its pull
in each case to the mass of the body on which it has to act. It is to
this fact, that the earth pulls bodies with a strength always in each
case exactly proportional to their masses, that is due the result that
they fall in equal times, each body having a pull given to it
proportional to its needs.
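The whole argument can be condensed into a line of modern algebra
(the symbols are mine). Write m for the mass of the body and g for
the earth's pull on each unit of mass; then the force and the
acceleration are

\[
F = mg, \qquad a = \frac{F}{m} = \frac{mg}{m} = g,
\]

the same g for every body, great or small. A kick is different:
there the impulse J is fixed by the strength of the leg, not
apportioned to the mass, so the velocity imparted, v = J/m, is
smaller for the heavier ball.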
The error of the view of Aristotle was not only demonstrated by
Galileo by experiment, but was also demonstrated by argument. In
this argument Galileo imitated the abstract methods of the
Aristotelians, and turned those methods against themselves. For he
said, “You” (the Aristotelians) “say that a lighter body will fall more
slowly than a heavy one. Well, then, if you bind a light body on to a
heavy one by means of a string, and let them fall together, the light
body ought to hang behind, and impede the heavy body, and thus
the two bodies together ought to fall more slowly than the heavy
body alone; this follows from your view: but see the contradiction.
For the two bodies tied together constitute a heavier body than the
heavy body alone, and thus, on your own theory, ought to fall more
quickly than the heavy body alone. Your theory, therefore,
contradicts itself.”
The truth is that each body is occupied in moving itself without
troubling about moving its neighbour, so that if you put any number
of marbles into a bag and let them drop they all go down
individually, as it were, and all in the time which a single marble
would take to fall. For any other result would be a contradiction. If
you cut a piece of bread in two, and put the two halves together,
and tie them together with a thread, will the mere fact that they are
two pieces make each of them fall more slowly than if they were
one? Yet that is what you would be bound to assert on the
Aristotelian theory. Hold an egg in your open hand and jump down
from a chair. The egg is not left behind; it falls with you. Yet you are
the heavier of the two, and on Aristotelian principles you ought to
leave the egg behind you. It is true that when you jump down a
bank your straw hat will often come off, but that is because the air
offers more resistance to it than the air offers to your body. It is the
downward rush through the air that causes your hat to be left
behind, just as wind will blow your hat off without blowing you away.
For since motion is relative, it is all one whether you jump down
through the air, or the air rushes past you, as in a wind. If there
were no air, the hat would fall as fast as your body.
This is easy to see if we have an airpump and are thus enabled to
pump out almost all the air from a glass vessel. In that vessel so
exhausted, a feather and a coin will fall in equal times. If we have
not an airpump, we can try the experiment in a more simple way.
For let us put a feather into a metal egg-cup and drop them
together. The cup will keep the air from the feather, and the feather
will not come out of the cup. Both will fall to the ground together.
But if the lighter body fall more slowly, the feather ought to be left
behind. If, however, you tie some strings across a napkin ring so as
to make a sort of rough sieve, and put a feather in it, and then drop
the ring, then as the ring falls the air can get through the bottom of
the ring and act on the feather, which will be left floating as the ring
falls.
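The part played by the air can itself be sketched in a few lines of
code. The figures below are illustrative guesses of mine, not
measurements: the point is only that a feather's drag is large
compared with its weight, while a coin's is not.

```python
# Fall of a coin and a feather through 2 m, with a drag force drag*v**2
# opposing the motion; setting drag = 0 gives the airpump (vacuum) case.

def fall_time(mass, drag, height=2.0, g=9.81, dt=1e-4):
    v = y = t = 0.0
    while y < height:
        a = g - (drag / mass) * v * v  # net downward acceleration
        v += a * dt
        y += v * dt
        t += dt
    return t

print("in air:    coin %.2f s, feather %.2f s" %
      (fall_time(0.010, 1e-5), fall_time(0.0005, 5e-4)))
print("in vacuum: coin %.2f s, feather %.2f s" %
      (fall_time(0.010, 0.0), fall_time(0.0005, 0.0)))
```

With the air in, the feather takes distinctly longer to reach the
ground; with the air pumped out, the two times agree.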
Let us now go on to examine the second fallacy that was derived
from the Aristotelians, and that so long impeded the advance of
science, namely, that the earth must be at rest.
The principal reason given for this was that if bodies were thrown
up from the earth they ought, if the earth were in motion, to remain
behind. Now, if this were so, then it would follow that if a person in
a train which was moving rapidly threw a ball vertically, that is
perpendicularly, up into the air, the ball, instead of coming back into
his hand, ought to hit the side of the carriage behind him. The next
time any of my readers travel by train he can easily satisfy himself
that this is not so. But there are other ways of proving it. For
instance, if a little waggon running on rails has a spring gun fixed in
it in a perpendicular position, so arranged that when the waggon
comes to a particular point on the rails a catch releases the trigger
and shoots a ball perpendicularly upwards, it will be found that the
ball, instead of going upwards in a vertical line, is carried along over
the waggon, and the ball as it ascends and descends keeps always
above the waggon, just as a hawk might hover over a running
mouse, and finally falls not behind the waggon, but into it.
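The waggon experiment reduces to a short computation (my numbers
are invented for illustration). Ball and waggon share the same
forward speed, so while the ball is in the air the two cover exactly
the same ground.

```python
# Spring-gun waggon: the ball keeps the waggon's forward speed while aloft.
g = 9.81         # m/s^2
v_waggon = 4.0   # forward speed of the waggon (and of the ball), m/s
v_up = 6.0       # vertical speed given to the ball by the spring gun, m/s

t_flight = 2 * v_up / g          # time for the ball to rise and fall again
x_ball = v_waggon * t_flight     # ground covered by the ball meanwhile
x_waggon = v_waggon * t_flight   # ground covered by the waggon

print(f"flight time {t_flight:.2f} s; "
      f"ball lands {x_ball - x_waggon:.2f} m from the waggon")  # 0.00 m
```

This is why the ball hovers over the waggon like the hawk over the
mouse, and falls back into it.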
So, again, if an article is dropped out of the window of a train, it
will not simply be left behind as it falls, but while it falls it will also
partake of the motion of the train, and touch the ground, not behind
the point from which it was dropped, but just underneath it.
The reason is, that when the ball is dropped or thrown it acquires
not only the motion given to it by the throw, or by gravity, but it
takes also the motion of the train from which it is thrown. If a ball is
thrown from the hand, it derives its motion from the motion of the hand.