Lecture 1
Grading
30% Midterm Exam
30% Homework
40% Final Exam
Goodfellow I., Bengio Y., Courville A., Deep Learning, MIT Press, 2016.
Contents of the Course
1. Introduction
2. Learning Processes
3. Optimization Techniques
4. Single-Layer Perceptrons
5. Multi-Layer Perceptrons
6. Backpropagation Algorithm
7. Neural Network based Control
8. Fuzzy Sets
9. Fuzzy Rules and Fuzzy Reasoning
10. Fuzzy Inference Systems
11. Fuzzy Logic Based Control
12. Genetic Algorithms
13. Other Derivative-Free Global Optimization Methods
14. Genetic Algorithms in Control
Other Topics
Topics within the scope of computational intelligence that are not covered in this course:
[Figure: classical feedback control loop with reference input r(t), error e(t), controller output u(t), system output c(t), and a feedback measurement system closing the loop.]
Soft Computing
Work on artificial neural networks has been motivated right from its inception
by the recognition that the human brain computes in an entirely different way
from the conventional digital computer.
Adaptivity
ANNs have a built-in capability to adapt their synaptic weights to changes in the surrounding environment. An ANN trained to operate in a specific environment can easily be retrained to deal with minor changes in the operating conditions. When operating in a nonstationary environment (one whose statistics change with time), an ANN can be designed to change its synaptic weights in real time.
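As a rough illustration of this adaptivity (added here, not from the slides; the update shown is a standard LMS/delta-rule step and the simulated environment is hypothetical), a single linear neuron can adjust its weights online as each new sample arrives:

```python
import numpy as np

def lms_update(w, x, d, lr=0.01):
    """One online LMS-style weight update for a single linear neuron.

    w  : current weight vector
    x  : current input vector
    d  : desired (target) output for this input
    lr : learning rate
    """
    y = np.dot(w, x)           # neuron output for the current input
    e = d - y                  # instantaneous error
    return w + lr * e * x      # adjust the weights to reduce the error

# Example: the weights keep adapting as the data stream (environment) changes.
rng = np.random.default_rng(0)
w = np.zeros(3)
for t in range(1000):
    x = rng.normal(size=3)
    # hypothetical nonstationary environment: the target mapping changes at t = 500
    true_w = np.array([1.0, -2.0, 0.5]) if t < 500 else np.array([-1.0, 0.0, 2.0])
    d = true_w @ x
    w = lms_update(w, x, d)
```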
Fault Tolerance
An ANN is inherently fault tolerant: because its knowledge is distributed over many synaptic weights, damage to individual neurons or connections degrades its performance gracefully rather than causing catastrophic failure.
The pyramidal cell can receive 10000 or more synaptic contacts and it can project onto
thousands of target cells.
[Figure: a biological neuron compared with an artificial neuron.]
THE PERCEPTRON
u_k = \sum_{j=1}^{m} w_{kj} x_j, \qquad v_k = u_k + b_k, \qquad y_k = \varphi(v_k)

[Figure: perceptron with inputs x_0, x_1, x_2, \dots, x_n, their weights, and the activation function.]

v_k = \sum_{i=1}^{n} w_{ki} x_i + w_0 = \mathbf{w} \cdot \mathbf{x}

y_k = \varphi(v_k) = \begin{cases} 1, & v_k \ge 0 \\ 0, & v_k < 0 \end{cases}
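As a minimal numeric sketch of these formulas (not part of the original slides; the function name and the AND example are illustrative), the perceptron forward pass with the threshold activation can be written as:

```python
import numpy as np

def perceptron_output(x, w, b):
    """y_k = phi(v_k), where v_k = w . x + b and phi is the threshold activation."""
    v = np.dot(w, x) + b        # induced local field v_k
    return 1 if v >= 0 else 0   # phi(v) = 1 if v >= 0, else 0

# Illustrative example: a perceptron realizing the logical AND of two inputs.
w = np.array([1.0, 1.0])
b = -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, perceptron_output(np.array(x, dtype=float), w, b))
```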
Activation functions
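The original figure of the activation functions is not reproduced here; as a brief sketch (added, not from the slides), typical choices include the threshold function used above, the logistic (sigmoid), the hyperbolic tangent, and, in modern deep networks, the ReLU:

```python
import numpy as np

def threshold(v):
    return np.where(v >= 0, 1.0, 0.0)    # hard limiter, outputs 0 or 1

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))      # smooth, outputs in (0, 1)

def tanh(v):
    return np.tanh(v)                    # smooth, outputs in (-1, 1)

def relu(v):
    return np.maximum(0.0, v)            # piecewise linear, common in deep networks
```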
Historical Notes
First studies on neural networks began with the pioneering work of
McCulloch and Pitts (1943)
McCulloch was a neuroanatomist and Pitts was a mathematician.
They combined the studies of neurophysiology and mathematical logic
to describe the logical calculus of neural networks in their classical
paper of 1943.
In 1958 Rosenblatt introduced the perceptron.
In 1960, Widrow and Hoff developed the least mean square (LMS) method to train the perceptron.
In 1969, Minsky and Papert wrote a book to demonstrate mathematically
that there are fundamental limits on what single-layer perceptrons
can compute.
The 1970s were a period of pessimism. In addition to the negative effect of Minsky and Papert's book, there were technological limitations and financial problems.
In the 1980s, there was a resurgence of interest in neural networks.
In 1982, Hopfield used the idea of an energy function and worked
on recurrent neural networks.
In 1982, Kohonen published a paper on self-organising maps.
In 1983, Barto, Sutton and Anderson published a paper on
reinforcement learning.
In 1986, Rumelhart, Hinton and Williams developed the famous
backpropagation algorithm.
In the early 1990s, Vapnik and his coworkers invented support vector machines for solving pattern recognition, regression, and density estimation problems.
Now, there is again a resurgence of interest in neural networks: deep neural networks that operate on big data with little or no hand-crafted preprocessing.
NETWORK ARCHITECTURES
1. Single-Layer Feedforward Networks
2. Multi-Layer Feedforward Networks
3. Recurrent Networks (e.g., the Hopfield network)
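To make the distinction concrete, here is a minimal sketch (illustrative, not from the slides; the shapes and the use of tanh are assumptions) of the forward computation for each architecture:

```python
import numpy as np

def single_layer(x, W, b):
    """Single-layer feedforward network: inputs map directly to output neurons."""
    return np.tanh(W @ x + b)

def multi_layer(x, W1, b1, W2, b2):
    """Multi-layer feedforward network: a hidden layer of nonlinear units."""
    h = np.tanh(W1 @ x + b1)    # hidden layer
    return W2 @ h + b2          # output layer (linear here)

def recurrent_step(x, h_prev, Wx, Wh, b):
    """Recurrent network: the previous state h_prev is fed back as an input."""
    return np.tanh(Wx @ x + Wh @ h_prev + b)
```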
Architectures
How the neurons are connected to each other
The number of layers
The number of neurons in each layer
Feedforward or recurrent architectures
Learning Mechanism
How the neuron weights are updated during training
Supervised learning vs. unsupervised learning (see the sketch after this list)
Choosing among various learning algorithms
Determining the training and testing data
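As a rough illustration of the supervised/unsupervised distinction (added here, not from the slides; the update rules are generic examples): supervised learning uses a desired target for each input, while unsupervised learning uses only the inputs themselves.

```python
import numpy as np

def supervised_step(w, x, d, lr=0.1):
    """Supervised: a teacher supplies the desired output d for input x."""
    e = d - np.dot(w, x)        # error against the desired response
    return w + lr * e * x

def unsupervised_step(c, x, lr=0.1):
    """Unsupervised: no targets; e.g., move a prototype c toward the input x."""
    return c + lr * (x - c)     # simple clustering-style update
```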
SOME APPLICATIONS OF NEURAL NETWORKS
Function Approximation
[Figure: neural-network-based control loop with reference input x_d(t), error e(t), neural controller output u(t), and system output x(t); a learning algorithm uses the error signal to update the controller parameters.]
Nonlinear Prediction