Lecture 1

Intelligent Control Systems

Gülay Öke Günel


Room: 4220
E-mail: [email protected]
Grading Policy

30% Midterm Exam
30% Homework
40% Final Exam

There will be no VF conditions.
All students can take the final exam.
Textbook
 Haykin, S., Neural Networks and Learning Machines, Third Edition,
Pearson Prentice Hall, New Jersey, 2009
 Jang, J.-S. R., C.-T. Sun, E. Mizutani, Neuro-Fuzzy and Soft Computing,
PTR Prentice-Hall, 1997.
Other references

 Alpaydın E., Introduction to Machine Learning, The MIT Press, 2010.
 Engelbrecht A.P., Computational Intelligence: An Introduction, John Wiley and Sons, 2007.
 Nguyen H.T., Prasad N.R., Walker C.L., Walker E.A., A First Course
in Fuzzy and Neural Control, CRC Press, 2003.
 Efe M.Ö., Kaynak O., Yapay Sinir Ağları ve Uygulamaları,
Boğaziçi Üniversitesi, 2000.
 Elmas Ç., Bulanık Mantık Denetleyiciler, Seçkin Yayıncılık, 2003.
Further Reading

 Goodfellow I., Bengio Y., Courville A., Deep Learning, MIT Press, 2016.
Contents of the Course

1. Introduction
2. Learning Processes
3. Optimization Techniques
4. Single-Layer Perceptrons
5. Multi-Layer Perceptrons
6. Backpropagation Algorithm
7. Neural Network based Control
8. Fuzzy Sets
9. Fuzzy Rules and Fuzzy Reasoning
10. Fuzzy Inference Systems
11. Fuzzy Logic Based Control
12. Genetic Algorithms
13. Other Derivative-Free Global Optimization Methods
14. Genetic Algorithms in Control
Other Topics
Topics within the scope of computational intelligence that are not
covered in this course:

 Adaptive Neuro-Fuzzy Inference System (ANFIS)
 Radial Basis Functions
 Hopfield Models
 Support Vector Machines
 Kohonen Self-Organizing Maps
 Learning Vector Quantization
 Principal Component Analysis
 Committee Machines
 Regression Trees
 Data Clustering Algorithms
 Information Theoretic Models
 Artificial Immune Systems
 Recurrent Neural Networks
 Deep Learning
……..
A Control System

[Figure: block diagram of a control system — the reference input r(t) is compared with the measured output; the resulting error e(t) drives the CONTROLLER, whose output u(t) is applied to the SYSTEM to produce the output c(t); a FEEDBACK MEASUREMENT SYSTEM closes the loop.]


Attribute | Desired Value | Purpose | Specifications
Stability | High | Response should not grow without limit; it should decay to the desired value | Percentage overshoot, pole locations, time constants, damping ratios, phase and gain margins
Speed of Response | Fast | System should respond quickly to the inputs | Rise time, peak time, delay time, natural frequencies
Steady-state error | Low | Error between the output of the system and the desired output, after all the transients die out, should be low | Error tolerance for a step input
Robustness | High | Accurate response under uncertain conditions and under parameter variations | Input noise tolerance, measurement error tolerance, model error tolerance
Conventional Control Techniques

 Proportional-Integral-Derivative (PID) Control
 Optimal Control
   Linear Quadratic Gaussian (LQG) control
 Adaptive Control
   Model Reference Adaptive Control (MRAC)
 Nonlinear Feedback Control
 Sliding Mode Control
 Robust Control
 Stochastic Control
…………..
The need for an intelligent controller arises from:

 Inability to model the system
 Inability to build a controller for the system
 Performance problems
 Incomplete/unreliable information
Significant research has been carried out in understanding
and emulating human intelligence while, in parallel,
developing inference engines for processing human knowledge.

The resultant techniques incorporate notions gathered from
a wide range of specializations such as neurology, psychology,
operations research, conventional control theory, computer
science and communications theory.

Many of the results of this effort are used in Control Engineering,
and their combination has led to new techniques for dealing with
vagueness and uncertainty.

This is the domain of Soft Computing (another related term is
computational intelligence), which focuses on stochastic, vague, empirical and
associative situations, typical of the industrial and manufacturing environment.

Intelligent Controllers (also referred to as soft controllers) are derivatives of
Soft Computing, characterized by their ability to establish the functional
relationship between their inputs and outputs from empirical data.

This is an important difference between intelligent controllers and
conventional controllers, which are based on explicit functional relations.

The functional relationship between the inputs and outputs of an
intelligent controller can be established either
 Directly – from a specified training set (ANN)
 Indirectly – by means of a relational algorithm or knowledge base (FUZZY)
Soft computing techniques possess the following properties:

 They demonstrate adaptability.
 They are able to learn.
 They are able to reason and decide.
 They have an inherent tolerance to errors.
 They have the graceful degradation property.
 They possess speeds comparable to those of humans.
[Figure: performance vs. parameters, illustrating graceful degradation]
Soft Computing

Soft Computing is an emerging approach to computing which parallels
the remarkable ability of the human mind to reason and learn in
an environment of uncertainty and imprecision.
(Lotfi A. Zadeh, 1992)

Lotfi Zadeh (1921-2017), founder of fuzzy logic
 Intelligent controllers and soft computing techniques have been
successfully implemented for nearly four decades now, for the solution
of numerous control problems from vastly different areas.
 Intelligent controllers have been designed to work alone or
in combination with conventional controllers.
 It is possible to employ a soft computing technique as part of
a conventional controller.
For example, you can design a PID controller whose K_P, K_I and K_D gains
are tuned by a neural network, a fuzzy controller or a genetic algorithm, as sketched below.
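A minimal Python sketch of this idea (the class name and the external-tuning interface are illustrative assumptions, not a specific published scheme): a discrete-time PID controller whose gains can be overwritten at each step by any external tuner, such as a neural network or a fuzzy system.

```python
class TunablePID:
    """Discrete-time PID controller whose gains may be retuned online."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        # An external tuner (neural network, fuzzy system, GA) calls this.
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, error):
        # Standard PID law: u = Kp*e + Ki*integral(e) + Kd*de/dt
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

At every sampling instant the soft computing block maps, say, (e, de/dt) to a gain triple and calls set_gains before step; the controller structure itself stays conventional.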
AN INTRODUCTION TO NEURAL NETWORKS
Neural Networks (neurocomputers, connectionist networks,
parallel distributed processors)

Work on artificial neural networks has been motivated right from its inception
by the recognition that the human brain computes in an entirely different way
from the conventional digital computer.

The brain is a highly complex, nonlinear, and parallel computer (information-
processing system). It has the capability to organize its structural constituents,
known as neurons, so as to perform certain computations many times faster than
the fastest digital computer in existence today.
(Haykin, 1999)
A neural network is a massively parallel distributed processor
made up of simple processing units, which has a natural propensity
for storing experiential knowledge and making it available for use.
It resembles the brain in two respects:
1. Knowledge is acquired by the network from its environment
through a learning process.
2. Interneuron connection strengths, known as synaptic weights,
are used to store the acquired knowledge.
Main Features of Neural Networks
Nonlinearity
An artificial neuron can be linear or nonlinear. A NN made up of
an interconnection of nonlinear neurons is itself nonlinear and
this nonlinearity is distributed throughout the network.

Adaptivity
ANNs have a built-in capability to adapt their synaptic weights to changes
in the surrounding environment. An ANN trained to operate in a
specific environment can be easily retrained to deal with minor changes
in the operating environmental conditions. When operating in a
nonstationary environment (one where statistics change with time),
an ANN can be designed to change its synaptic weights in real time.
Fault Tolerance
An ANN is inherently fault tolerant, or capable of robust computation,
in the sense that its performance degrades gracefully under adverse
operating conditions. For example, if a neuron or its connecting links
are damaged, the performance of the ANN is not seriously degraded,
due to the distributed nature of information storage in the network.

[Figure: performance vs. parameters, illustrating graceful degradation]
Generalization
An ANN can produce reasonable outputs for inputs not encountered
during training (learning).
VLSI Implementability
The massively parallel nature of a neural network makes it well suited for
implementation using VLSI (very-large-scale-integrated) technology.
HUMAN BRAIN

[Figures: block diagram representation of the nervous system; main parts of the human brain; a real neural network imaged with an electron microscope]
http://www.neuralstainkit.com/images/in-p-1401b-NDT104.jpg
 The structural constituents of the brain are the nerve cells, known as neurons.
 There are approximately 10 billion neurons in the human cortex
and 60 trillion synapses or connections.
 Neurons are slower than silicon logic gates:
events in a silicon chip happen in the nanosecond ($10^{-9}$ s) range,
while neural events happen in the millisecond ($10^{-3}$ s) range.
 The brain makes up for the relatively slow rate of operation of a neuron
by having an enormous number of neurons with massive interconnections between them.
Basic Neuron Types

 Synapses are elementary structural and functional units that mediate the
interaction between neurons.
 The most common kind of synapse is a chemical synapse.
 A presynaptic process liberates a transmitter substance that diffuses across
the synaptic junction between neurons and then acts on a postsynaptic process.
 A synapse converts a presynaptic electrical signal into a chemical signal
and then back into a postsynaptic electrical signal.

The pyramidal cell

The pyramidal cell can receive 10,000 or more synaptic contacts and it can
project onto thousands of target cells.
[Figure: biological neuron vs. artificial neuron]

THE PERCEPTRON

$$u_k = \sum_{j=1}^{m} w_{kj} x_j$$

$$v_k = u_k + b_k$$

$$y_k = \varphi(v_k)$$

With the augmented input vector $x = [x_0 \; x_1 \; x_2 \; \dots \; x_n]^T$ (where $x_0 = 1$)
and the weight vector $w = [w_{k0} \; w_{k1} \; w_{k2} \; \dots \; w_{kn}]$, the induced local field is

$$v_k = \sum_{i=1}^{n} w_{ki} x_i + w_{k0} = w \cdot x$$

and the output is given by the threshold activation

$$y_k = \varphi(v_k) = \begin{cases} 1, & v_k \ge 0 \\ 0, & v_k < 0 \end{cases}$$
Activation functions

[Figure: plots of common activation functions]
Historical Notes
First studies on neural networks began with the pioneering work of
McCulloch and Pitts (1943)
McCulloch was a neuroanatomist and Pitts was a mathematician.
They combined the studies of neurophysiology and mathematical logic
to describe the logical calculus of neural networks in their classical
paper of 1943.
In 1958 Rosenblatt introduced the perceptron.
In 1960, Widrow and Hoff developed the least mean squares (LMS)
algorithm and used it to train the Adaline, an adaptive linear neuron.
In 1969, Minsky and Papert wrote a book to demonstrate mathematically
that there are fundamental limits on what single-layer perceptrons
can compute.
The 1970s were a pessimistic period. In addition to the negative effect of Minsky and
Papert’s book, there were technological limitations and financial problems.
In 1980s, there was a resurgence of interest in neural networks.
In 1982, Hopfield used the idea of an energy function and worked
on recurrent neural networks.
In 1982, Kohonen published a paper on self-organising maps.
In 1983, Barto, Sutton and Anderson published a paper on
reinforcement learning.
In 1986, Rumelhart, Hinton and Williams developed the famous
backpropagation algorithm.
In the early 1990s, Vapnik and his coworkers invented
support vector machines for solving pattern recognition, regression
and density estimation problems.
NOW, there is a resurgence of interest in neural networks:
Deep neural networks that operate on big data without preprocessing.
NETWORK ARCHITECTURES

1. Single-Layer Feedforward Networks

2. Multi-Layer Feedforward Networks

3. Recurrent Networks
Single-Layer Feedforward Networks
Multi-Layer Feedforward Networks

[Figure: a 10-4-2 multi-layer feedforward network]

By adding hidden layers, the network is able to extract higher-order statistics.
The network acquires a global perspective despite its local connections, due to the
extra set of synaptic connections and neural interactions.
Recurrent Neural Networks

A recurrent neural network is different from a feedforward neural network
in that it contains at least one feedback loop.

Hopfield network: a recurrent neural network with no self-feedback loops
and no hidden neurons.

[Figure: recurrent neural network with hidden neurons]
Artificial Neural Networks: Design Options

Architectures
How the neurons are connected to each other
The number of layers
The number of neurons in each layer
Feedforward or recurrent architectures

Properties of the neurons


The relation between the input and the output of the neuron
The properties of the nonlinear function

Learning Mechanism
How the neuron weights are updated during training
Supervised learning vs. unsupervised learning
Choosing among various learning algorithms
Determining the training data and the testing data
SOME APPLICATIONS OF
NEURAL NETWORKS
Function Approximation

Consider a nonlinear input-output mapping described by

$$d = f(x)$$

The vector-valued function $f(\cdot)$ is unknown.
We are given the set of labeled examples:

$$\{(x_i, d_i)\}_{i=1}^{N}$$

The requirement is to design a neural network whose input-output
mapping, described by $F(\cdot)$, is close to $f(\cdot)$ in the Euclidean sense:

$$\| F(x) - f(x) \| < \varepsilon \quad \text{for all } x$$

(NN as a universal approximator – Stone-Weierstrass Theorem)
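A minimal sketch of this setup (the library choice, network size, and the test function standing in for the unknown f are my assumptions): fit a small multilayer perceptron F(.) to labeled samples and check the worst-case deviation on a dense grid.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Unknown mapping f (here stood in by a known test function)
f = lambda x: np.sin(2 * np.pi * x)

# Labeled examples {(x_i, d_i)}, i = 1..N
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
d = f(X).ravel()

# Train a network F(.) to approximate f(.)
F = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
F.fit(X, d)

# Check ||F(x) - f(x)|| on a dense grid (should fall below some epsilon)
x_grid = np.linspace(0, 1, 500).reshape(-1, 1)
worst_error = np.max(np.abs(F.predict(x_grid) - f(x_grid).ravel()))
print(f"max |F(x) - f(x)| on grid: {worst_error:.4f}")
```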


System Identification

Control

[Figure: block diagram of neural network based control — the NEURAL CONTROLLER receives the reference input x_d(t) and the error e(t) and produces the control signal u(t), which drives the SYSTEM to yield the output x(t); a LEARNING ALGORITHM generates the parameter update signal for the controller.]
Nonlinear Prediction

The requirement in the prediction problem is to predict the present value
x(n) of a process, given past values of the process that are
uniformly spaced in time, as shown by x(n-T), x(n-2T), ..., x(n-mT),
where T is the sampling period and m is the prediction order. A sketch of
building such training pairs is given below.
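A minimal sketch (assuming NumPy; the function name is illustrative) of building the delayed-input training pairs for a one-step predictor with T = 1 sample:

```python
import numpy as np

def make_prediction_pairs(x, m):
    """Build (input, target) pairs for one-step prediction with order m.

    Input at time n:  [x(n-1), x(n-2), ..., x(n-m)]  (T = 1 sample)
    Target at time n:  x(n)
    """
    X, d = [], []
    for n in range(m, len(x)):
        X.append(x[n - m: n][::-1])  # past m values, most recent first
        d.append(x[n])               # present value to be predicted
    return np.array(X), np.array(d)

# Example: a noisy sinusoid, prediction order m = 5
t = np.arange(300)
x = np.sin(0.1 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
X, d = make_prediction_pairs(x, m=5)
print(X.shape, d.shape)  # (295, 5) (295,)
```

The pairs (X, d) can then be fed to any of the network architectures above, trained to map the past window to the present value.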
Pattern Classification

Pattern classification is formally defined as the process whereby a received pattern/signal
is assigned to one of a prescribed number of classes (categories).
Pattern recognition performed by a neural network is statistical in nature,
with the patterns being represented by points in a multidimensional decision space.
The decision space is divided into regions, each of which is associated with a class.
[Figures: signal-flow graph of the perceptron; a pair of linearly separable patterns; a pair of non-linearly separable patterns]
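To make the linear-separability idea concrete, here is a minimal sketch (the data, names and use of NumPy are illustrative assumptions; learning rules are the topic of coming lectures) in which a single perceptron finds a separating line between two classes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two linearly separable clusters of points in the plane
class0 = rng.normal(loc=[-2, -2], scale=0.5, size=(50, 2))
class1 = rng.normal(loc=[+2, +2], scale=0.5, size=(50, 2))
X = np.vstack([class0, class1])
y = np.array([0] * 50 + [1] * 50)

# Rosenblatt's perceptron learning rule (covered in a later lecture)
w = np.zeros(3)                        # [bias, w1, w2]
for _ in range(20):                    # a few passes over the data
    for xi, yi in zip(X, y):
        x_aug = np.concatenate(([1.0], xi))
        y_hat = 1 if np.dot(w, x_aug) >= 0 else 0
        w += (yi - y_hat) * x_aug      # update only on misclassification
print("decision boundary weights:", w)
```

For a non-linearly separable pair of patterns (e.g. XOR), no such weight vector exists, which is exactly the limitation Minsky and Papert pointed out for single-layer perceptrons.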
This was a brief introduction to intelligent controllers
and neural networks.

Next week, we’ll start studying neural networks in more detail.

And our first topic will be learning processes …