
INTELLIGENT CONTROL SYSTEM

PART 1: Neural Network
Instructor: Dr. Dang Xuan Ba
Email: [email protected]

Department of Automatic Control


Content

Chapter 1: Introduction to Neural Networks

Chapter 2: Single-layer Feedforward Neural Network

Chapter 3: Multi-layer Neural Network

Chapter 4: RBF Neural Network

Chapter 5: Several Applications



CHAPTER 2 – SINGLE-LAYER
FEEDFORWARD NEURAL NETWORK
(Perceptron)



Chapter 2: Single-layer Feedforward Neural Network
1. Structure:
[Diagram: fully connected single-layer network with inputs x1…xm, weights w11…wnm, outputs y1…yn]

Input signal: $x = [x_1; x_2; \dots; x_m]^T$

Output signal: $y = [y_1; y_2; \dots; y_n]^T$

Weight matrix: $W = [w_{ij}] \in \mathbb{R}^{n \times m}$

Its applications include classification, approximation, and recognition.



Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU)

Example: Classification of two distinct sets.

What will we do?



Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU)

[Diagram: single neuron with inputs x1…xn, weights w1…wn, integration $\sum_i w_i x_i - \theta$, threshold activation, output y]

Integration function: linear or quadratic functions.

Activation function: threshold functions.

It's good for explicit classification applications.
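For reference, a common choice of threshold activation (the hardlim function used in the learning algorithm below) is:

$$ \mathrm{hardlim}(net) = \begin{cases} 1, & net \ge 0 \\ 0, & net < 0 \end{cases} $$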



Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU)

[Diagram: single neuron with inputs x1…xn, weights w1…wn, integration $\sum_i w_i x_i - \theta$, threshold activation, output y]

Learning algorithm (supervised learning). Consider the k-th sample:

Input signal: $x = [x_1; x_2; \dots; x_n; -1]^T$    Labelled desired signal: $d_k$

Integration signal: $net = w^T x$

NN output signal: $y = \mathrm{hardlim}(w^T x)$

Learning law: $w_{k+1} = w_k + \eta\,(d_k - y_k)\,x_k$
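As a concrete illustration, here is a minimal Python sketch of one LTU learning step (the function and variable names are our own, not from the slides):

import numpy as np

def hardlim(net):
    # Threshold activation: 1 if net >= 0, else 0.
    return 1.0 if net >= 0 else 0.0

def ltu_update(w, x, d, eta=0.1):
    # One learning step: w_{k+1} = w_k + eta * (d_k - y_k) * x_k.
    # x is the augmented input [x1, ..., xn, -1], so the threshold
    # theta is learned as the last component of w.
    y = hardlim(w @ x)
    return w + eta * (d - y) * x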



Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU)

[Figure: illustration of the learning law, showing the weight vector w, the augmented input $x = [x_1; x_2; \dots; x_n; -1]^T$, and the error e]


Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU) / Implementation

[Flowchart: epoch-based training procedure]

1. Initialization: choose $\eta$, $E_{stop}$, $w_0$, $epoch_{max}$; set $epoch = 0$.
2. Start an epoch: set $E = 0$, $k = 0$.
3. Forward computation for sample k: $net = w^T x_k$, $y_k = \mathrm{hardlim}(net)$.
4. Error calculation: $e_k = d_k - y_k$; accumulate $E = E + e_k^2$.
5. Updating law: $w_{k+1} = w_k + \eta\, e_k\, x_k$; increment k and repeat from step 3 while $k \le K$.
6. Stop checking: if $E \le E_{stop}$ or $epoch = epoch_{max}$, done; otherwise increment epoch and repeat from step 2.

(K denotes the number of training samples.)
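Putting the flowchart together, a minimal Python training loop might look like this (a sketch under the same assumptions as above; names are our own):

import numpy as np

def train_ltu(X, D, eta=0.1, E_stop=0.0, epoch_max=100):
    # X: (K, n+1) augmented inputs, each row ending in -1.
    # D: (K,) desired labels in {0, 1}.
    w = np.zeros(X.shape[1])                  # w0: initial weights
    for epoch in range(epoch_max):
        E = 0.0
        for x, d in zip(X, D):                # one pass over the K samples
            y = 1.0 if w @ x >= 0 else 0.0    # forward computation
            e = d - y                         # error calculation
            w = w + eta * e * x               # updating law
            E += e ** 2                       # accumulated squared error
        if E <= E_stop:                       # stop checking
            break
    return w, E, epoch

For a linearly separable task such as the AND truth table, this loop converges to zero error within a few epochs.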
Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU) / Theorem

Theorem: If the input data are linearly separable, the LTU learning law converges within a finite number of iterations.
Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU) / Example

Example 1: Design a network that separates the following data into two distinct classes.

[Figure: sample data for Example 1]
Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU) / Example

Example 2: Design a network that separates the following data into two distinct classes.

[Figure: sample data for Example 2]

Example 3: Design a network that separates the following data into two distinct classes.

x1 | x2 | d = x1 XOR x2
---+----+--------------
 0 |  0 | 0
 0 |  1 | 1
 1 |  0 | 1
 1 |  1 | 0
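The XOR labels in Example 3 are not linearly separable, so a single LTU cannot solve it. The following sketch (reusing train_ltu from the implementation slide) illustrates this: the accumulated error never reaches zero.

import numpy as np

# XOR truth table with augmented inputs [x1, x2, -1].
X = np.array([[0, 0, -1],
              [0, 1, -1],
              [1, 0, -1],
              [1, 1, -1]], dtype=float)
D = np.array([0, 1, 1, 0], dtype=float)

w, E, epoch = train_ltu(X, D, eta=0.5, epoch_max=1000)
print(E)  # stays > 0: no single hyperplane separates XOR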



Chapter 2: Single-layer Feedforward Neural Network
2. Linear Threshold Unit (LTU) / Comment

The LTU is good for explicit classification applications.

The effectiveness of the neural network depends mainly on the characteristics of the problem, the data acquired, and the "intelligence" of the designer.


Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU)

Example: Classification of two sets.

What will we do?



Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU)

[Diagram: single neuron with inputs x1…xn, weights w1…wn, integration $\sum_i w_i x_i - \theta$, S-shaped activation, output y]

Integration function: linear or quadratic functions.

Activation function: S-shaped functions.

It's good for UNCERTAIN classification applications.
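For reference, the two S-shaped activations used in the examples below are the unipolar and bipolar sigmoids (written here with a slope parameter $\lambda$, as in Example 1):

$$ \mathrm{logsig}(net) = \frac{1}{1 + e^{-\lambda\, net}}, \qquad \mathrm{tansig}(net) = \tanh(\lambda\, net) $$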



Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU)

[Diagram: single neuron with inputs x1…xn, weights w1…wn, integration $\sum_i w_i x_i - \theta$, S-shaped activation, output y]

Learning algorithm (supervised learning). Consider the k-th sample:

Input signal: $x = [x_1; x_2; \dots; x_n; -1]^T$    Labelled desired signal: $d_k$

Integration signal: $net = f(w^T x)$

NN output signal: $y = a(net)$

Error: $e_k = d_k - y_k$

Learning law (delta rule): $w_{k+1} = w_k - \eta\, e_k \left( \frac{\partial e}{\partial w} \Big|_k \right)^T$
Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU) / Implementation

[Flowchart: epoch-based training procedure]

The implementation follows the same epoch-based flow as the LTU (initialization, forward computation, error calculation, updating law, stop checking), with the forward computation $net = f(w^T x)$, $y = a(net)$ and the delta-rule update $w_{k+1} = w_k - \eta\, e_k \left( \frac{\partial e}{\partial w} \Big|_k \right)^T$ in the updating-law step.
Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU) / Example 1

Example 1: Given a simple LGU network with two inputs (x1, x2) and one output (y) using the linear integration function and the unipolar sigmoid activation function (logsig), derive the learning rule of the system.

With $x = [x_1; x_2; -1]^T$:

$$ net = w^T x, \qquad y = a(net) = \frac{1}{1 + e^{-\lambda\, net}}, \qquad e_k = d_k - y_k $$

$$ \frac{\partial e}{\partial w}\Big|_k = \frac{\partial e}{\partial y}\Big|_k \frac{\partial y}{\partial net}\Big|_k \frac{\partial net}{\partial w}\Big|_k = -\frac{\lambda\, e^{-\lambda\, net_k}}{\left(1 + e^{-\lambda\, net_k}\right)^2}\, x^T = -\lambda\, y_k (1 - y_k)\, x^T $$

$$ \Rightarrow\; w_{k+1} = w_k - \eta\, e_k \left( \frac{\partial e}{\partial w}\Big|_k \right)^T = w_k + \eta\, e_k\, \lambda\, y_k (1 - y_k)\, x $$
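A minimal Python sketch of the resulting update (assuming the derivation above; the names are our own):

import numpy as np

def logsig(net, lam=1.0):
    # Unipolar sigmoid: y = 1 / (1 + exp(-lam * net)).
    return 1.0 / (1.0 + np.exp(-lam * net))

def lgu_update(w, x, d, eta=0.1, lam=1.0):
    # Delta-rule step using dy/dnet = lam * y * (1 - y):
    # w_{k+1} = w_k + eta * e_k * lam * y_k * (1 - y_k) * x_k.
    y = logsig(w @ x, lam)
    e = d - y
    return w + eta * e * lam * y * (1.0 - y) * x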



Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU) / Example 2

Example 2: Given a simple LGU network with two inputs (x1, x2) and one output (y) using the linear integration function and the bipolar sigmoid activation function (tansig), derive the learning rule of the system.
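A sketch of the derivation, following the same steps as Example 1 (assuming $y = \tanh(\lambda\, net)$ for tansig):

$$ \frac{\partial y}{\partial net} = \lambda\,(1 - y^2) \;\Rightarrow\; \frac{\partial e}{\partial w}\Big|_k = -\lambda\,(1 - y_k^2)\, x^T \;\Rightarrow\; w_{k+1} = w_k + \eta\, e_k\, \lambda\,(1 - y_k^2)\, x $$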



Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU) / Example 3

Example 3: Design a proper neural network to classify the following data into two sets.



Chapter 2: Single-layer Feedforward Neural Network
3. Linear Graded Unit (LGU) / Comment

The LGU is good for uncertain classification applications (e.g., with noise).

The effectiveness of the neural network depends mainly on the characteristics of the problem, the amount of data acquired (the larger, the better), and the "intelligence" of the designer.



Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE (Adaptive Linear Elements)

Example: Build a function representing the following data.

What will we do?



Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Structure

[Diagrams: a multi-output single-layer network (inputs x1…xm, weights w11…wnm, outputs y1…yn) and a single neuron with inputs x1…xn, integration $\sum_i w_i x_i - \theta$, linear activation, output y]

Integration function: linear or quadratic functions.

Activation function: linear functions.



Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Learning law

[Diagram: single neuron with inputs x1…xn, weights w1…wn, integration $\sum_i w_i x_i - \theta$, linear activation, output y]

Learning algorithm (supervised learning). Consider the k-th sample:

Input signal: $x = [x_1; x_2; \dots; x_n; -1]^T$    Labelled desired signal: $d_k$

Integration signal: $net = f(w^T x)$

NN output signal: $y = a(net)$

Error: $e_k = d_k - y_k$

Learning law (Widrow-Hoff rule): $w_{k+1} = w_k - \eta\, e_k \left( \frac{\partial e}{\partial w} \Big|_k \right)^T$
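With the linear activation $y = w^T x$, the Widrow-Hoff rule reduces to $w_{k+1} = w_k + \eta\, e_k\, x_k$ (the LMS rule). A minimal Python sketch (the names are our own):

import numpy as np

def adaline_update(w, x, d, eta=0.01):
    # Widrow-Hoff (LMS) step: with y = w @ x, de/dw = -x^T,
    # so w_{k+1} = w_k + eta * e_k * x_k.
    y = w @ x
    e = d - y
    return w + eta * e * x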
Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Implementation

[Flowchart: epoch-based training procedure]

The implementation follows the same epoch-based flow as the LTU and LGU (initialization, forward computation, error calculation, updating law, stop checking), with the linear forward computation and the Widrow-Hoff update in the updating-law step.
Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Example 1

Example 1: Given a simple ADALINE network with two inputs (x1, x2) and one output (y) using the linear integration function and linear activation function, derive the learning rule of the system.
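A sketch of the derivation (with $x = [x_1; x_2; -1]^T$, following the delta-rule steps of Section 3):

$$ y = net = w^T x \;\Rightarrow\; \frac{\partial e}{\partial w}\Big|_k = -x^T \;\Rightarrow\; w_{k+1} = w_k + \eta\, e_k\, x $$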



Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Example 2

Example 2: Design a neural network to approximate the internal dynamics of a DC motor.
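One possible approach is to fit a discrete-time linear model of the motor with an ADALINE. The sketch below assumes a hypothetical first-order model $\omega_{k+1} = a\,\omega_k + b\,u_k$ and synthetic data; the model, data, and names are our own illustration, not from the slides:

import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5                  # hypothetical motor parameters
u = rng.uniform(-1, 1, 200)                # input voltage sequence
omega = np.zeros(201)                      # motor speed
for k in range(200):
    omega[k + 1] = a_true * omega[k] + b_true * u[k]

# ADALINE with augmented input [omega_k, u_k, -1] predicting omega_{k+1}.
w = np.zeros(3)
for epoch in range(50):
    for k in range(200):
        x = np.array([omega[k], u[k], -1.0])
        e = omega[k + 1] - w @ x
        w = w + 0.05 * e * x               # Widrow-Hoff step
print(w)  # approaches [a_true, b_true, 0]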



Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Example 3

Example 3: Design a neural network to approximate the following data.



Chapter 2: Single-layer Feedforward Neural Network
4. ADALINE / Comment

The ADALINE is good for approximation applications (with or without noise).

The effectiveness of the neural network depends mainly on the characteristics of the problem, the amount of data acquired (the larger, the better), and the "intelligence" of the designer. Overdesign (a structure richer than strictly necessary) can be employed.



Chapter 2: Single-layer Feedforward Neural Network

END OF CHAPTER 2

