
RBF, ELM and PNN

Neural Networks
2020
Dr. Mohamed Waleed Fakhr
Radial Basis Function (RBF) Model

f(x) = Σ_{i=1}^{m} w_i φ_i(x)

[Figure: RBF network architecture. The input feature vector x = (x1, x2, …, xn) feeds m hidden units φ1, φ2, …, φm (decomposition, feature extraction, transformation); their outputs are combined through weights w1, w2, …, wm into a linearly weighted output y.]
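To make the model concrete, here is a minimal sketch of the forward pass f(x) = Σ_{i=1}^{m} w_i φ_i(x) in NumPy with Gaussian hidden units; the centers, width, and weights below are hypothetical values for illustration, not taken from the lecture:

```python
import numpy as np

def rbf_forward(x, centers, sigma, weights):
    """Evaluate f(x) = sum_i w_i * phi_i(x) with Gaussian hidden units."""
    # r_i = ||x - x_i||: distance from the input to each center
    r = np.linalg.norm(centers - x, axis=1)
    # phi_i(x) = exp(-r_i^2 / (2 sigma^2)): Gaussian basis activations
    phi = np.exp(-r**2 / (2 * sigma**2))
    # Linearly weighted output of the hidden units
    return weights @ phi

# Hypothetical example: n = 2 inputs, m = 3 hidden units
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
weights = np.array([1.0, -0.5, 0.5])
y = rbf_forward(np.array([0.0, 0.0]), centers, sigma=1.0, weights=weights)
```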
RBF Networks as Universal Approximators

f(x) = Σ_{i=1}^{m} w_i φ_i(x)

With a sufficient number of radial-basis-function units, the RBF network is also a universal approximator.
Non-Linear Models: we adapt the weights and the RBF parameters

f(x) = Σ_{i=1}^{m} w_i φ_i(x)

Here both the weights w_i and the parameters of the basis functions φ_i (centers and widths) are adapted.
Radial Basis Functions

Three parameters for a radial function φ_i(x) = φ(||x − x_i||):
⚫ Center x_i
⚫ Distance measure r = ||x − x_i||
⚫ Shape σ

Gaussian: φ(r) = e^{−r²/(2σ²)},  with σ > 0 and r ∈ ℝ

Gaussian Basis Function (σ = 0.5, 1.0, 1.5)

[Figure: φ(r) plotted for σ = 0.5, 1.0, and 1.5 — the larger σ is, the wider the Gaussian bell.]
In Matlab

⚫ newrbe: uses all the training patterns as Gaussian centers and calculates the weights by solving a simple least-squares problem.
⚫ newrb: the design method of newrb is similar to that of newrbe; the difference is that newrb creates neurons one at a time.
⚫ At each iteration, the input vector that lowers the network error the most is used to create a neuron.
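The newrbe idea — every training pattern becomes a Gaussian center, and the output weights come from least squares — can be sketched as follows. This is an illustrative NumPy re-implementation, not MATLAB's actual code, and the toy data and σ are made up:

```python
import numpy as np

def newrbe_fit(X, t, sigma):
    """Exact RBF design in the spirit of newrbe: every training pattern is
    a Gaussian center; weights come from a linear least-squares solve."""
    # Phi[j, i] = phi_i(x_j) = exp(-||x_j - x_i||^2 / (2 sigma^2))
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
    Phi = np.exp(-d2 / (2 * sigma**2))
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w

def rbf_predict(Xq, X, w, sigma):
    # Evaluate the fitted network at query points Xq (centers are X)
    d2 = np.sum((Xq[:, None, :] - X[None, :, :])**2, axis=2)
    return np.exp(-d2 / (2 * sigma**2)) @ w

# With one center per pattern the network interpolates the training targets
X = np.array([[0.0], [1.0], [2.0]])
t = np.array([0.0, 1.0, 0.0])
w = newrbe_fit(X, t, sigma=0.7)
```

Because Φ is square and nonsingular for distinct patterns, the training error is zero — which is exactly why newrb's incremental design, stopping at an error goal, can use far fewer neurons.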
newrb vs newrbe

⚫ The error of the new network is checked; if it is low enough, newrb is finished. Otherwise the next neuron is added. This procedure is repeated until the error goal is met or the maximum number of neurons is reached.
⚫ We will do our own implementations of the RBF neural network and of the Extreme Learning Machine (ELM) neural network, which appeared in 2012.
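For contrast with the RBF designs above, a minimal sketch of the ELM training rule: the hidden-layer weights are random and never trained, and only the output weights are solved by least squares. The sigmoid activation and the sizes below are assumptions for illustration, not the implementation we will build in class:

```python
import numpy as np

def elm_fit(X, t, n_hidden, seed=0):
    """Extreme Learning Machine: random fixed hidden layer,
    output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input->hidden weights (frozen)
    b = rng.normal(size=n_hidden)                 # random hidden biases (frozen)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    beta, *_ = np.linalg.lstsq(H, t, rcond=None)  # only these weights are learned
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression: with more hidden units than samples, ELM fits the training data exactly
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
t = 3.0 * X[:, 0] - 2.0 * X[:, 1]
W, b, beta = elm_fit(X, t, n_hidden=50)
```

Note the structural similarity to newrbe: both reduce training to one least-squares solve; they differ only in how the hidden layer is chosen (training patterns as centers vs. random projections).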
