NN 02
Neural Networks
What Is a Neural Network?
An artificial neural network is a practical method for learning many kinds of functions, such as real-valued, discrete-valued, and vector-valued functions.
What Can a Neural Network Do?
• Computing a known function
• Pattern recognition
• Signal processing
• Learning
Problems Suited to Neural Network Learning
• The training data may contain errors, as in problems whose data come from noisy sensors.
• Instances are represented by many (attribute, value) pairs.
Problems Suited to Neural Network Learning
• Sufficient time for learning is available: compared with other methods such as decision trees, this approach needs more training time.
• Human interpretation of the target function is not required, since the learned weights are difficult to interpret.
Early learning algorithms
• Designed for single layer neural networks
• Generally more limited in their applicability
• Some of them are:
• Perceptron learning
• LMS or Widrow-Hoff learning
• Grossberg learning
The Perceptron
A type of neural network built from a computational unit called the perceptron. A perceptron takes a vector of real-valued inputs and computes a linear combination of them.
If the result exceeds a threshold, the perceptron outputs 1; otherwise it outputs -1 (or 0).
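A minimal sketch of this unit in Python (the function name, the weights, and the zero threshold are illustrative, not from the slides):

import numpy as np

def perceptron_output(w, b, p):
    """Threshold unit: +1 if the linear combination of the
    inputs plus the bias is positive, otherwise -1."""
    net = np.dot(w, p) + b          # linear combination of inputs
    return 1 if net > 0 else -1

# Hand-picked weights for a two-input perceptron.
w = np.array([0.5, 0.5])
print(perceptron_output(w, b=-0.25, p=np.array([1.0, 0.0])))  # -> 1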
Adding a Bias
Adding a bias makes the perceptron network easier to use.
So that no separate rule is needed to learn the bias, we treat the bias as an input fixed at 1 and assign it the weight b.
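A small sketch of this trick (variable names are illustrative): append a constant 1 to the input vector and fold b into the weight vector, so one learning rule updates weights and bias alike.

import numpy as np

def augment(p):
    """Append a constant 1 so the bias is learned like any other weight."""
    return np.append(p, 1.0)

w_aug = np.array([0.5, 0.5, -0.25])   # last entry plays the role of b
p = np.array([1.0, 0.0])
net = np.dot(w_aug, augment(p))       # identical to w.p + b
print(1 if net > 0 else -1)           # -> 1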
Perceptron Geometric View
The equation below describes a (hyper)plane in the input space of real-valued m-dimensional vectors. The plane splits the input space into two regions, each corresponding to one class.
[Figure: the decision boundary w1 p1 + w2 p2 + b = 0 in the (p1, p2) plane. The region where w1 p1 + w2 p2 + b > 0 is the decision region for class C1; the region where w1 p1 + w2 p2 + b < 0 belongs to class C2.]
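Solving the boundary equation for p2 makes the line explicit (simple algebra, assuming w2 ≠ 0):

$p_2 = -\frac{w_1}{w_2}\, p_1 - \frac{b}{w_2}$

so in the two-input case the boundary is a line with slope $-w_1/w_2$ and intercept $-b/w_2$, and the weight vector $(w_1, w_2)$ is orthogonal to it.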
Two-Input Case
Apple/Banana Sorter
Class Representation
Perceptron Network
Hamming Network
Hopfield Network
McCulloch-Pitts Perceptron
Apple/Banana Example
Testing the Network
XOR problem
A typical example of a non-linearly separable function is XOR. This function takes two input arguments with values in {-1, 1} and returns one output in {-1, 1}, as specified in the following table:
x1 x2 x1 ⊗ x2
-1 -1 -1
-1 1 1
1 -1 1
1 1 -1
[Figure: the four XOR points in the (x1, x2) plane; the two classes lie on opposite diagonals, so no single line separates them.]
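A brute-force check of this claim (purely illustrative; the weight grid is an assumption): scan candidate lines and verify that every one misclassifies at least one of the four XOR points.

import itertools
import numpy as np

points = [(-1, -1, -1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]  # (x1, x2, target)
grid = np.linspace(-2.0, 2.0, 41)    # candidate values for w1, w2, b

found = any(
    all((1 if w1*x1 + w2*x2 + b > 0 else -1) == t for x1, x2, t in points)
    for w1, w2, b in itertools.product(grid, grid, grid)
)
print("separating line found:", found)   # -> False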
Supervised Learning
The network is provided with a set of examples of proper network behavior (input/target pairs).
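In the notation these slides follow elsewhere (Hagan-style symbols, stated here as an assumption), the training set is

$\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$

where $p_q$ is an input to the network and $t_q$ is the corresponding correct (target) output.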
Perceptron Architecture (Again)
Single-Neuron Perceptron
Decision Boundary
Example - OR
OR Solution
[Figure: one OR solution with weights w1 = 0.5 and w2 = 0.5; the decision boundary and bias are shown in the original figure.]
Multiple-Neuron Perceptron
Each neuron will have its own decision boundary.
${}_i\mathbf{w}^T \mathbf{p} + b_i = 0$

where ${}_i\mathbf{w}$ denotes the i-th row of the weight matrix $\mathbf{W}$.
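A compact sketch (names are illustrative): with one row of W per neuron, all the boundaries can be evaluated at once.

import numpy as np

def multi_neuron_perceptron(W, b, p):
    """One output per neuron: +1 on the positive side of that
    neuron's boundary iw.p + b_i = 0, otherwise -1."""
    return np.where(W @ p + b > 0, 1, -1)

W = np.array([[0.5, 0.5],
              [1.0, -1.0]])          # one decision boundary per row
b = np.array([-0.25, 0.0])
print(multi_neuron_perceptron(W, b, np.array([1.0, 0.0])))  # -> [1 1]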
Starting Point
Tentative Learning Rule
Second Input Vector
Third Input Vector
Unified Learning Rule
Multiple-Neuron Perceptrons
Apple/Banana Example
Second Iteration
Check
Perceptron Rule Capability
Rosenblatt's single-layer perceptron is trained as follows:

$w_{ij}(k+1) = w_{ij}(k) + \eta\, p_i(k)\, e_j(k)$

where $\eta$ is the learning rate, $p_i$ is the i-th input, and $e_j$ is the error (target minus actual output) of the j-th output.
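A runnable sketch of this rule for a single neuron (the dataset, learning rate, and epoch count are illustrative):

import numpy as np

def train_perceptron(P, T, eta=1.0, epochs=10):
    """Rosenblatt rule: w <- w + eta * e * p, with e = t - a.
    P holds one input vector per row; T holds targets in {-1, +1}."""
    w = np.zeros(P.shape[1])
    b = 0.0
    for _ in range(epochs):
        for p, t in zip(P, T):
            a = 1 if np.dot(w, p) + b > 0 else -1
            e = t - a
            w = w + eta * e * p     # weight update
            b = b + eta * e         # bias treated as a weight on input 1
    return w, b

# Learn the OR function from the earlier slides.
P = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
T = np.array([-1, 1, 1, 1])
print(train_perceptron(P, T))       # converges to a separating line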
Example 1
Example 2
Perceptron Limitations
Hamming Network
Feedforward Layer
Recurrent Layer
Hamming Operation
Hopfield Network
Apple/Banana Problem
Summary
Perceptron
  Feedforward Network
  Linear Decision Boundary
  One Neuron for Each Decision
Hamming Network
  Competitive Network
  First Layer – Pattern Matching (Inner Product)
  Second Layer – Competition (Winner-Take-All)
  # Neurons = # Prototype Patterns
Hopfield Network
  Dynamic Associative Memory Network
  Network Output Converges to a Prototype Pattern
  # Neurons = # Elements in each Prototype Pattern
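A minimal sketch of the Hamming network as summarized above (the prototype patterns, the competition strength eps, and the iteration cap are illustrative assumptions):

import numpy as np

def hamming_network(prototypes, p, eps=0.2, max_iter=100):
    """First layer: inner product of the input with each prototype,
    plus R (the input length) so the scores start non-negative.
    Second layer: winner-take-all competition until one neuron survives."""
    W = np.asarray(prototypes, dtype=float)
    R = W.shape[1]
    a = W @ p + R                       # feedforward pattern-matching layer
    for _ in range(max_iter):           # recurrent competitive layer
        a_new = np.maximum(0.0, a - eps * (a.sum() - a))
        if np.array_equal(a_new, a):
            break
        a = a_new
    return int(np.argmax(a))            # index of the winning prototype

# Two bipolar prototypes; classify an input closer to the second one.
protos = [[1, -1, -1], [1, 1, -1]]
print(hamming_network(protos, np.array([1, 1, 1])))   # -> 1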