Artificial Neural Networks: Perceptron

The document discusses artificial neural networks and perceptrons. It provides background on neural networks, explaining that they are programming methods inspired by biological neural systems that are capable of learning patterns. It then discusses applications of neural networks like pattern recognition, classification, prediction and optimization. The document goes on to explain the biological foundations of neural networks in the human brain and nervous system. It introduces the basic concepts of artificial neurons, perceptrons, and McCulloch-Pitts neurons. Perceptrons are the basic processing units of neural networks, which take weighted inputs, sum them, and output a value based on an activation threshold. The document provides examples of how perceptrons operate in their training and use modes.

ARTIFICIAL NEURAL NETWORKS
PERCEPTRON
INTRODUCTION

• What are Neural Networks?
• Neural networks are a method of programming computers for learning.
• They are exceptionally good at pattern recognition and other tasks that are very difficult to program using conventional techniques.
• Programs that employ neural nets can also learn on their own and adapt to changing conditions.

ANN Oct 2017 2


ANN APPLICATIONS

• Pattern recognition: does this image contain a face?
• Classification: is this cell defective?
• Prediction: given these symptoms, the patient has disease X
• Forecasting: predicting the behavior of the stock market
• Handwriting recognition: which character was written?
• Optimization: find the shortest tour for the Travelling Salesman Problem (TSP)



ROOTS OF NEURAL NETWORKS
• Neurobiological studies:
  • How do nerves behave when stimulated by electric currents of different magnitudes?
  • Is there a minimal threshold needed for nerves to be activated?
  • How do different nerve cells communicate with one another?
• Psychological studies:
  • How do animals learn, forget, recognize and perform various types of tasks?
• Psycho-physical experiments help us understand how individual neurons and groups of neurons work.
• McCulloch and Pitts introduced the first mathematical model of a single neuron, widely applied in subsequent work.
BACKGROUND

• An Artificial Neural Network (ANN) is an information processing paradigm inspired by biological nervous systems, such as the human brain's information-processing mechanism.
• The key element of this paradigm is the novel structure of the information processing system.
• It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.
• ANNs, like people, learn by example.
• An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
• Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.
• This is true of ANNs as well.



BIOLOGICAL NEURONS
• The human information-processing system is built from neurons, the brain's basic building blocks.
• A neuron is a cell that communicates information to and from various parts of the body.
• Simplest model of a neuron: a threshold unit, i.e. a processing element (PE).
• It collects inputs and produces an output if the sum of the inputs exceeds an internal threshold value.
• There are about 86 billion neurons in a human brain.



HOW THE HUMAN BRAIN LEARNS

• In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites.
• The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.
• At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.



NEURON MODEL
• When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon.
• Learning occurs by changing the effectiveness of the synapses, so that the influence of one neuron on another changes.
• The outcome is classified as:
  • Firing: 1
  • Idle: 0
PERCEPTRON
THE BASIC ANN



ANN
• Many neuron-like processing elements (PEs).
• Input and output units receive signals from, and broadcast signals to, the environment, respectively.
• Internal units are called hidden units, since they are not in contact with the external environment.
• Units are connected by weighted links (synapses).
• A parallel computation system, because signals travel independently on weighted channels and units can update their state in parallel.
• However, most ANNs can be simulated on serial computers.



PROPERTIES OF ANN

• Output is discrete or real-valued.
• The learned parameters form a vector of values, called the weight vector.
• Robust to noisy data.



PERCEPTRON

• An artificial neuron is a device with many inputs and one output.
• The neuron has two modes of operation:
  • the training mode and
  • the using mode.



PERCEPTRON

• In the training mode, the neuron can be trained to fire (or not) for particular input patterns.
• In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output.
• If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not.
• The firing rule is an important concept in neural networks and accounts for their high flexibility.
• A firing rule determines how to calculate whether a neuron should fire for any input pattern.
• It relates to all input patterns, not only those on which the node was previously trained.
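The slide leaves the firing rule abstract. One common textbook formulation — a sketch, not necessarily the author's exact rule — fires for an untaught pattern when it is strictly nearer, in Hamming distance, to a taught firing pattern than to a taught idle one. All names here are illustrative:

```python
def hamming(a, b):
    """Number of positions where two equal-length 0/1 patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught):
    """Decide whether a node fires for `pattern`.

    `taught` maps input tuples to their taught outputs (0 or 1).
    Taught patterns are recalled directly; untaught patterns fire
    if they are strictly closer to a taught firing pattern than to
    a taught idle one (illustrative rule, assumed here).
    """
    if pattern in taught:
        return taught[pattern]
    d_fire = min(hamming(pattern, p) for p, out in taught.items() if out == 1)
    d_idle = min(hamming(pattern, p) for p, out in taught.items() if out == 0)
    if d_fire < d_idle:
        return 1
    if d_idle < d_fire:
        return 0
    return None  # tie: output left undefined in this sketch

# Taught on two patterns only; the rule generalises to the rest.
taught = {(1, 1, 1): 1, (0, 0, 0): 0}
```

This is how the rule "relates to all input patterns": patterns never seen in training still get an output, by similarity to the taught ones.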



MCP NEURON

• A more sophisticated neuron is known as the McCulloch and Pitts model (MCP).
• The difference is that in the MCP model the inputs are weighted, and the effect each input has on decision making depends on the weight of that particular input.
• The weight of an input is a number which is multiplied with the input to give the weighted input.



MCP NEURON

• The weighted inputs are then added together and, if they exceed a pre-set threshold value, the perceptron fires.
• Otherwise it does not fire, and the inputs tied to that perceptron have no effect on the decision making.
• In mathematical terms, the neuron fires if and only if:

    x1·w1 + x2·w2 + … + xn·wn > T
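The firing condition above translates directly into code. A minimal sketch, with illustrative names:

```python
def mcp_fire(inputs, weights, threshold):
    """McCulloch–Pitts style unit: fire (1) iff the weighted sum
    of the inputs exceeds the threshold T; otherwise stay idle (0)."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0
```

For example, `mcp_fire([1, 1], [0.5, 0.5], 0.5)` gives 1 (net = 1.0 > 0.5), while `mcp_fire([1, 0], [0.5, 0.5], 0.5)` gives 0 (net = 0.5, not above the threshold).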



MCP NEURON

• The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold.
• Various algorithms exist that cause the neuron to 'adapt'; the most used ones are the delta rule and back-propagation of error.
• An ANN learns a weight vector.
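The slide names the delta rule without detail. A minimal delta-style sketch — learning rate, epoch count, and fixed threshold are assumptions, and all names are illustrative — that learns a weight vector for the AND examples worked through later:

```python
def train(samples, n_inputs, lr=0.1, theta=0.5, epochs=50):
    """Learn a weight vector with a simple delta-style update:
    w_i += lr * (target - output) * x_i, threshold held fixed."""
    w = [0.0] * n_inputs
    for _ in range(epochs):
        changed = False
        for xs, target in samples:
            net = sum(x * wi for x, wi in zip(xs, w))
            y = 1 if net > theta else 0
            if y != target:
                for i, x in enumerate(xs):
                    w[i] += lr * (target - y) * x
                changed = True
        if not changed:  # converged: every sample classified correctly
            break
    return w

# AND-gate training data: fire only when both inputs are 1
and_samples = [((1, 1), 1), ((0, 1), 0), ((1, 0), 0), ((0, 0), 0)]
w = train(and_samples, 2)
```

After training, the learned weights separate the firing case (1, 1) from the three idle cases under the fixed threshold of 0.5.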
PERCEPTRON

[Perceptron diagram slides]
PERCEPTRON FIRING

Inputs x1 and x2, with weights w1 and w2, feed a summation unit ∑ followed by an activation function φ:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ θ, 1 if net > θ


PERCEPTRON FIRING

Set x1 = 1 with w1 = 0.5, and x2 = 1 with w2 = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ θ, 1 if net > θ


PERCEPTRON FIRING
Let θ = 0.5. With x1 = 1, x2 = 1 and w1 = w2 = 0.5:

    net = 1 × 0.5 + 1 × 0.5 = 0.5 + 0.5 = 1


PERCEPTRON FIRING
With x1 = 1, x2 = 1 and w1 = w2 = 0.5:

    net = 0.5 + 0.5 = 1 > θ = 0.5, so y = φ(net) = 1

The perceptron has fired.


PERCEPTRON FIRING

With the threshold fixed at θ = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ 0.5, 1 if net > 0.5


PERCEPTRON FIRING

Set x1 = 0 with w1 = 0.5, and x2 = 1 with w2 = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ 0.5, 1 if net > 0.5


PERCEPTRON FIRING
With x1 = 0, x2 = 1 and w1 = w2 = 0.5:

    net = 0 × 0.5 + 1 × 0.5 = 0 + 0.5 = 0.5


PERCEPTRON FIRING
With x1 = 0, x2 = 1 and w1 = w2 = 0.5:

    net = 0 + 0.5 = 0.5 ≤ 0.5, so y = φ(net) = 0

The perceptron has NOT fired.


PERCEPTRON FIRING

With the threshold fixed at θ = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ 0.5, 1 if net > 0.5


PERCEPTRON FIRING

Set x1 = 1 with w1 = 0.5, and x2 = 0 with w2 = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ 0.5, 1 if net > 0.5


PERCEPTRON FIRING
With x1 = 1, x2 = 0 and w1 = w2 = 0.5:

    net = 1 × 0.5 + 0 × 0.5 = 0.5 + 0 = 0.5


PERCEPTRON FIRING
With x1 = 1, x2 = 0 and w1 = w2 = 0.5:

    net = 0.5 + 0 = 0.5 ≤ 0.5, so y = φ(net) = 0

The perceptron has NOT fired.


PERCEPTRON FIRING

With the threshold fixed at θ = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ 0.5, 1 if net > 0.5


PERCEPTRON FIRING

Set x1 = 0 with w1 = 0.5, and x2 = 0 with w2 = 0.5:

    net = Σ xi·wi (i = 1 … n)

    y = φ(net) = 0 if net ≤ 0.5, 1 if net > 0.5


PERCEPTRON FIRING
With x1 = 0, x2 = 0 and w1 = w2 = 0.5:

    net = 0 × 0.5 + 0 × 0.5 = 0 + 0 = 0


PERCEPTRON FIRING
With x1 = 0, x2 = 0 and w1 = w2 = 0.5:

    net = 0 + 0 = 0 ≤ 0.5, so y = φ(net) = 0

The perceptron has NOT fired.


PERCEPTRON FIRING

x1  x2  y
1   1   1
0   1   0
1   0   0
0   0   0

w1 = 0.5, w2 = 0.5, θ = 0.5

The concept learned is the AND gate.

