
Lecture 8 - Artificial Neural Networks

The document discusses the concept of agents in artificial intelligence, focusing on rational agents that perceive their environment and act to maximize performance based on their experiences. It introduces Artificial Neural Networks (ANN), which are inspired by the human brain's structure and function, and explains the learning process through Hebb's Rule. Additionally, it covers the architecture of neural networks, types of neural networks, and their applications in pattern classification.


BUSINESS INTELLIGENCE AND ANALYTICS
LECTURE 8: ARTIFICIAL NEURAL NETWORK (ANN)
Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
Rational agents
An agent is an entity that perceives and acts
 This course is about designing rational agents
 Abstractly, an agent is a function from percept histories to actions:
   f: P* → A
 For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance
 Caveat: computational limitations make perfect rationality unachievable
  design the best program for the given machine resources

Agents and environments

 The agent function maps from percept histories to actions:
   f: P* → A
 The agent program runs on the physical architecture to produce f
 agent = architecture + program
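The "agent = architecture + program" idea can be sketched in code. This is a minimal illustration only, assuming a toy two-location vacuum world (the location names, percept format, and action strings are invented for the example, not taken from the slides):

```python
# Sketch of an agent function f: P* -> A in a hypothetical vacuum world.
# A percept is a (location, status) pair; the history is a list of percepts.

def agent_program(percept_history):
    """Map the percept history to an action (here, only the latest percept matters)."""
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    # Move to the other square when the current one is clean
    return "Right" if location == "A" else "Left"

print(agent_program([("A", "Dirty")]))  # -> Suck
print(agent_program([("B", "Clean")]))  # -> Left
```

The "architecture" here is simply the Python runtime that executes the program and feeds it percepts.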
Rational agents
 An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful
 Performance measure: an objective criterion for success of an agent's behavior
 E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, and the amount of noise generated

Rational agents

 Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Rational agents
 Rationality is distinct from omniscience (all-knowing with infinite knowledge)
 Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)
 An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)
Introduction to ANN
1. Has matured over the years
2. Has now become more feasible because of the high computing power available today
3. Has high application potential
Introduction
 ANN is derived from the human brain and nervous system
 The human brain consists of massive parallel interconnections of neurons
 The brain allows us to achieve perceptual and recognition tasks in a short period of time
 This inspired researchers to find out whether there was a way to mimic the nervous system in the form of a machine
Introduction

 The brain is a massively parallel network of neurons
 It contains 10 billion neurons and 60 trillion interconnections between the neurons
 The fast processing speed of a human lies in this massive parallelism
 It is, however, not possible to mimic all these processing units in a machine
 The best we can do is mimic a very small part of it, and only for a very specific task
Biological Neuron
1. Axon: acts as a transmission line, carrying the neuron's electrical output signal
2. Synapses: junctions at the axon's tree-like terminal branches that make connections with other neurons; they carry the output of the current neuron onward
3. Dendrites: tree-like input structures that receive signals from the synapses of preceding neurons; they hold the input of the current neuron
4. Signal strength: the neuron decides on its response/output signal based on the strength of the input signals
A Simple Model of a Neuron
(Perceptron)
[Diagram: inputs y1, y2, y3, …, yi with weights w1j, w2j, w3j, …, wij feeding a summation unit that produces output O]
• Each neuron has a threshold value
• Each neuron has weighted inputs from other neurons
• The input signals form a weighted sum
• If the activation level exceeds the threshold, the neuron "fires"
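The threshold neuron described above can be sketched in a few lines of Python. The weights and threshold below are illustrative values, not taken from the slides:

```python
# Minimal sketch of the perceptron model: weighted sum + threshold.

def perceptron(inputs, weights, threshold):
    """Return 1 if the neuron 'fires' (weighted sum exceeds threshold), else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

print(perceptron([1, 0, 1], [0.5, 0.4, 0.3], 0.6))  # fires: 0.8 > 0.6 -> 1
print(perceptron([0, 1, 0], [0.5, 0.4, 0.3], 0.6))  # quiet: 0.4 < 0.6 -> 0
```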
 Several key features of the processing elements in an ANN are suggested by the properties of biological neurons, i.e.
 The processing element receives many signals
 Signals may be modified by a weight at the receiving synapse
 The processing element sums the weighted inputs
 With sufficient input, the neuron transmits a single output
Artificial Neural Networks
 The topology of a NN refers to its framework as well as its interconnection scheme.
 The framework is specified by layers:
 Input layer
 Hidden layer
 Output layer
Supervised Training
 Both the inputs and the outputs are provided
 The network then processes the inputs and compares its resulting outputs against the desired outputs
 Errors are then propagated back through the system, causing the system to adjust the weights which control the network
 This process occurs over and over as the weights are continually tweaked
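The supervised loop described above can be sketched as a simple perceptron-style training procedure: compute the output, compare it to the target, and feed the error back into the weights, over and over. This is an illustrative sketch (the step activation, learning rate, and AND dataset are assumptions for the example, not from the slides):

```python
# Sketch of supervised training: repeatedly tweak weights from the error.

def step(s):
    return 1 if s >= 0 else 0

def train(samples, weights, bias, lr=1, epochs=20):
    for _ in range(epochs):
        for inputs, target in samples:
            output = step(sum(x * w for x, w in zip(inputs, weights)) + bias)
            error = target - output            # desired output minus actual output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error                 # weights/bias are continually tweaked
    return weights, bias

# Learn logical AND from labelled (input, target) examples
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data, [0, 0], 0)
print([step(sum(x * wi for x, wi in zip(xs, w)) + b) for xs, _ in data])  # -> [0, 0, 0, 1]
```

Since AND is linearly separable, this loop converges within a few passes over the data.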
TYPES OF NEURAL NETWORKS
APPLICATION OF NEURAL NETWORKS
Pattern Classification by Artificial Neural Networks
Examples
Artificial Neural Networks Classification Model
 Pattern classification
 Each input vector (pattern) either belongs or does not belong to a class
 Example: a single class membership
 Membership (response 1)
 Non-membership (response -1 or 0)
Hebb's Rule
Hebb's Rule describes how, when a cell persistently activates another nearby cell, the connection between the two cells becomes stronger.
Specifically, when neuron A's axon repeatedly takes part in firing neuron B, a growth process occurs that increases how effective neuron A is in activating neuron B.
As a result, the connection between those two neurons is strengthened over time.
Hebb's Rule
 The first, and undoubtedly the best known, learning rule was introduced by Donald Hebb. The description appeared in his book The Organization of Behavior in 1949.
 The theory is often summarized as: "Cells that fire together, wire together." It attempts to explain "associative learning", in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells.
Hebb's Rule

Hebbian Learning Rule Algorithm:

1. Set all weights to zero, wi = 0 for i = 1 to n, and the bias to zero.
2. For each training pair, s (input vector) : t (target output), repeat steps 3-5.
3. Set the activations of the input units to the input vector: xi = si for i = 1 to n.
4. Set the corresponding output value of the output neuron, i.e. y = t.
5. Update the weights and bias by applying the Hebb rule for all i = 1 to n:
   wi(new) = wi(old) + xi·t   (i = 1 … n),   b(new) = b(old) + t
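The update rule above is a single line of code. The sketch below applies one Hebb update, treating the bias as a weight whose input is the constant 1 (the sample values are the first pattern of the worked example):

```python
# One Hebb update for a single (input vector, target) pair:
# w_i(new) = w_i(old) + x_i * t

def hebb_update(weights, x, t):
    return [w + xi * t for w, xi in zip(weights, x)]

# x = [-1, -1, 1] (last component is the bias input), t = -1
print(hebb_update([0, 0, 0], [-1, -1, 1], -1))  # -> [1, 1, -1]
```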
Example 1

There are 4 training samples, so there will be 4 iterations.


Also, the activation function used here is Bipolar Sigmoidal
Function so the range is [-1,1].
Example 1
Step 1 :
Set weight and bias to zero, w = [ 0 0 0 ]T and
b = 0.
Step 2 :
Set input vector Xi = Si for i = 1 to 4.
X1 = [ -1 -1 1 ]T
X2 = [ -1 1 1 ]T
X3 = [ 1 -1 1 ]T
X4 = [ 1 1 1 ] T
Step 3 :
Output value is set to y = t.
Example 1
Step 4 :
Modify the weights using the Hebbian rule.
First iteration –
w(new) = w(old) + x1·t1 = [ 0 0 0 ]T + [ -1 -1 1 ]T · ( -1 ) = [ 1 1 -1 ]T
For the second iteration, the final weights of the first are used, and so on.
Second iteration –
w(new) = [ 1 1 -1 ]T + [ -1 1 1 ]T · ( -1 ) = [ 2 0 -2 ]T
Third iteration –
w(new) = [ 2 0 -2 ]T + [ 1 -1 1 ]T · ( -1 ) = [ 1 1 -3 ]T
Fourth iteration –
w(new) = [ 1 1 -3 ]T + [ 1 1 1 ]T · ( 1 ) = [ 2 2 -2 ]T
Example 1

 For x1 = -1, x2 = -1, b = 1: Y = (-1)(2) + (-1)(2) + (1)(-2) = -6
 For x1 = -1, x2 = 1, b = 1: Y = (-1)(2) + (1)(2) + (1)(-2) = -2
 For x1 = 1, x2 = -1, b = 1: Y = (1)(2) + (-1)(2) + (1)(-2) = -2
 For x1 = 1, x2 = 1, b = 1: Y = (1)(2) + (1)(2) + (1)(-2) = 2
The results are all compatible with the original table.
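The whole worked example can be reproduced in a few lines. This is a sketch in plain Python, with the third component of each input vector serving as the constant bias input:

```python
# Reproduce Example 1: four Hebb updates on the bipolar training patterns.

samples = [([-1, -1, 1], -1),
           ([-1,  1, 1], -1),
           ([ 1, -1, 1], -1),
           ([ 1,  1, 1],  1)]

w = [0, 0, 0]                      # Step 1: weights (and bias) start at zero
for x, t in samples:               # Steps 2-5: one Hebb update per pattern
    w = [wi + xi * t for wi, xi in zip(w, x)]

print(w)                           # -> [2, 2, -2], matching the fourth iteration

# Test each pattern: the sign of the weighted sum matches the target
for x, t in samples:
    y = sum(xi * wi for xi, wi in zip(x, w))
    print(x, y)                    # -6, -2, -2, 2 as computed above
```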
Example 1
AND (bipolar inputs; x0 = 1 is the bias input)

Inputs          Target   Change in wi = xi·t   New wi
x1  x2  x0      t        Δw1  Δw2  Δw0         w1  w2  w0
                                                0   0   0
 1   1   1       1         1    1    1          1   1   1
 1  -1   1      -1        -1    1   -1          0   2   0
-1   1   1      -1         1   -1   -1          1   1  -1
-1  -1   1      -1         1    1   -1          2   2  -2
Example 1
 Final weights
 Test with inputs

  w1  w2  w0
   2   2  -2
Example 2

 X and O

X:            O:
# . . . #     # # #
. # . # .     # . #
. . # . .     # . . #
. # . # .     # . . #
# . . . #     # # #
Example 3

 J and P

J:            P:
# # # # #     # # # # .
. . . # .     # . . . #
. . . # .     # # # # .
. . . # .     # . . . .
# # # . .     # . . . .
