
Artificial Intelligence

Part 2: Artificial neural networks (II)

Dr. Adham Alsharkawi


Academic year 2023-2024
Can a single neuron learn a task?
In 1958, Frank Rosenblatt introduced a training algorithm that
provided the first procedure for training a simple ANN: a
perceptron (Rosenblatt, 1958).

The perceptron is the simplest form of a neural network. It
consists of a single neuron with adjustable weights and a hard
limiter. A single-layer two-input perceptron is shown next.

The perceptron
The operation of Rosenblatt’s perceptron is based on the neuron
model previously discussed. The model consists of a linear
combiner followed by a hard limiter.

The weighted sum of the inputs is applied to the hard limiter,
which produces an output equal to +1 if its input is positive
and −1 if it is negative.
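This linear-combiner-plus-hard-limiter model can be sketched in a few lines of Python (the function and variable names here are illustrative, not from the slides):

```python
def hard_limiter(x):
    """Sign-type activation: +1 for positive input, -1 otherwise."""
    return 1 if x > 0 else -1

def perceptron_output(inputs, weights, theta):
    """Linear combiner followed by the hard limiter: the weighted sum of
    the inputs, minus the threshold theta, is passed through the limiter."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) - theta
    return hard_limiter(weighted_sum)
```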

The aim of the perceptron is to classify inputs, or in other
words externally applied stimuli 𝑥1, 𝑥2, …, 𝑥𝑛, into one of two
classes, say 𝐴1 and 𝐴2.

Thus, in the case of an elementary perceptron, the n-
dimensional space is divided by a hyperplane into two
decision regions.

The hyperplane is defined by the linearly separable function:

∑ᵢ₌₁ⁿ 𝑥𝑖𝑤𝑖 − 𝜃 = 0
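As a sketch, a point can be assigned to one of the two classes according to the sign of this expression (the function name and class labels are illustrative):

```python
def classify(point, weights, theta):
    """Assign class A1 if the point lies on the positive side of the
    hyperplane sum(x_i * w_i) - theta = 0, and A2 otherwise."""
    s = sum(x * w for x, w in zip(point, weights)) - theta
    return "A1" if s > 0 else "A2"
```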

For the case of two inputs, 𝑥1 and 𝑥2, the decision boundary
takes the form of a straight line shown in bold in the figure
below.

Point 1, which lies above the boundary line, belongs to
class 𝐴1; and point 2, which lies below the line, belongs
to class 𝐴2. The threshold 𝜃 can be used to shift the
decision boundary.
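Setting the weighted sum to zero and solving for 𝑥2 gives that boundary line directly; a small sketch (assuming 𝑤2 ≠ 0, with illustrative names):

```python
def boundary_x2(x1, w1, w2, theta):
    """For two inputs the boundary x1*w1 + x2*w2 - theta = 0 is a
    straight line; solve it for x2 given x1 (assumes w2 != 0)."""
    return (theta - w1 * x1) / w2
```

Increasing 𝜃 shifts this line away from the origin, which is how the threshold moves the decision boundary.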

With three inputs the hyperplane can still be visualised.

But how does the perceptron learn its classification tasks?

This is done by making small adjustments in the weights to
reduce the difference between the actual and desired outputs of
the perceptron.

The initial weights are randomly assigned, usually in the range
[−0.5, 0.5], and then updated to obtain an output consistent
with the training examples.

The perceptron training algorithm for classification tasks

Perceptron learning rule:

𝑤𝑖(𝑝 + 1) = 𝑤𝑖(𝑝) + 𝛼 · 𝑥𝑖(𝑝) · 𝑒(𝑝)

where 𝑒(𝑝) is the error (the desired output minus the actual
output) at iteration 𝑝, and 𝛼 is the learning rate, a positive
constant less than unity.
Can we train a perceptron to perform basic logical
operations such as AND, OR or Exclusive-OR?

Let us consider the operation AND.

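The steps above can be put together into a small training loop for AND (a sketch: the 0/1 step activation is used here so outputs match the AND truth table, and the initial weight range, threshold update, learning rate, and epoch count are illustrative assumptions):

```python
import random

def hard_limit(s):
    # 0/1 step activation so the outputs match the AND truth table
    return 1 if s >= 0 else 0

def train_perceptron(examples, alpha=0.1, epochs=50, seed=0):
    """Train a single perceptron; examples is a list of (inputs, desired)."""
    rng = random.Random(seed)
    n = len(examples[0][0])
    # initial weights and threshold drawn randomly in the range [-0.5, 0.5]
    weights = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    theta = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for inputs, desired in examples:
            actual = hard_limit(
                sum(x * w for x, w in zip(inputs, weights)) - theta)
            error = desired - actual
            # perceptron learning rule; theta is adjusted like a
            # weight attached to a constant input of -1
            weights = [w + alpha * x * error for w, x in zip(weights, inputs)]
            theta = theta - alpha * error
    return weights, theta

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, t = train_perceptron(AND)
```

Because AND is linearly separable, the loop settles on weights and a threshold that reproduce the full truth table.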
