
Artificial Intelligence

Part 2: Artificial neural networks (I)

Dr. Adham Alsharkawi


Academic year 2023-2024
How do artificial neural nets model the brain?
An artificial neural network consists of a number of very simple
and highly interconnected processors, also called neurons, which
are analogous to the biological neurons in the brain.

The neurons are connected by weighted links passing signals
from one neuron to another.
[Figure: network of neurons connected by weighted links]

Each neuron receives a number of input signals through its
connections; however, it never produces more than a single
output signal.

The output signal is transmitted through the neuron’s outgoing
connection.

The outgoing connection, in turn, splits into a number of
branches that transmit the same signal (the signal is not divided
among these branches in any way).

The outgoing branches terminate at the incoming connections
of other neurons in the network.

How does an artificial neural network ‘learn’?
The neurons are connected by links, and each link has a
numerical weight associated with it. Weights are the basic
means of long-term memory in ANNs.

They express the strength, or importance, of each neuron input.

A neural network ‘learns’ through repeated adjustments of these weights.

Does the neural network know how to adjust the
weights?
A typical ANN is made up of a hierarchy of layers, and the neurons in the network are arranged along these layers.

The neurons connected to the external environment form the input and output layers. The weights are modified to bring the network's input/output behaviour into line with that of the environment.

Each neuron is an elementary information-processing unit. It
has a means of computing its activation level given the inputs
and numerical weights.

To build an artificial neural network, we must decide first how many neurons are to be used and how the neurons are to be connected to form a network. In other words, we must first choose the network architecture.

Then we decide which learning algorithm to use. Finally, we train the neural network, that is, we initialise the weights of the network and update them from a set of training examples.
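As a rough sketch of this recipe (not from the lecture), the snippet below trains a single threshold neuron on a tiny, made-up data set. The perceptron-style weight update is just one possible learning rule, chosen here for illustration; the learning rate, epoch count and threshold value are arbitrary.

```python
import random

def train_single_neuron(examples, n_inputs, theta=0.5, rate=0.1, epochs=20):
    """Initialise the weights, then repeatedly adjust them from training examples."""
    random.seed(0)                                                   # reproducible illustration
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]   # step 1: initialise
    for _ in range(epochs):                                          # step 2: repeated adjustment
        for inputs, desired in examples:                             # desired output is +1 or -1
            net = sum(x * w for x, w in zip(inputs, weights))        # net weighted input
            actual = 1 if net >= theta else -1                       # sign activation
            error = desired - actual
            # Perceptron-style update (an illustrative choice of learning rule)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Made-up training set: the logical AND of two inputs, with outputs coded as +1/-1
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], +1)]
print(train_single_neuron(data, n_inputs=2))
```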

Let us begin with a neuron, the basic building element of an ANN.

The neuron as a simple computing element
A neuron receives several signals from its input links, computes
a new activation level and sends it as an output signal through
the output links.

The input signal can be raw data or outputs of other neurons.


The output signal can be either a final solution to the problem
or an input to other neurons.

How does the neuron determine its output?

The neuron computes the weighted sum of the input signals and
compares the result with a threshold value, θ.

If the net input is less than the threshold, the neuron output
is -1.

But if the net input is greater than or equal to the threshold, the
neuron becomes activated and its output attains a value of +1.

In other words, the neuron uses the following transfer or
activation function:
$$
X = \sum_{i=1}^{n} x_i w_i
$$

$$
Y = \begin{cases} +1 & \text{if } X \ge \theta \\ -1 & \text{if } X < \theta \end{cases}
$$

where $X$ is the net weighted input to the neuron, $x_i$ is the value of input $i$, $w_i$ is the weight of input $i$, $n$ is the number of neuron inputs, and $Y$ is the output of the neuron.
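As a minimal sketch (not from the lecture), this threshold neuron can be written directly from the two formulas above; the inputs, weights and threshold used below are made-up numbers.

```python
def neuron_output(inputs, weights, theta):
    """Sign-activation neuron: compare the net weighted input X with the threshold theta."""
    x = sum(x_i * w_i for x_i, w_i in zip(inputs, weights))   # X = sum of x_i * w_i
    return +1 if x >= theta else -1                           # Y = +1 if X >= theta, else -1

# Illustrative values only: two inputs, two weights, threshold 0.2
print(neuron_output([1, 0], [0.5, -0.3], theta=0.2))   # X = 0.5  >= 0.2  ->  +1
print(neuron_output([0, 1], [0.5, -0.3], theta=0.2))   # X = -0.3 <  0.2  ->  -1
```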

This type of activation function is called a sign function. Thus
the actual output of the neuron with a sign activation function
can be represented as:

$$
Y = \operatorname{sign}\!\left(\sum_{i=1}^{n} x_i w_i - \theta\right)
$$

Is the sign function the only activation function used by
neurons?
Many activation functions have been tested, but only a few
have found practical applications.

The step and sign activation functions, also called hard limit
functions, are often used in decision-making neurons for
classification and pattern recognition tasks.

The sigmoid function transforms the input, which can have any value between plus and minus infinity, into a reasonable value in the range between 0 and 1. Neurons with this function are used in back-propagation networks.

The linear activation function provides an output equal to the neuron's weighted input. Neurons with the linear function are often used for linear approximation.
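As a small sketch (not from the lecture), the four activation functions mentioned above can be written directly from their definitions; the net-input value used at the end is arbitrary.

```python
import math

def step(x, theta=0.0):
    """Hard-limit step function: 1 once the net input reaches the threshold, else 0."""
    return 1 if x >= theta else 0

def sign(x, theta=0.0):
    """Hard-limit sign function: +1 once the net input reaches the threshold, else -1."""
    return +1 if x >= theta else -1

def sigmoid(x):
    """Squashes any real-valued input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Output equals the neuron's net weighted input."""
    return x

net_input = 0.8  # arbitrary example value
print(step(net_input), sign(net_input), round(sigmoid(net_input), 2), linear(net_input))
# -> 1 1 0.69 0.8
```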
