Introduction To Neural Networks - Single Layer Perceptrons - Modified

The document introduces artificial neural networks, focusing on the single-layer perceptron, which consists of an input layer and an output node. It explains how weights are adjusted during learning to minimize prediction errors and discusses the limitations of single-layer perceptrons in handling complex, non-linear problems. Additionally, it highlights the advantages of multilayer neural networks, which can model intricate patterns through hidden layers and feed-forward architectures.


Introduction to neural networks –

Single layer perceptron

Mr. Sivadasan E T
Associate Professor
Vidya Academy of Science and Technology, Thrissur
Introduction to Artificial Neural networks
The human nervous system contains cells (nodes), which are referred to as neurons.

The neurons are connected to one another with the use of axons (outputs) and dendrites (inputs).

The connecting regions between axons and dendrites are referred to as synapses (weights).
Biological and Artificial Neural Networks
Introduction to Artificial Neural networks
Neural Network Architecture
Introduction to Artificial Neural networks

An artificial neural network calculates a function of the inputs by passing values from the input neurons to the output neuron(s).

The intermediate parameters in the network are called weights.
Introduction to Artificial Neural networks

Learning occurs by changing the weights connecting the neurons.

External stimuli are needed for learning in biological organisms.

The external stimulus in artificial neural networks is provided by the training data, which contains examples of input-output pairs of the function to be learned.
Introduction to Artificial Neural networks
The weights between neurons are adjusted repeatedly for each input-output pair.

With each adjustment, the function computed by the neural network improves over time.

As a result, the network improves at mapping inputs to outputs.
Introduction to Artificial Neural networks
The weights between neurons are adjusted in response to prediction errors.

This ability to accurately compute functions of unseen inputs by training over a finite set of input-output pairs is referred to as model generalization.
Introduction to Artificial Neural networks

Deep learners become more attractive than conventional methods primarily when sufficient data and computational power are available.
Single Computational Layer: The Perceptron

The simplest neural network is referred to as the perceptron. This neural network contains a single input layer and an output node.
Introduction to Artificial Neural networks

The input layer contains d nodes that transmit the d features X = [x1 . . . xd], with edges of weight W = [w1 . . . wd], to an output node.

The input layer does not perform any computation on its own.
Linear function at the node

The linear function y = W · X = w1 x1 + . . . + wd xd is computed at the output node.
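As a minimal sketch of this step (the weights and features below are made up for illustration), the output node simply takes the weighted sum of the inputs:

```python
def linear_output(weights, inputs):
    """Weighted sum computed at the perceptron's output node: y = W . X."""
    return sum(w * x for w, x in zip(weights, inputs))

# Hypothetical weights W = [w1, w2, w3] and features X = [x1, x2, x3]
W = [0.5, -1.0, 2.0]
X = [1.0, 2.0, 0.5]
print(linear_output(W, X))  # 0.5 - 2.0 + 1.0 = -0.5
```

The input layer contributes nothing here beyond supplying X; all the computation lives in this single weighted sum.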
Bias

If there is an error in the prediction made by this function, a bias b can be added to the output values to obtain the true values.

The neural network would then compute the function y = f(x) + b, which shifts all predictions of the neural network by the constant b.
Goal of perceptron

The goal of the perceptron algorithm is to minimize the error in prediction.
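The slides do not spell out the update rule, but the classic perceptron learning rule adjusts the weights by w ← w + η(t − y)x after each misclassified example. A minimal sketch, training on the linearly separable AND function, with the bias folded in as an extra input fixed at 1 (learning rate and data are illustrative choices):

```python
def predict(w, x):
    # Step activation: fire (1) when the weighted sum is positive.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train_perceptron(data, lr=0.1, max_epochs=50):
    # One weight per feature plus a bias weight (its input is fixed at 1).
    w = [0.0] * (len(data[0][0]) + 1)
    for _ in range(max_epochs):
        updated = False
        for features, target in data:
            x = [1.0] + list(features)       # prepend the bias input
            error = target - predict(w, x)   # 0 when the prediction is correct
            if error:
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:                      # converged: a full epoch with no errors
            break
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND)
print([predict(w, [1.0] + list(f)) for f, _ in AND])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop stops with zero training error.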
Limitations of single layer perceptron

A single-layer perceptron (SLP) is limited to solving only linearly separable problems and cannot handle complex, non-linear relationships.

It lacks hidden layers, which restricts its ability to model intricate patterns and make accurate predictions for more sophisticated tasks.

As a result, SLPs have limited expressiveness and generalization capabilities.
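XOR is the standard example of a problem no single-layer perceptron can solve: no single line separates {(0,1), (1,0)} from {(0,0), (1,1)}. With one hidden layer it becomes easy. A sketch with hand-picked (not learned) weights, where the hidden units compute OR and NAND and the output unit ANDs them together:

```python
def step(z):
    # Threshold activation: fire when the weighted sum is positive.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden unit 1 computes OR(x1, x2):  fires when x1 + x2 - 0.5 > 0
    h1 = step(x1 + x2 - 0.5)
    # Hidden unit 2 computes NAND(x1, x2): fires when -x1 - x2 + 1.5 > 0
    h2 = step(-x1 - x2 + 1.5)
    # Output unit computes AND(h1, h2):   fires when h1 + h2 - 1.5 > 0
    return step(h1 + h2 - 1.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Each individual unit is still a linear threshold, but composing them through a hidden layer yields a function no single such unit can represent.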
Multilayer Neural Networks

Multilayer neural networks contain more than one computational layer.

The perceptron contains an input and an output layer, of which the output layer is the only computation-performing layer.

The input layer transmits the data to the output layer, and all computations are completely visible to the user.
Multilayer Neural Networks

Multilayer neural networks contain multiple computational layers; the additional intermediate layers (between input and output) are referred to as hidden layers, because the computations they perform are not visible to the user.
Multilayer Neural Networks
This specific architecture of multilayer neural networks is referred to as a feed-forward network, because successive layers feed into one another in the forward direction, from input to output.

The default architecture of feed-forward networks assumes that all nodes in one layer are connected to those of the next layer.
Multilayer Neural Networks
The number of units in each layer is referred to as the dimensionality of that layer.
Multilayer Neural Networks
The basic architecture of a feed-forward
network with two hidden layers and a single
output layer.
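A minimal sketch of the forward pass through such a network (the layer sizes, weights, and sigmoid activation below are illustrative assumptions, not taken from the slides): every layer multiplies its input by a weight matrix, applies a nonlinearity, and feeds the result to the next layer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """Fully connected layer: every input feeds every unit of this layer."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # Feed-forward: the output of each layer is the input to the next.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Hypothetical network: 3 inputs -> 4 hidden -> 2 hidden -> 1 output.
net = [
    ([[0.1] * 3] * 4, [0.0] * 4),   # hidden layer 1 (dimensionality 4)
    ([[0.2] * 4] * 2, [0.1] * 2),   # hidden layer 2 (dimensionality 2)
    ([[0.3] * 2], [-0.1]),          # output layer (single node)
]
y = forward([1.0, 0.5, -0.5], net)
print(y)  # a single value in (0, 1)
```

Only the final list is visible to the user; the two intermediate activations are the "hidden" computations the slides describe.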
Multilayer Neural Networks

Neural networks are fundamentally more powerful than their building blocks, because the parameters of these models are learned jointly to create a highly optimized composition function.
Multilayer Neural Networks
At its most basic level, a neural network is a computational graph that performs compositions of simpler functions to provide a more complex function.

Much of the power of deep learning arises from the fact that repeated composition of multiple nonlinear functions has significant expressive power.
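One way to see why the nonlinearity matters: composing linear functions only ever yields another linear function, so stacking purely linear layers adds no expressive power, while inserting a nonlinearity between them does. A sketch with made-up coefficients:

```python
def f(x):   # first linear map
    return 2 * x + 1

def g(x):   # second linear map
    return -3 * x + 4

def gf(x):  # composition g(f(x)) collapses to a single linear map: -6x + 1
    return g(f(x))

def relu(x):
    return max(0.0, x)

def nonlinear_comp(x):
    # Interleaving ReLU with the same linear maps yields a piecewise-linear
    # function, which no single linear function can match everywhere.
    return relu(g(relu(f(x))))

print(gf(2))  # -11, identical to the single linear map -6*2 + 1
print([nonlinear_comp(x) for x in (-2, 0, 2)])
```

Repeating this composition many times is exactly what gives a deep network its expressive power over any one of its layers.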
Multilayer Neural Networks

Much of the power of deep learning arises from the fact that the repeated composition of certain types of functions increases the representation power of the network, and therefore reduces the parameter space required for learning.
Thank You!
