This document provides an introduction to neural networks, detailing their structure, function, and types of learning, including supervised, unsupervised, and reinforcement learning. It discusses the basic components of neural networks, such as neurons, synapses, and activation functions, and highlights the historical contributions of key figures in the field. Additionally, it includes practical assignments for programming neural networks and explores concepts like the perceptron and Bayes' theorem.


Chapter-1

Introduction to Neural Networks


by

Dr. Muhammad Ismail Khan

Department of Computer Engineering,


COMSATS University Islamabad, Attock Campus

WHAT IS A NEURAL NETWORK?

A neural network is a massively parallel distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:
1. Knowledge is acquired by the network from its environment
through a learning process.
2. Interneuron connection strengths, known as synaptic
weights, are used to store the acquired knowledge.
THE HUMAN BRAIN
THE NEURON
NEURON COMMUNICATION
Synapses, or nerve endings, are elementary structural and functional units that mediate the interactions between neurons. A presynaptic neuron transmits the signal toward a synapse, whereas a postsynaptic neuron transmits the signal away from the synapse. The transmission of information from one neuron to another takes place at the synapse, a junction where the terminal part of the axon contacts another neuron.
THRESHOLD FOR NEURON FIRING
NEURON MODEL
EFFECT OF BIASING
TYPES OF ACTIVATION FUNCTIONS
Rule 1. A signal flows along a link only in the direction defined by the arrow on the link.
Rule 2. A node signal equals the algebraic sum of all signals entering the pertinent node via the incoming links.
Rule 3. The signal at a node is transmitted to each outgoing link originating from that node, with the transmission being entirely independent of the transfer functions of the outgoing links.
A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links and is characterized by four properties:
1. Each neuron is represented by a set of linear synaptic links, an externally applied bias, and a possibly nonlinear activation link. The bias is represented by a synaptic link connected to an input fixed at +1.
2. The synaptic links of a neuron weight their respective input signals.
3. The weighted sum of the input signals defines the induced local field of the neuron in question.
4. The activation link squashes the induced local field of the neuron to produce an output.
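The four properties can be sketched directly in code. This is a minimal illustration, not code from the slides; the function names and numeric values are arbitrary. The weighted sum of inputs plus bias gives the induced local field v, and a threshold activation squashes v to the output.

```python
def induced_local_field(weights, inputs, bias):
    """Induced local field v = sum_i w_i * x_i + b (properties 2 and 3)."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def threshold(v):
    """Threshold (Heaviside) activation: squashes v to a binary output."""
    return 1 if v >= 0 else 0

def neuron_output(weights, inputs, bias, activation=threshold):
    """Property 4: the activation link applied to the induced local field."""
    return activation(induced_local_field(weights, inputs, bias))

# Illustrative values: v = 0.5*1.0 + (-0.2)*2.0 + 0.1 = 0.2, so the output is 1.
v = induced_local_field([0.5, -0.2], [1.0, 2.0], bias=0.1)
y = neuron_output([0.5, -0.2], [1.0, 2.0], bias=0.1)
```

Treating the bias as a synaptic weight attached to a fixed input of +1 (as in property 1) would fold `bias` into `weights` without changing the result.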
FEEDBACK
A recurrent neural network distinguishes itself from a feedforward neural network in that it has at least one feedback loop. For example, a recurrent network may consist of a single layer of neurons with each neuron feeding its output signal back to the inputs of all the other neurons. In the structure depicted in this figure, there are no self-feedback loops in the network; self-feedback refers to a situation where the output of a neuron is fed back to its own input.
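The single-layer recurrent structure described above can be sketched as follows. The weight values are illustrative; the zeroed diagonal of the feedback matrix enforces the no-self-feedback condition, since neuron i never receives its own output.

```python
def step(outputs, feedback_weights, external_inputs):
    """One synchronous update of a single-layer recurrent network
    with threshold activation and no self-feedback loops."""
    n = len(outputs)
    new_outputs = []
    for i in range(n):
        v = external_inputs[i]
        for j in range(n):
            if i != j:                       # skip self-feedback
                v += feedback_weights[i][j] * outputs[j]
        new_outputs.append(1 if v >= 0 else 0)
    return new_outputs

# Illustrative feedback weights (diagonal entries unused by step()).
W = [[0.0,  0.5, -1.0],
     [0.5,  0.0,  0.5],
     [-1.0, 0.5,  0.0]]
y = step([1, 0, 1], W, external_inputs=[0.0, 0.0, 0.0])
```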
• When an object of interest rotates, the image of the object as perceived by an observer usually changes in a corresponding way.
• In a coherent radar that provides amplitude as well as phase information about its surrounding environment, the echo from a moving target is shifted in frequency, due to the Doppler effect that arises from the radial motion of the target in relation to the radar.
• The utterance from a person may be spoken in a soft or loud voice, and in a slow or quick manner.
In order to build an object-recognition system, a radar target-recognition system, and a speech-recognition system for dealing with these phenomena, respectively, the system must be capable of coping with a range of transformations of the observed signal. Accordingly, a primary requirement of pattern recognition is to design a classifier that is invariant to such transformations.
TYPES OF LEARNING

(i) supervised learning, which requires the availability of a target or desired response for the realization of a specific input–output mapping by minimizing a cost function of interest;
(ii) unsupervised learning, the implementation of which relies on the provision of a task-independent measure of the quality of representation that the network is required to learn in a self-organized manner;
(iii) reinforcement learning, in which an input–output mapping is performed through the continued interaction of a learning system with its environment so as to minimize a scalar index of performance.
Supervised learning relies on the availability of a training sample of labeled examples, with each example consisting of an input signal (stimulus) and the corresponding desired (target) response. In practice, we find that the collection of labeled examples can be a costly and time-consuming undertaking.
• In light of these realities, there is a great deal of interest in another category of learning: semisupervised learning, which employs a training sample that consists of labeled as well as unlabeled examples. The challenge in semisupervised learning, discussed in a subsequent chapter, is to design a learning system that scales reasonably well for its implementation to be practically feasible when dealing with large-scale pattern-classification problems.

• Reinforcement learning lies between supervised learning and unsupervised learning. It operates through continuing interactions between a learning system (agent) and the environment. The learning system performs an action and learns from the response of the environment to that action. In effect, the role of the teacher in supervised learning is replaced by a critic that is integrated into the learning machinery.
Some questions for Assignment-1
Q-1: Write a Python program for a single neuron with 4 inputs and a threshold activation function. Provide explicit weights and bias to the neuron and observe its output.
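One possible sketch of Q-1 follows; the weights, bias, and inputs are arbitrary illustrative values, not prescribed by the assignment.

```python
# Q-1 sketch: a single neuron with 4 inputs and a threshold activation.
weights = [0.5, -0.6, 0.3, 0.1]   # illustrative values
bias = -0.2
inputs = [1.0, 0.0, 1.0, 1.0]

v = sum(w * x for w, x in zip(weights, inputs)) + bias   # induced local field
output = 1 if v >= 0 else 0                              # threshold activation
print(output)
```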

Q-2: Write a Python program for a single-layer neural network with 4 inputs and 3 neurons, where each input is connected to every output node. Provide explicit weights and biases and print the output when the activation function is
i. the threshold function;
ii. the sigmoid function.
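One possible sketch of Q-2; the weight matrix (one row per neuron), biases, and inputs are again arbitrary illustrative values.

```python
import math

# Q-2 sketch: 4 inputs fully connected to a single layer of 3 neurons.
weights = [[0.2, -0.5,  0.1, 0.4],
           [-0.3, 0.8, -0.2, 0.1],
           [0.5,  0.5, -0.9, 0.0]]
biases = [0.0, -0.1, 0.2]
inputs = [1.0, 0.5, -1.0, 2.0]

def fields(W, b, x):
    """Induced local fields, one per neuron."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def threshold_layer(v):
    return [1 if vi >= 0 else 0 for vi in v]

def sigmoid_layer(v):
    return [1.0 / (1.0 + math.exp(-vi)) for vi in v]

v = fields(weights, biases, inputs)
print("threshold:", threshold_layer(v))
print("sigmoid:  ", sigmoid_layer(v))
```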
Rosenblatt’s Perceptron
In the formative years of neural networks (1943–1958), several researchers stand out for their pioneering contributions:
• McCulloch and Pitts (1943), for introducing the idea of neural networks as computing machines.
• Hebb (1949), for postulating the first rule for self-organized learning.
• Rosenblatt (1958), for proposing the perceptron as the first model for learning with a teacher (i.e., supervised learning).
The perceptron is the simplest form of a neural network used for the classification of patterns said to be linearly separable (i.e., patterns that lie on opposite sides of a hyperplane). Basically, it consists of a single neuron with adjustable synaptic weights and bias. The algorithm used to adjust the free parameters of this neural network first appeared in a learning procedure developed by Rosenblatt (1958, 1962) for his perceptron brain model.
Indeed, Rosenblatt proved that if the patterns (vectors) used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes. The proof of convergence of the algorithm is known as the perceptron convergence theorem.
The perceptron built around a single neuron is limited to performing pattern classification with only two classes (hypotheses). By expanding the output (computation) layer of the perceptron to include more than one neuron, classification with more than two classes can be performed. Rosenblatt’s perceptron is built around a nonlinear neuron, namely, the McCulloch–Pitts model of a neuron.
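The perceptron learning procedure can be sketched on a toy linearly separable problem (the AND function). The learning rate, initial weights, and epoch limit below are illustrative choices; the update rule is the classical w ← w + η(d − y)x, with the bias folded in as a weight on a fixed +1 input. Because AND is linearly separable, the convergence theorem guarantees the loop terminates with a separating hyperplane.

```python
def predict(w, x):
    """Threshold decision on the induced local field."""
    v = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if v >= 0 else 0

def train_perceptron(samples, eta=0.1, epochs=100):
    w = [0.0, 0.0, 0.0]                      # [bias, w1, w2]
    for _ in range(epochs):
        errors = 0
        for x1, x2, d in samples:
            x = [1.0, x1, x2]                # fixed +1 input carries the bias
            y = predict(w, x)
            if y != d:
                errors += 1
                w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
        if errors == 0:                      # converged: every pattern is
            break                            # on the correct side
    return w

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(AND)
```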
Bayes’ Theorem
Naïve Bayes classifier
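A minimal sketch of how a naive Bayes classifier applies Bayes' theorem: P(C | x) ∝ P(C) · Π_i P(x_i | C), with the "naive" conditional-independence assumption across features. All probabilities below are made-up illustrative values, not data from the slides.

```python
priors = {"spam": 0.4, "ham": 0.6}           # P(C), illustrative
likelihoods = {                               # P(word | C), illustrative
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham":  {"offer": 0.2, "meeting": 0.6},
}

def posterior(words):
    """Bayes' theorem with the naive independence assumption."""
    scores = {}
    for c in priors:
        score = priors[c]
        for w in words:
            score *= likelihoods[c][w]        # multiply per-feature likelihoods
        scores[c] = score
    total = sum(scores.values())              # normalize over all classes
    return {c: s / total for c, s in scores.items()}

post = posterior(["offer"])                   # P(spam | "offer") = 0.28 / 0.40
```

In practice one works with log-probabilities to avoid underflow when many features are multiplied.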
Sample variance vs population variance
Biased and unbiased estimators
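The distinction named in these two headings can be sketched as follows (the data values are illustrative): dividing the sum of squared deviations by N gives the population variance, which is a biased estimator when applied to a sample, while dividing by N − 1 (Bessel's correction) gives the unbiased sample variance.

```python
def mean(xs):
    return sum(xs) / len(xs)

def population_variance(xs):
    """Divide by N: biased when estimating from a sample."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sample_variance(xs):
    """Divide by N - 1 (Bessel's correction): unbiased estimator."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
# mean = 5.0; sum of squared deviations = 32.0
# population variance = 32/8 = 4.0; sample variance = 32/7
```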
Covariance
Correlation coefficient
Correlation matrix
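The covariance and correlation-coefficient headings above can be illustrated with a short sketch (data values are again illustrative), using the sample (N − 1) convention; a correlation matrix simply collects the pairwise coefficients, with ones on its diagonal.

```python
def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    """Sample covariance (N - 1 in the denominator)."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def correlation(xs, ys):
    """Correlation coefficient: covariance normalized by both std deviations."""
    return covariance(xs, ys) / (covariance(xs, xs) * covariance(ys, ys)) ** 0.5

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]     # exactly linear in x, so the correlation is 1
```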
Gaussian (Normal) distribution
Multivariate Gaussian (Normal) distribution
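As a pointer to these headings, here is a sketch of the univariate Gaussian density N(x; μ, σ²); the multivariate form replaces σ² with a covariance matrix Σ and the squared distance with the Mahalanobis distance (x − μ)ᵀ Σ⁻¹ (x − μ).

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Univariate Gaussian density N(x; mu, sigma^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```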
Further Assignment-1 Questions
