
Unit IV Neural Pattern Recognition

• Introduction to Neural Networks: Neurons and Neural Nets, Neural Network Structures for PR Applications, Physical Neural Networks, The Artificial Neural Network Model
• Introduction to Neural Pattern Associators and Matrix Approaches: Neural Network Based Pattern Associators, Matrix Approaches (Linear Associative Mappings) and Examples
Introduction to Neural Networks

Neurons and Neural Nets

Biological Neuron: A basic unit of the nervous system, responsible for transmitting and processing information.
• Dendrites: Receive signals from other neurons.
• Soma: The cell body; processes incoming signals.
• Axon: Sends signals to other neurons.
• Synapses: Junctions where neurons connect.
Artificial Neural Networks (ANNs)

In artificial neural networks (ANNs), neurons are modeled as computational units. Each neuron receives inputs, combines them using a weighted sum, and produces an output after applying an activation function.

Artificial Neuron: A simplified mathematical model of a biological neuron.
• Inputs: Receive numerical values.
• Weights: Determine the importance of each input.
• Activation function: Converts the weighted sum of the inputs into the neuron's output.
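A minimal Python sketch of this computation — weighted sum, then activation — where the sigmoid activation and the sample input/weight values are illustrative choices, not prescribed by the slides:

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """One artificial neuron: weighted sum of inputs, then a sigmoid activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid squashes net into (0, 1)

# Example: two inputs whose importance is set by the weights
print(artificial_neuron(inputs=[0.5, 0.8], weights=[0.9, -0.2]))
```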
Neural Networks as a "Black Box" Approach

1. Neural Networks as a Black Box:
The term "black box" suggests that the inner workings of the neural network aren't explicitly designed or understood in a traditional programming sense. Instead, the network learns from data without explicit, algorithmic instructions.
This approach is "non-algorithmic": it doesn't rely on a predefined set of rules to generate outputs. Instead, the network learns by adjusting its internal structure through training with data.

2. Training the Neural Network:
The neural network is "trained" to produce correct outputs (such as classifications) for each input sample. Training adjusts the weights within the network to minimize error.
The goal of training is for the network to "self-organize," or configure itself, so that it generalizes effectively: it should handle new, unseen data (similar to its training data) accurately by learning from previous patterns.
Advantage of the Black Box Approach:
This strategy is advantageous for designers of pattern recognition (PR) systems because it
requires minimal prior knowledge about the specific details of how to perform the task. Instead,
the focus is on providing quality training data and allowing the network to learn the mapping
between inputs and outputs.
Neural Networks for Pattern Recognition (PR)

A neural network consists of multiple neurons interconnected in layers. Each layer processes inputs from the previous layer and passes outputs to the next layer.
• Input Layer: Receives the input features.
• Hidden Layer: Processes the inputs using weights and activation functions.
• Output Layer: Produces the final classification or result.
Neural Network Structures for PR Applications

• Layered Structure: Neural networks for PR typically consist of input layers, hidden layers, and output layers.
• The input layer receives raw data (such as image pixels or audio signals). Hidden layers perform transformations, and the output layer provides the final classification or recognition result.

Fully Connected Networks

In fully connected networks, each neuron in one layer is connected to every neuron in the next layer. This dense interconnection allows the network to capture complex relationships between input features.
Neural Network Structures for PR Applications

1. Perceptron: A single-layer neural network used for binary classification.
2. Multilayer Perceptron (MLP): A neural network with one or more hidden layers, capable of learning complex patterns.
3. Convolutional Neural Network (CNN): Designed for image and video processing, using convolutional layers to extract features.
4. Recurrent Neural Network (RNN): Used for sequential data, such as text and time series, with feedback loops.
5. Long Short-Term Memory (LSTM): A type of RNN that can learn long-term dependencies.
1. Perceptron

• Definition: A single-layer neural network that can learn a linear decision boundary.
• Structure:
  • Input layer: Receives input features.
  • Output layer: Produces a binary output (1 or 0).
• Activation function: Typically a step function.
• Limitations: Can only classify linearly separable data.
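A sketch of a perceptron with a step activation, trained with the classic perceptron learning rule; the logical-AND data set, learning rate, and epoch count are illustrative choices:

```python
def step(net):
    """Step activation: binary output, 1 or 0."""
    return 1 if net >= 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Perceptron rule: nudge weights toward each misclassified sample."""
    d = len(samples[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            o = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = t - o  # 0 if correct, +1 or -1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the perceptron can learn it
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
print(w, b)
```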
2. Multilayer Perceptron (MLP)

• Definition: A neural network with one or more hidden layers, capable of learning complex nonlinear patterns.
• Structure:
  • Input layer: Receives input features.
  • Hidden layers: Process and transform input features.
  • Output layer: Produces the final output.
• Activation function: Typically a sigmoid or ReLU function.
• Training: The backpropagation algorithm is used to adjust weights.
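A minimal sketch of the MLP forward pass only (backpropagation is omitted); the layer sizes, sigmoid activations, and random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass: the hidden layer transforms the input, the output layer classifies."""
    h = sigmoid(W1 @ x + b1)      # hidden-layer activations
    return sigmoid(W2 @ h + b2)   # final output

rng = np.random.default_rng(0)
x = rng.random(4)                          # 4 input features
W1, b1 = rng.random((3, 4)), np.zeros(3)   # 3 hidden units
W2, b2 = rng.random((1, 3)), np.zeros(1)   # 1 output unit
print(mlp_forward(x, W1, b1, W2, b2))
```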
3. Convolutional Neural Network (CNN)

• Definition: A neural network specifically designed for image and video processing.
• Structure:
  • Convolutional layers: Apply filters to extract local features.
  • Pooling layers: Downsize the feature maps.
  • Fully connected layers: Classify the extracted features.
• Key features:
  • Weight sharing: Reduces the number of parameters.
  • Local connectivity: Captures local spatial information.
• Applications: Image classification, object detection, image segmentation.
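A bare-bones sketch of the convolution operation that shows weight sharing (one kernel reused everywhere) and local connectivity (each output looks at a small patch); the edge-detecting kernel and the toy image are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared kernel over the image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # responds where pixel values jump left-to-right
print(conv2d(image, edge_kernel))      # nonzero only at the vertical edge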
4. Recurrent Neural Network (RNN)

• A Recurrent Neural Network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step. In traditional feedforward networks, all inputs and outputs are independent of each other.
• Definition: A neural network that can process sequential data.
• Structure:
  • Recurrent connections: Allow the network to maintain a memory of previous inputs.
• Types: Simple RNN, LSTM, GRU.
• Applications: Natural language processing, time series analysis, speech recognition.
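A sketch of a single simple-RNN step, where the new hidden state depends on both the current input and the previous state; the tanh activation, sizes, and random weights are illustrative assumptions:

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One recurrent step: the new state mixes the current input with the old state."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

rng = np.random.default_rng(1)
Wx, Wh, b = rng.random((3, 2)), rng.random((3, 3)), np.zeros(3)

h = np.zeros(3)  # initial hidden state: the network's "memory"
for x_t in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    h = rnn_step(x_t, h, Wx, Wh, b)  # feed the sequence one step at a time
print(h)
```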
Physical Neural Networks

• Physical Neural Networks (PNNs) are a class of neural networks designed with hardware that mimics the brain's physical and functional structures. Unlike traditional artificial neural networks, which are typically implemented in software, physical neural networks use hardware elements to represent neurons and synaptic connections directly.
• While most neural networks are implemented in software, physical neural networks involve hardware-based realizations. These can be implemented using electronic circuits and components designed to mimic the synaptic and neuronal connections in biological brains.

Biologically Inspired Design:
• Just as artificial neurons in software-based neural networks are inspired by biological neurons, physical neural networks are often designed to replicate the actual biological processes happening in the brain.
• These networks implement the neuron's activity through physical processes, such as electrical, optical, or even mechanical means.

Physical neural networks can be built using electronic circuits, optical devices, or quantum systems to mimic the connections and computations in a neural network. They use resistive elements to simulate synaptic weights and capacitive elements for storing values or performing integration.
Structure of Physical Neural Networks

1. Neurons (Nodes):
In PNNs, neurons are represented by hardware elements that emulate the basic functions of biological neurons, such as integrating inputs, generating an activation, and propagating the output to connected neurons. The neurons can be composed of materials like memristors or other nano-scale elements.
Neurons in PNNs can process signals with minimal power and high efficiency due to their direct, physical implementation.

2. Synapses (Connections):
Synaptic connections are often realized with elements that can change their conductive states, such as resistors, memristors, or other materials capable of holding variable resistances. These materials represent the "weight" of the connections between neurons.
Synaptic weights in PNNs can be physically adjusted to increase or decrease the strength of the connections, similar to how synaptic plasticity works in the human brain.
3. Learning Mechanism:
In PNNs, learning occurs as the hardware elements adapt to changes in electrical signals, modifying the synaptic weights.
Some PNNs utilize physical properties, such as electrical charge distribution, to adjust connections in real time, allowing the network to adapt through a process akin to training in traditional neural networks.

4. Processing Layers:
Similar to artificial neural networks, PNNs are organized in layers, with each layer containing a number of interconnected neurons.
The layers typically include an input layer, one or more hidden layers, and an output layer. Each layer processes data by passing signals through hardware-based neurons and synapses to reach the final output.
The Artificial Neural Network Model

Artificial Neuron Activation and Output Characteristics
• Individual Neuron Activation: The "firing" or activation state of a single neuron. It can be represented by various functions, from a simple threshold function to more complex nonlinear functions.
• Input Combination and Weighting: Neurons receive multiple inputs, each multiplied by a weight. These weighted inputs are then summed together.
• Activation Function: The summed input is passed through an activation function, which determines the final output of the neuron. This function introduces nonlinearity into the network, enabling it to learn complex patterns.
• Possible Dynamics: The neuron may incorporate additional dynamics, such as delays or temporal smoothing, to model more complex behaviors.
Abstract Neuron I/O Characteristic
Inputs: The neuron receives multiple inputs from other neurons or external sources.
Input Combination/Weighting: The inputs are multiplied by their respective weights and
summed together.
Activation/Output Conversion: The weighted sum is passed through an activation function
to produce the neuron's output.
Possible Dynamics: The neuron may include additional dynamics, such as delays or
temporal smoothing.
Specific Artificial Neuron Computational Structure (No Unit Dynamics)
Inputs (x1, x2, ..., xd): The neuron receives multiple input signals.

Weights (w1, w2, ..., wd): Each input is multiplied by a corresponding weight, which determines its
contribution to the neuron's output.

Weighted Sum (net): The weighted inputs are summed together to form the net input.

Activation Function (f): The net input is passed through an activation function, which introduces
nonlinearity and determines the final output of the neuron.

Output (o): The output of the neuron is the result of the activation function applied to the net input.
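In symbols, the structure just described — inputs x1, ..., xd, weights w1, ..., wd, net input, and activation f — is:

```latex
net = \sum_{j=1}^{d} w_j \, x_j, \qquad o = f(net)
```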
a) McCulloch-Pitts Model
•The neuron receives multiple inputs, each with an associated weight.
•The weighted inputs are summed together.
•If the sum exceeds a threshold value (T), the neuron fires and produces an
output of 1. Otherwise, the output is 0.

Characteristics:
•The neuron operates in a binary manner,
producing either a 0 or a 1.
•It is a simple model that can be used to
understand the basic principles of neural
networks.
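A minimal sketch of the McCulloch-Pitts unit described above; the unit weights and the threshold T = 1.5 (which make this two-input unit compute logical AND) are illustrative choices:

```python
def mcculloch_pitts(inputs, weights, T):
    """Fire (output 1) only if the weighted input sum exceeds the threshold T."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > T else 0

# With unit weights and T = 1.5, the unit fires only when both inputs are active
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mcculloch_pitts(x, weights=(1, 1), T=1.5))
```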
b) Linear Weighted Threshold Model
•Similar to the McCulloch-Pitts model, the neuron receives multiple inputs with
associated weights.
•The weighted inputs are summed together.
•If the sum exceeds a threshold value (T), the neuron produces an output of 1.
Otherwise, the output is 0.

Characteristics:
•The neuron operates in a binary manner,
producing either a 0 or a 1.
•It is a more general model than the
McCulloch-Pitts model, as it allows for
both positive and negative weights.
The key difference between the McCulloch-Pitts Model and the Linear Weighted Threshold Model: the linear weighted threshold model generalizes the McCulloch-Pitts model by allowing both positive and negative real-valued weights, rather than restricting the weights' form.
Output (Activation) Functions in Artificial Neurons

(a) Linear Output Function with Adjustable Gain
This represents a linear output function where the output o_i increases proportionally to the input net_i.
• The slope of each line corresponds to a different gain.
• When gain = 1, the neuron has a direct, one-to-one response to input.
• When gain > 1, the slope is steep, making the neuron more sensitive to changes in input.
• When gain < 1, the neuron responds less to input, resulting in a flatter slope.
(b) Threshold (Relay) Output Function
This is a threshold or relay function where the neuron only activates if the input exceeds a certain threshold T.
• If net_i < T, the output o_i is zero.
• If net_i >= T, the output o_i jumps to a constant value.
This function is binary, meaning it has only two output states (on or off). It is commonly used in binary classification tasks where the neuron either "fires" or "doesn't fire" based on a threshold.
(c) Linear Threshold Function
This function introduces two thresholds, a lower threshold t_l and an upper threshold t_u.
• If net_i < t_l, the output o_i is zero.
• If net_i is between t_l and t_u, the output increases linearly with the input.
• If net_i > t_u, the output reaches a maximum and stays constant.
This type of function is useful for neurons that need to respond proportionally to input within a certain range but should be capped at a maximum.
(d) Sigmoid Function (General)
This is a sigmoid function, a common type of nonlinear function used in neural networks.
• The sigmoid curve is S-shaped, with values ranging between 0 and 1.
• The function smoothly transitions between low and high output values as net_i increases, allowing neurons to "fire" gradually rather than suddenly.
Sigmoid functions are useful in neural networks because they introduce nonlinearity, allowing the network to approximate complex functions and make smooth predictions.
(e) Sigmoid Function with Adjustable Gain λ
This shows sigmoid functions with different values of the gain λ.
• As λ → ∞, the function approaches step-like behavior similar to a threshold function, where the output sharply transitions from 0 to 1.
• When λ = 1, it has the standard S-shape.
• For λ < 1, the curve becomes more gradual, and the neuron is less sensitive to changes in input.
• When λ = 0, the output remains at 0.5 regardless of input.
By adjusting λ, the steepness of the sigmoid curve can be controlled, allowing flexible tuning of how strongly the neuron responds to changes in input.
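Minimal sketches of panels (a)-(e) as Python functions; the maximum output of 1.0 for the relay and the capped linear function is an assumed normalization, not fixed by the figures:

```python
import numpy as np

def linear(net, gain=1.0):                   # (a) output proportional to input
    return gain * net

def threshold(net, T=0.0):                   # (b) relay: off below T, on at/above T
    return np.where(net < T, 0.0, 1.0)

def linear_threshold(net, t_l=0.0, t_u=1.0): # (c) zero below t_l, linear in between,
    return np.clip((net - t_l) / (t_u - t_l), 0.0, 1.0)  # capped above t_u

def sigmoid(net, lam=1.0):                   # (d)/(e) S-shaped curve with gain lambda
    return 1.0 / (1.0 + np.exp(-lam * net))

net = np.linspace(-2, 2, 5)
print(sigmoid(net, lam=10))   # large lambda: approaches the threshold function
print(sigmoid(net, lam=0))    # lambda = 0: stuck at 0.5 regardless of input
```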
Neural Network Based Pattern Associators

• Neural pattern associators are neural networks that specialize in associating patterns, i.e., mappings between input patterns and output patterns.
• They play an important role in pattern recognition and memory-based retrieval systems, where the network is trained to recognize and recall specific patterns or associations.
• Pattern Association: The task where an input pattern (like an image or signal) is mapped to a corresponding output pattern (like a label or transformed image).
• Associative Memory: Neural networks can act as associative memories, storing a set of patterns so that when an incomplete or noisy version of a pattern is presented, the network can retrieve the original or closest stored pattern.
• Learning and Generalization: Pattern associators learn by adjusting weights to create a specific mapping from input to output patterns. Once trained, they can generalize to respond to similar input patterns.
Types of Neural Pattern Associators

1. Auto-associative Networks:
• These networks map an input pattern to itself or a close approximation, helping in tasks like noise reduction.
• Autocorrelator (auto-associative) structures store patterns so that, when given an input similar to a stored pattern, they output the stored pattern itself (used for pattern recollection and completion).
• Example: Autoencoders in deep learning are a type of auto-associative network.

2. Hetero-associative Networks:
• These networks map an input pattern to a different output pattern.
• They are often used in tasks where an input needs to be recognized and transformed into a distinct output format.
• Heterocorrelator (hetero-associative) structures store pairs of patterns, so that for a given input pattern x_k they produce an associated output y_k. This approach is useful when a pattern class or category needs to be recalled.
CAM and Other Neural Memory Structures

Content Addressable Memory (CAM), also known as associative memory, is a type of memory structure in neural networks that enables direct retrieval of stored information based on its content rather than a specific address.

Unlike traditional memory systems that retrieve data by referring to a location or address, CAM retrieves data by matching input patterns. This characteristic makes CAM particularly valuable for pattern associators (PAs) in neural networks, where the goal is to associate input patterns with specific responses or outputs.
The PA structures shown in the figure demonstrate different output configurations:
• Single output (spanning a simple output range)
• Coded output (indicating a classification or description)
• CAM associator (where the output pattern is "close" to the input in similarity)
This figure illustrates different ways to represent the output of a neural network:
1. Single output, spanning a range of values: The network produces a single output value that can represent a continuous range, such as a probability or a numerical quantity.
2. 1-of-C selector: The network produces a binary output vector with only one active element. The position of the active element indicates the selected class or category.
3. Coded: The network produces a coded output, where the code represents a class or description.
4. CAM associator: The network produces an output pattern similar to the input pattern. This is useful for pattern completion and association tasks.
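A tiny illustration of the 1-of-C selector only (item 2 above): the winning output is set active and all others are zero. The score values are made up:

```python
import numpy as np

scores = np.array([0.1, 0.7, 0.2])  # raw network outputs for C = 3 classes
one_hot = np.zeros_like(scores)
one_hot[np.argmax(scores)] = 1      # 1-of-C: only the winning class is active
print(one_hot)                      # [0. 1. 0.] -> class 2 is selected
```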
Desirable Pattern Associator (PA) Properties

The following properties characterize a good pattern associator:
1. Association Capability: It should associate several stimulus-response pairs.
2. Correct Response: It should respond correctly by discriminating between stimuli.
3. Trainability and Self-Organization: It should allow for supervised learning and be able to naturally cluster data.
4. Handling Incomplete Patterns: It should generate correct outputs even when input patterns are partially missing or distorted.
Matrix Approaches (Linear Associative Mappings) and Examples

Matrix approaches in neural networks use linear algebra to perform pattern association. Linear associative mappings use mathematical techniques to design Content Addressable Memory (CAM) structures that can be auto- or hetero-associative.

1. An Elementary Linear Network Structure:
• Single-Layer Network: A simple neural network with a single layer of neurons.
• Linear Activation Function: The activation function of each neuron is linear, meaning the output is directly proportional to the weighted sum of inputs.
• Mathematical Representation: The output of the i-th neuron, o_i, is the dot product of that neuron's weight vector w_i and the input vector x; stacking the weight vectors into a matrix W gives the full output vector o = Wx.
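A sketch of a hetero-associative linear mapping. Building W as a sum of outer products (the Hebbian rule) is a standard construction; perfect recall here relies on the assumption that the stored input patterns are orthonormal:

```python
import numpy as np

# Store two pattern pairs (x_k -> y_k) in a single weight matrix.
x1, y1 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0])
x2, y2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0])

# Outer-product (Hebbian) rule: W = sum_k y_k x_k^T
W = np.outer(y1, x1) + np.outer(y2, x2)

# Recall is one matrix-vector product: o = W x
print(W @ x1)  # recalls y1 -> [1. 1.]
print(W @ x2)  # recalls y2 -> [0. 1.]
```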
2. Matrix Formulation for Multiple Layers:
• Two-Layer Network: The output of the first layer becomes the input to the second layer.
• Overall Mapping: The overall mapping from the input to the final output can be expressed as the product of the weight matrices of the two layers.
• Matrix Dimensions: The matrices W1 and W2 do not necessarily have the same dimensions.

3. Recurrent Neural Networks:
• Interconnected Units: Each unit is connected to every other unit, including itself, forming a recurrent structure.
• Time-Dependent Behavior: The output of a unit at time step k+1 depends on its own output and the outputs of other units at time step k.
• Matrix Representation: The dynamics of the network can be represented as a discrete-time difference equation, where the weight matrix W captures the connections between units.
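As a compact summary of the three cases above (single layer, two layers, recurrent), where f is the unit activation function and the recurrent form is one common way to write the difference equation:

```latex
o = W x, \qquad o = W_2 (W_1 x), \qquad o(k+1) = f\big(W \, o(k)\big)
```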
(a) Single Linear Unit Characteristic
• Inputs (i1, i2, ..., id): The input signals to the neuron.
• Weights (w1, w2, ..., wd): Each input is multiplied by a corresponding weight.
• Weighted Sum: The weighted inputs are summed together.
• Activation Function: The weighted sum is passed through an activation function (here, a linear function) to produce the output.
• Output (o1): The output of the neuron.
(b) c-Unit Network
• Multiple Inputs and Outputs: This network has multiple input units and multiple output units.
• Weight Matrix (W): The connections between the input and output units are represented by the weight matrix W.
• Matrix Multiplication: The output vector o is obtained by multiplying the input vector i by the weight matrix W, i.e., o = Wi.
