UNIT I & II NOTES: Soft Computing
Soft computing:
* Soft computing is the opposite of hard (conventional/traditional) computing.
* It deals with approximate models and gives solutions to complex real-life problems.
* It provides cost-effective solutions to complex real-life problems for which no hard computing
solution exists,
such as:
Robotics
Handwriting Recognition
Image Processing and Data Compression
Automotive Systems and Manufacturing
Decision-support Systems
Power Systems
Fuzzy Logic Control
Machine Learning Applications
Speech and Vision Recognition Systems
Process Control
In effect, the role model for soft computing is the human mind. Soft computing is based on techniques
such as fuzzy logic, genetic algorithms, artificial neural networks, machine learning, and expert systems.
Unlike hard computing, which can deal only with exact data, soft computing can deal with noisy data.
Such problems arise from the fact that our world is imprecise,
uncertain, and difficult to categorize.
Neural networks:
* These have the capability of recognizing patterns and adapting themselves to
changing environments.
**********************
Machine Learning :
Machine learning is a branch of artificial intelligence (AI) and computer science
which focuses on the use of data and algorithms to imitate the way that humans learn, gradually
improving its accuracy.
******************
Probabilistic Reasoning:
Probabilistic reasoning is a way of knowledge representation where we apply the concept of probability
to indicate the uncertainty in knowledge.
Hybrid:
When we mix hard computing with soft computing, it is known as hybrid computing.
BNN (Biological Neural Network) vs ANN (Artificial Neural Network)
Speed:
BNN: Processes information at a slower rate; response time is measured in milliseconds.
ANN: Processes information at a faster rate; response time is measured in nanoseconds.
Processing:
BNN: Massively parallel processing
ANN: Serial processing
Size & Complexity:
BNN : An Extremely intricate and dense network of linked neurons .
ANN: Size and complexity are reduced
Fault Tolerance:
BNN: Fault tolerant; because information storage is flexible, new information may be added by altering the connectivity strengths without deleting existing information.
ANN: Intolerant of faults; in the event of a system failure, corrupt data cannot be recovered.
Control Mechanism:
BNN: There is no central control mechanism external to the computing task.
ANN: Computer activity is controlled by a control unit.
Artificial Neural Network (ANN):
An Artificial Neural Network (ANN) is an information processing paradigm
that is inspired by the brain. ANNs, like people, learn by examples. An ANN
is configured for a specific application, such as pattern recognition or data
classification, through a learning process. Learning largely involves
adjustments to the synaptic connections that exist between the neurons.
There are several different architectures for ANNs, each with its own
strengths and weaknesses. Some of the most common architectures
include:
Feedforward Neural Networks:
This is the simplest type of ANN architecture, where the information
flows in one direction from input to output. The layers are fully connected,
meaning each neuron in a layer is connected to all the neurons in the next
layer.
Recurrent Neural Networks (RNNs):
These networks have a “memory” component, where information can
flow in cycles through the network. This allows the network to process
sequences of data, such as time series or speech.
Convolutional Neural Networks (CNNs):
These networks are designed to process data with a grid-like topology,
such as images. The layers consist of convolutional layers, which learn to
detect specific features in the data, and pooling layers, which reduce the
spatial dimensions of the data.
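A minimal sketch of the convolution step such layers perform, in plain Python; the 3x3 vertical-edge kernel and the image values are illustrative assumptions, not from any library.

def convolve2d(image, kernel):
    # Slide the kernel over the image; each output cell is the
    # weighted sum of the patch of pixels under the kernel.
    kh, kw = len(kernel), len(kernel[0])
    rows = len(image) - kh + 1
    cols = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(cols)]
            for i in range(rows)]

# A vertical-edge kernel applied to a 4x4 image with an edge in it:
image = [[0, 0, 0, 1]] * 4
kernel = [[-1, 0, 1]] * 3
print(convolve2d(image, kernel))  # [[0, 3], [0, 3]]: the response peaks at the edge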
Autoencoders:
These are neural networks that are used for unsupervised learning.
They consist of an encoder that maps the input data to a lower-dimensional
representation and a decoder that maps the representation back to the
original data.
Generative Adversarial Networks (GANs): These are neural networks that
are used for generative modeling. They consist of two parts: a generator
that learns to generate new data samples, and a discriminator that learns to
distinguish between real and generated data.
The model of an artificial neural network can be specified by three entities:
Interconnections
Activation functions
Learning rules
Interconnections:
1. Single-layer feed-forward network
In this type of network, there are only two layers, the input layer and the
output layer, but the input layer does not count because no computation is
performed in it. The output layer is formed by applying different weights to
the input nodes and taking the cumulative effect per node; the neurons then
collectively compute the output signals.
2. Multilayer feed-forward network
This network also has a hidden layer that is internal to the network and
has no direct contact with the external layer. The existence of one or more
hidden layers makes the network computationally stronger. It is called a
feed-forward network because information flows from the input, through the
intermediate computations, to the output Z; there are no feedback
connections in which outputs of the model are fed back into itself.
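A minimal forward pass for such a network, sketched in Python; the sigmoid activation, the weight values, and the layer sizes are illustrative assumptions.

import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, V, b_h, W, b_o):
    # Hidden layer: each hidden unit takes a weighted sum of the inputs.
    h = [sigmoid(sum(v * xi for v, xi in zip(row, x)) + b)
         for row, b in zip(V, b_h)]
    # Output layer: weighted sums of the hidden activations give Z.
    return [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
            for row, b in zip(W, b_o)]

# 2 inputs -> 2 hidden units -> 1 output; information flows forward only.
z = forward([1.0, 0.0], V=[[0.5, -0.4], [0.3, 0.8]], b_h=[0.0, 0.1],
            W=[[1.0, -1.0]], b_o=[0.2])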
McCulloch-Pitts Neuron
It may be divided into two parts. The first part, g, takes an input
(playing the role of the dendrites) and performs an aggregation; based on
the aggregated value, the second part, f, makes a decision.
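A minimal sketch of this two-part structure in Python; the threshold value theta is an illustrative assumption.

def g(inputs):
    # First part: aggregate the incoming binary signals (the "dendrites").
    return sum(inputs)

def f(aggregate, theta):
    # Second part: decide; fire (1) only if the aggregate reaches theta.
    return 1 if aggregate >= theta else 0

def mcculloch_pitts(inputs, theta):
    return f(g(inputs), theta)

# With theta equal to the number of inputs, the neuron behaves like AND:
print(mcculloch_pitts([1, 1], theta=2))  # 1
print(mcculloch_pitts([1, 0], theta=2))  # 0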
Perceptron is a type of artificial neural network, which is a fundamental concept in machine learning.
The basic components of a perceptron are:
Input Layer: The input layer consists of one or more input neurons, which receive input signals
from the external world or from other layers of the neural network.
Weights: Each input neuron is associated with a weight, which represents the strength of the
connection between the input neuron and the output neuron.
Bias: A bias term is added to the input layer to provide the perceptron with additional flexibility
in modeling complex patterns in the input data.
Activation Function: The activation function determines the output of the perceptron based on
the weighted sum of the inputs and the bias term. Common activation functions used in perceptrons
include the step function, sigmoid function, and ReLU function.
Output: The output of the perceptron is a single binary value, either 0 or 1, which indicates the
class or category to which the input data belongs.
Training Algorithm: The perceptron is typically trained using a supervised learning algorithm
such as the perceptron learning algorithm or backpropagation. During training, the weights and
biases of the perceptron are adjusted to minimize the error between the predicted output and the true
output for a given set of training examples.
Overall, the perceptron is a simple yet powerful algorithm that can be used to perform binary
classification tasks and has paved the way for more complex neural networks used in deep learning
today.
Types of Perceptron:
1. Single layer: A single-layer perceptron can learn only linearly separable patterns.
2. Multilayer: Multilayer perceptrons have two or more layers and correspondingly greater
processing power.
The Perceptron algorithm learns the weights for the input signals in order to draw a linear decision
boundary.
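A minimal sketch of that learning process in Python; the learning rate, epoch count, and training data are illustrative assumptions.

def train_perceptron(samples, labels, lr=0.1, epochs=10):
    # Start from zero weights and nudge them toward misclassified examples.
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y  # 0 when correct, +1 or -1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the learned boundary separates its classes:
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])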
Adaline (Adaptive Linear Neuron):
A network with a single linear unit is called Adaline (Adaptive Linear Neuron). A
unit with a linear activation function is called a linear unit. In Adaline, there is only
one output unit and output values are bipolar (+1,-1). Weights between the input
unit and output unit are adjustable. It uses the delta rule, i.e. Δwi = α(t − y)xi, where wi, y, and t are the
weight, predicted output, and true value respectively, and α is the learning rate.
The learning rule minimizes the mean square error between the
activation and the target value. Adaline consists of trainable weights; it compares
the actual output with the calculated output and, based on the error, the training
algorithm is applied.
First, calculate the net input to the Adaline network, then apply the activation
function to obtain its output, and compare this with the target output. If the two are
equal, give the output; otherwise send the error back through the network and update
the weights according to the error, which is calculated by the delta learning rule,
i.e. Δwi = α(t − y)xi, where wi, y, and t are the weight, predicted output, and true value respectively.
Architecture:
In Adaline, all the input neurons are directly connected to the output neuron through
weighted paths. A bias b, whose activation is always 1, is also present.
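A minimal Adaline training sketch in Python following that description; the learning rate, epoch count, and the bipolar AND data are illustrative assumptions.

def train_adaline(samples, targets, lr=0.05, epochs=20):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            # Net input with the linear activation (no thresholding here).
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = t - y
            # Delta rule: dwi = lr * (t - y) * xi, minimizing squared error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Bipolar AND, with targets in {+1, -1} as in Adaline:
w, b = train_adaline([(-1, -1), (-1, 1), (1, -1), (1, 1)], [-1, -1, -1, 1])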
Madaline (Multiple Adaptive Linear Neuron):
There are three layers in a Madaline. The first, the input layer, contains all the
input neurons; the second, hidden, layer consists of Adaline units, and the weights
between the input and hidden layers are adjustable; the third is the output layer,
and the weights between the hidden and output layers are fixed, not adjustable.
UNIT II
How the Brain Works; Neurons as Simple Computing Elements
The perceptron model (its components were introduced in Unit I) computes its output in two steps.
Step-1
First, multiply all input values by their corresponding weight values and add them
together to determine the weighted sum. A special term called the bias 'b' is added
to this weighted sum to improve the model's performance. Mathematically, the
weighted sum is:
∑wi*xi + b
Step-2
The weighted sum is then passed through an activation function, which determines
whether the neuron fires and produces the final output.
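These two steps can be written out directly; a minimal Python sketch, where the example weights, bias, and inputs are illustrative assumptions.

def perceptron_output(x, w, b):
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x)) + b  # Step-1: sum(wi*xi) + b
    return 1 if weighted_sum > 0 else 0                      # Step-2: step activation

print(perceptron_output([1, 0], w=[0.6, 0.4], b=-0.5))  # 1, since 0.6 - 0.5 > 0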
Types of Perceptron:
1. Single layer: A single-layer perceptron can learn only linearly separable patterns.
2. Multilayer: Multilayer perceptrons have two or more layers and correspondingly greater
processing power.
In a single-layer perceptron model, the algorithm starts with no recorded data, so
it begins with randomly allocated weight parameters. It then sums up all the
weighted inputs; if the total is more than a pre-determined threshold value, the
model is activated and shows the output value as +1.
Like a single-layer perceptron model, a multi-layer perceptron model has the
same basic structure but a greater number of hidden layers. It is trained in two stages:
o Forward Stage: Activation functions start from the input layer in the forward
stage and terminate on the output layer.
o Backward Stage: In the backward stage, weight and bias values are
modified as per the model's requirement; the error between the actual and
the desired output originates at the output layer and is propagated backward,
ending at the input layer.
A multi-layer perceptron model has greater processing power and can process
both linear and non-linear patterns. It can also implement logic gates such as
AND, OR, XOR, NAND, NOT, XNOR, and NOR (see the XOR sketch after this list).
o It helps to obtain the same accuracy ratio with large as well as small data sets.
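As an illustration of that extra power, the sketch below wires an XOR gate by hand with one hidden layer; the weights and thresholds are hand-picked assumptions, not learned values. A single-layer perceptron cannot represent XOR, since it is not linearly separable.

def step(s):
    return 1 if s > 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1 acts as OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2 acts as AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # 0 0 0, 0 1 1, 1 0 1, 1 1 0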
Perceptron Function
The perceptron function f(x) is obtained by multiplying the input x with the
learned weight coefficient w and adding the bias b:
f(x) = 1 if w.x + b > 0
otherwise, f(x) = 0
Characteristics of Perceptron
The perceptron model has the following characteristics.
1. Initially, weights are multiplied with the input features, and a decision is made
whether the neuron fires or not.
2. The activation function applies a step rule to check whether the weighted sum
is greater than zero.
3. A linear decision boundary is drawn, enabling the distinction between the
two linearly separable classes +1 and -1.
4. If the added sum of all input values is more than the threshold value, there is
an output signal; otherwise, no output is shown.
Back Propagation Neural Networks
A Back Propagation Network (BPN) is a multilayer neural network consisting of an input
layer, at least one hidden layer, and an output layer. As its name suggests, back-propagation
takes place in this network: the error calculated at the output layer, by comparing the target
output with the actual output, is propagated back towards the input layer.
Architecture
The architecture of a BPN has three interconnected layers with weights on them.
The hidden layer and the output layer also carry a bias, whose activation is always 1.
The working of a BPN has two phases: one phase sends the signal from the input
layer to the output layer, and the other phase back-propagates the error from the
output layer to the input layer.
Training Algorithm
For training, a BPN uses the binary sigmoid activation function. The
training of a BPN has the following three phases.
Phase 1 − Feed Forward Phase
Phase 2 − Back Propagation of error
Phase 3 − Updating of weights
These phases proceed in the algorithm as follows.
Phase 1
Step 5 − Calculate the net input at hidden unit j using the following relation −
Qinj = b0j + Σi xi vij, and apply the activation function: Qj = f(Qinj)
Here b0j is the bias on hidden unit j, and vij is the weight on unit j of the
hidden layer coming from unit i of the input layer.
Step 6 − Calculate the net input at output unit k using the following relation −
yink = b0k + Σj Qj wjk, and apply the activation function: yk = f(yink)
Here b0k is the bias on output unit k, and wjk is the weight on unit k of
the output layer coming from unit j of the hidden layer.
Phase 2
Step 7 − Compute the error-correcting term, in correspondence with the target
pattern tk received at each output unit, as follows −
δk = (tk − yk) f′(yink)
Phase 3
Step 9 − Each output unit (yk, k = 1 to m) updates its weight and bias as follows −
wjk(new) = wjk(old) + α δk Qj
b0k(new) = b0k(old) + α δk
Here α is the learning rate.
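A minimal single-output sketch of these three phases in Python. The variable names mirror the notation above; the sigmoid derivative y(1 − y) follows from the binary sigmoid, while the network size, learning rate alpha, and data layout are illustrative assumptions.

import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_step(x, t, v, b0, w, b1, alpha=0.5):
    # Phase 1 - feed forward: hidden activations Qj, then output y.
    q = [sigmoid(bj + sum(vi * xi for vi, xi in zip(row, x)))
         for row, bj in zip(v, b0)]
    y = sigmoid(b1 + sum(wj * qj for wj, qj in zip(w, q)))
    # Phase 2 - back propagation of error: delta_k = (t - y) f'(y_in).
    delta_k = (t - y) * y * (1 - y)
    delta_j = [delta_k * wj * qj * (1 - qj) for wj, qj in zip(w, q)]
    # Phase 3 - updating of weights: w(new) = w(old) + alpha * delta * signal.
    w = [wj + alpha * delta_k * qj for wj, qj in zip(w, q)]
    b1 += alpha * delta_k
    v = [[vi + alpha * dj * xi for vi, xi in zip(row, x)]
         for row, dj in zip(v, delta_j)]
    b0 = [bj + alpha * dj for bj, dj in zip(b0, delta_j)]
    return v, b0, w, b1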
Hopfield Networks: Introduction
During your daily schedule, various waves of activity propagate through the
networks of your brain; the entire day is shaped by the dynamic behaviour of
these networks. Neural networks are artificial intelligence methods that teach
computers to process data the way our brains do. These networks use
interconnected nodes that resemble the neurons of the human brain, and they
enable computers to make intelligent decisions by learning from their mistakes.
The Hopfield network is one such type of recurrent neural network.