AI Unit 3

Unit 3

3.1

What is a neural network?


Neural networks are computational models inspired by the structure and
functioning of the human brain. They are used in machine learning and artificial
intelligence for various tasks, including classification, regression, and pattern
recognition. Here are some basic concepts related to neural networks:
1. Neuron (or Node): Neurons are the fundamental building blocks of a
neural network. They receive input, perform a computation on it, and
produce an output. In artificial neural networks, neurons are often
represented as mathematical functions.

2. Weights and Biases: Each connection between neurons has an associated
   weight, and each neuron has a bias. The weights determine the strength
   of the connections, and the biases shift the neurons' outputs. Learning
   involves adjusting these weights and biases to minimize error.

3. Activation Function: Activation functions introduce non-linearity into
   the neural network. They decide whether a neuron should be activated or
   not based on the weighted sum of its inputs. Common activation functions
   include sigmoid, ReLU (Rectified Linear Unit), and tanh (hyperbolic
   tangent).

4. Layers: Neurons are organized into layers in a neural network. There are
typically three types of layers:
i. Input Layer: This layer receives the initial data or features.
ii. Hidden Layer(s): These layers process the data and extract
relevant features through a series of computations.
iii. Output Layer: This layer produces the final result or prediction.

5. Feedforward: In a feedforward neural network, data flows in one
   direction, from the input layer through the hidden layers to the output
   layer. This process is called feedforward propagation.
6. Backpropagation: Backpropagation is the learning algorithm used to train
neural networks. It involves computing the error between the predicted
output and the actual target, then propagating this error backward through
the network to adjust the weights and biases.
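The feedforward and backpropagation steps above can be sketched for a single sigmoid neuron. This is a minimal illustration, not a full implementation; the learning rate and iteration count are arbitrary illustrative choices.

```python
import math

def sigmoid(x):
    # activation: squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w, b):
    # feedforward: weighted sum of the input plus bias, then activation
    return sigmoid(w * x + b)

def train_step(x, target, w, b, lr=0.5):
    # one backpropagation step: compute the error, then adjust the
    # weight and bias in the direction that reduces it
    y = forward(x, w, b)
    error = y - target              # difference from the actual target
    grad = error * y * (1.0 - y)    # chain rule through the sigmoid
    return w - lr * grad * x, b - lr * grad

w, b = 0.0, 0.0
for _ in range(1000):
    w, b = train_step(x=1.0, target=1.0, w=w, b=b)
print(forward(1.0, w, b))  # approaches the target of 1.0
```

Each iteration nudges the weight and bias so the prediction moves toward the target, which is the essence of gradient-based learning.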



Simple neural network architecture
A basic neural network has interconnected artificial neurons in three layers:
1. Input Layer
Information from the outside world enters the artificial neural network
from the input layer. Input nodes process the data, analyze or categorize
it, and pass it on to the next layer.

2. Hidden Layer
Hidden layers take their input from the input layer or other hidden layers.
Artificial neural networks can have a large number of hidden layers. Each
hidden layer analyzes the output from the previous layer, processes it
further, and passes it on to the next layer.

3. Output Layer
The output layer gives the final result of all the data processing by the
artificial neural network. It can have single or multiple nodes. For
instance, if we have a binary (yes/no) classification problem, the output
layer will have one output node, which will give the result as 1 or 0.
However, if we have a multi-class classification problem, the output layer
might consist of more than one output node.
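As a sketch, a binary problem can use one sigmoid output node, while a multi-class problem can use one node per class with a softmax over their scores. The scores below are made-up numbers purely for illustration.

```python
import math

def softmax(scores):
    # multi-class output layer: one score per class, converted to
    # probabilities that sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# binary (yes/no): a single output node squashed to a probability
p_yes = 1.0 / (1.0 + math.exp(-0.8))   # 0.8 is an arbitrary score
print(p_yes > 0.5)  # True -> predict "yes" (1)

# multi-class: three output nodes, one per class
probs = softmax([2.0, 1.0, 0.1])
print(probs.index(max(probs)))  # 0 -> the first class wins
```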
Single-layer Perceptron vs. Multilayer Perceptron

3.2
What is a Genetic Algorithm?
 Genetic Algorithms are a class of search and optimization algorithms that
mimic the process of natural selection to find approximate solutions to
complex problems.
 They are particularly useful for solving optimization and search problems
where traditional algorithms may struggle due to the vast search space or
non-linearity.

Difference between Traditional Algorithms and Genetic Algorithms:


 Traditional algorithms follow deterministic rules to find a solution, often
relying on mathematical or heuristic approaches.
 Genetic Algorithms are stochastic and population-based, relying on
principles of natural selection and evolution. They explore multiple
candidate solutions simultaneously and use probabilistic methods for
selection, crossover, and mutation.

Operators of Genetic Algorithms


1) Selection Operator: The idea is to give preference to the individuals with
good fitness scores and allow them to pass their genes to successive
generations.

2) Crossover Operator: This represents mating between individuals. Two
individuals are selected using the selection operator, and crossover sites
are chosen randomly. The genes at these crossover sites are then exchanged,
creating a completely new individual (offspring).

3) Mutation Operator: The key idea is to insert random genes into offspring
to maintain diversity in the population and avoid premature convergence.
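A minimal sketch of one-point crossover and bit-flip mutation on binary-encoded individuals; the binary encoding and the mutation rate are illustrative assumptions.

```python
import random

def crossover(parent1, parent2):
    # one-point crossover: choose a random site, swap the gene tails
    site = random.randint(1, len(parent1) - 1)
    child1 = parent1[:site] + parent2[site:]
    child2 = parent2[:site] + parent1[site:]
    return child1, child2

def mutate(individual, rate=0.1):
    # flip each bit with small probability to keep the population diverse
    return [1 - gene if random.random() < rate else gene
            for gene in individual]

child1, child2 = crossover([0, 0, 0, 0], [1, 1, 1, 1])
print(child1, child2)  # e.g. [0, 0, 1, 1] [1, 1, 0, 0]
```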

Convergence of Genetic Algorithms:


1. Convergence in Genetic Algorithms refers to the process where the
population increasingly consists of solutions that are close to the optimal
or near-optimal solution.
2. Factors affecting convergence include the choice of genetic operators,
population size, mutation rate, and the nature of the problem being
solved.
3. GAs aim to strike a balance between exploration (diversity) and
exploitation (focusing on the best solutions) to converge efficiently.
The whole algorithm can be summarized as –
1) Randomly initialize a population p
2) Determine the fitness of the population
3) Until convergence, repeat:
a) Select parents from the population
b) Perform crossover to generate a new population
c) Perform mutation on the new population
d) Calculate the fitness of the new population
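A minimal sketch of this loop in Python, assuming a toy fitness function (the number of 1-bits, known as the "OneMax" problem) chosen purely for illustration; the population size, string length, and generation budget are arbitrary.

```python
import random

def fitness(individual):
    # toy fitness for illustration: count of 1-bits ("OneMax")
    return sum(individual)

def select(population):
    # tournament selection: the fitter of two random individuals wins
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # one-point crossover producing a single offspring
    site = random.randint(1, len(p1) - 1)
    return p1[:site] + p2[site:]

def mutate(individual, rate=0.05):
    # occasional bit flips keep the population diverse
    return [1 - g if random.random() < rate else g for g in individual]

# 1) randomly initialize the population
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

# 3) repeat selection, crossover, and mutation for a fixed budget
for _ in range(100):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in population]

best = max(population, key=fitness)  # 2)/d) fitness evaluation
print(fitness(best))
```

Replacing `fitness` with a problem-specific function is all that is needed to aim this loop at a different optimization problem.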

Why use Genetic Algorithms


 They are robust.
 They provide optimization over a large state space.
 Unlike traditional AI systems, they do not break down with slight
changes in the input or in the presence of noise.

Applications of Genetic Algorithms

Genetic algorithms have many applications, some of which are –

 Recurrent neural networks
 Mutation testing
 Code breaking
 Filtering and signal processing
 Learning fuzzy rule bases, etc.
3.3

Artificial Neural Network:


An artificial neural network is a computational framework within the field of
artificial intelligence that draws inspiration from the biological neural networks
found in the human brain.
Just as the human brain consists of interconnected neurons, artificial neural
networks are composed of nodes organized into layers.
These networks are designed to simulate the structure and functioning of the
brain, allowing them to process information, learn from data, and perform tasks
in a manner reminiscent of human cognition.

BNN vs. ANN

Working of an Artificial Neural Network


Basic Models of Artificial Neural Network
There exist five basic types of neuron connection architectures:
1. Single-layer feed-forward network
2. Multilayer feed-forward network
3. Single node with its own feedback
4. Single-layer recurrent network
5. Multilayer recurrent network
1. Single-layer feed-forward network

In this type of network, we have only two layers, the input layer and the
output layer, but the input layer does not count because no computation is
performed in it. The output layer is formed by applying different weights
to the input nodes and taking the cumulative effect per node. The neurons
of this layer then collectively compute the output signals.

2. Multilayer feed-forward network


This network also has one or more hidden layers, which are internal to the
network and have no direct contact with the external environment. The
existence of one or more hidden layers makes the network computationally
stronger. It is a feed-forward network because information flows from the
input, through the intermediate computations, to determine the output Z.
There are no feedback connections in which outputs of the model are fed
back into it.
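A sketch of one forward pass through such a network; the 3-4-2 layer sizes and the random weights are illustrative assumptions, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# weights for a 3-input, 4-hidden, 2-output network
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden -> output

def forward(x):
    # information flows strictly forward; there are no feedback loops
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer with ReLU
    return W2 @ h + b2                 # output Z

z = forward(np.array([1.0, 0.5, -0.2]))
print(z.shape)  # (2,)
```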

3. Single node with its own feedback

When outputs can be directed back as inputs to the same layer or to
preceding-layer nodes, the result is a feedback network. Recurrent networks
are feedback networks with closed loops. The simplest case is a single
recurrent network having one neuron with feedback to itself.
4. Single-layer recurrent network
This is a single-layer network with a feedback connection, in which a
processing element's output can be directed back to itself, to another
processing element, or to both. A recurrent neural network is a class of
artificial neural networks where connections between nodes form a directed
graph along a sequence. This allows it to exhibit dynamic temporal behavior
for a time sequence. Unlike feedforward neural networks, RNNs can use their
internal state (memory) to process sequences of inputs.

5. Multilayer recurrent network


In this type of network, a processing element's output can be directed to
processing elements in the same layer and in the preceding layer, forming a
multilayer recurrent network. Such networks perform the same task for every
element of a sequence, with the output dependent on the previous
computations. Inputs are not needed at every time step. The main feature of
a recurrent neural network is its hidden state, which captures information
about the sequence.
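The recurrent behavior above can be sketched as a single RNN cell whose hidden state is carried across time steps; the layer sizes, random weights, and toy input sequence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

Wx = rng.normal(size=(4, 3))   # input -> hidden weights
Wh = rng.normal(size=(4, 4))   # hidden -> hidden feedback weights
b = np.zeros(4)

def step(x, h):
    # the new hidden state depends on the current input AND the
    # previous state -- this is the network's memory
    return np.tanh(Wx @ x + Wh @ h + b)

h = np.zeros(4)                               # initial hidden state
for x in [np.ones(3), np.zeros(3), np.ones(3)]:
    h = step(x, h)                            # same weights at every step
print(h.shape)  # (4,)
```

Note that the same weight matrices are reused at every time step, which is what "performing the same task for every element of a sequence" means in practice.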
