DSA THEORY DA
DATA STRUCTURES AND ALGORITHMS
SWASTIK RAJ
22BCE0411
SLOT: C2+TC2
FACULTY: DR. PARVEEN SULTANA H
UNVEILING THE POWER OF NEURAL NETWORKS IN THE REAL WORLD
OBJECTIVE
INTRODUCTION TO NEURAL NETWORKS
DEFINITION AND OVERVIEW
• A neural network is a series of algorithms that tries to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
• In this sense, neural networks refer to systems of neurons, either organic or artificial in nature.
• The concept of neural networks, which has its roots in artificial intelligence, is swiftly gaining popularity in the development of trading systems.
• Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms.
• Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.
HISTORY:
• The simplest kind of feedforward neural network (FNN) is a
linear network, which consists of a single layer of output nodes;
the inputs are fed directly to the outputs via a series of weights.
• The sum of the products of the weights and the inputs is
calculated in each node.
• The mean squared error between these calculated outputs and given target values is minimized by adjusting the weights.
• This technique has been known for over two centuries as the method of least squares, or linear regression; a short code sketch of it follows at the end of this section.
• It was used as a means of finding a good rough linear fit to a set
of points by Legendre (1805) and Gauss (1795) for the
prediction of planetary movement.
• Though the concept of integrated machines that can think has existed for centuries, the largest strides in neural networks have been made in the past 100 years.
• In 1943, Warren McCulloch and Walter Pitts, from the University of Illinois and the University of Chicago, published "A Logical Calculus of the Ideas Immanent in Nervous Activity".
• The research analyzed how the brain could produce complex patterns and how it could be simplified down to a binary logic structure with only true/false connections.
• Frank Rosenblatt of the Cornell Aeronautical Laboratory was credited with the development of the perceptron in 1958.
• His research introduced weights to McCulloch and Pitts's work, and Rosenblatt leveraged it to demonstrate how a computer could use neural networks to detect images and make inferences.
• Wilhelm Lenz and Ernst Ising created and analyzed the Ising
model (1925) which is essentially a non-learning artificial recurrent neural
network (RNN) consisting of neuron-like threshold elements.
• In 1972, Shun'ichi Amari made this architecture adaptive.
• His learning RNN was popularised by John Hopfield in 1982.
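The single-layer linear network described at the start of this history can be reproduced in a few lines. Below is a minimal sketch (not from the original slides; the sample points and variable names are illustrative) that fits one linear node y = w*x + b to data using the closed-form least-squares solution:

#include <cstdio>
#include <vector>

int main() {
    // Sample points lying roughly on the line y = 2x + 1.
    std::vector<double> x = {0, 1, 2, 3, 4};
    std::vector<double> y = {1.1, 2.9, 5.2, 6.8, 9.1};
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = x.size();
    for (size_t i = 0; i < x.size(); i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    // Closed-form least-squares solution that minimizes the mean squared error:
    // w = (n*Sxy - Sx*Sy) / (n*Sxx - Sx*Sx),  b = (Sy - w*Sx) / n
    double w = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - w * sx) / n;
    std::printf("fit: y = %.3f * x + %.3f\n", w, b);
    return 0;
}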
PART II
ARCHITECTURE & STRUCTURE
STRUCTURE:
NEURON
PROPAGATION FUNCTION
BIAS
IMPORTANT CONCEPT TO UNDERSTAND:
MULTI-LAYERED PERCEPTRON
• In a multi-layered perceptron (MLP), perceptrons are arranged in interconnected layers. The
input layer collects input patterns. The output layer has classifications or output signals to
which input patterns may map. For instance, the patterns may comprise a list of quantities
for technical indicators about a security; potential outputs could be “buy,” “hold” or “sell.”
• Hidden layers fine-tune the input weightings until the neural network's margin of error is minimal. It is hypothesized that hidden layers extract salient features in the input data that have predictive power regarding the outputs. This describes feature extraction, which serves a purpose similar to statistical techniques such as principal component analysis.
TYPES OF NEURAL NETWORKS
ON THE BASIS OF ARCHITECTURE
FEED FORWARD NEURAL NETWORKS
• Feed-forward neural networks are one of the simpler types of neural networks.
• They convey information in one direction through the input nodes; this information continues to be processed in this single direction until it reaches the output nodes.
• Feed-forward neural networks may have hidden layers for added functionality, and this type is most often used for facial recognition technologies.
• In this network, the information moves in only one direction (forward) from the input nodes, through the hidden nodes (if any), and to the output nodes. There are no cycles or loops in the network. A short forward-pass sketch follows below.
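As a concrete illustration of one-directional flow through a multi-layered perceptron, here is a minimal sketch (assumed, not from the slides; the layer sizes, weights, and the buy/hold/sell labeling are arbitrary placeholders) of a forward pass through one hidden layer:

#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// One feed-forward layer: out[j] = tanh(sum_i in[i] * w[i][j] + b[j]).
Vec forwardLayer(const Vec& in, const Mat& w, const Vec& b) {
    Vec out(b.size());
    for (size_t j = 0; j < b.size(); j++) {
        double sum = b[j];
        for (size_t i = 0; i < in.size(); i++)
            sum += in[i] * w[i][j];
        out[j] = std::tanh(sum);
    }
    return out;
}

int main() {
    Vec input = {0.5, -0.2};                        // e.g. two technical indicators
    Mat w1 = {{0.1, 0.4, -0.3}, {0.7, -0.2, 0.5}};  // 2 inputs -> 3 hidden units
    Vec b1 = {0.0, 0.1, -0.1};
    Mat w2 = {{0.2, -0.5, 0.3}, {0.6, 0.1, -0.4}, {-0.2, 0.3, 0.5}}; // 3 hidden -> 3 outputs
    Vec b2 = {0.0, 0.0, 0.0};
    Vec hidden = forwardLayer(input, w1, b1);       // information moves forward only
    Vec output = forwardLayer(hidden, w2, b2);      // e.g. buy / hold / sell scores
    std::printf("scores: %.3f %.3f %.3f\n", output[0], output[1], output[2]);
    return 0;
}

Each layer reads only the previous layer's output, so information never flows backward, matching the no-cycles property described above.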
RECURRENT NEURAL NETWORKS
• A more complex type of neural network, recurrent neural networks take the output of a processing node and transmit the information back into the network. This results in theoretical "learning" and improvement of the network. Each node stores historical state, and this state is reused during future processing.
• This becomes especially critical for networks in which a prediction is incorrect; the system will attempt to learn why the error occurred and adjust accordingly. This type of neural network is often used in text-to-speech applications. A minimal recurrent-step sketch follows below.
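A minimal sketch of the feedback idea (illustrative weights and inputs, not from the slides): the hidden state produced at one time step is fed back in at the next step, so earlier inputs influence later outputs:

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // A one-unit recurrent cell: h_t = tanh(wx * x_t + wh * h_{t-1} + b).
    double wx = 0.8, wh = 0.5, b = 0.0;  // placeholder weights
    double h = 0.0;                      // hidden state, initially zero
    std::vector<double> sequence = {1.0, 0.5, -0.3, 0.9};
    for (double x : sequence) {
        h = std::tanh(wx * x + wh * h + b);  // output fed back into the network
        std::printf("h = %.4f\n", h);
    }
    return 0;
}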
CONVOLUTIONAL NEURAL NETWORKS
• Convolutional neural networks, also called ConvNets or CNNs, have several layers in which data is sorted into categories.
• These networks have an input layer, an output layer, and a multitude of hidden convolutional layers in between.
• The layers create feature maps that record areas of an image, which are broken down further until they generate valuable outputs.
• These layers can be pooling or fully connected layers, and these networks are especially beneficial for image recognition applications. A minimal convolution sketch follows below.
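To make the feature-map idea concrete, here is a minimal sketch (assumed, not from the slides; the image and kernel values are placeholders) of a single 3x3 kernel sliding over a small grayscale image, where each output cell is the weighted sum of the pixels under the kernel:

#include <cstdio>

int main() {
    const int H = 5, W = 5, K = 3;
    // Tiny grayscale "image" with a vertical edge, and an edge-detecting kernel.
    double img[H][W] = {
        {0, 0, 1, 1, 1}, {0, 0, 1, 1, 1}, {0, 0, 1, 1, 1},
        {0, 0, 1, 1, 1}, {0, 0, 1, 1, 1}};
    double kernel[K][K] = {{-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1}};
    // Valid convolution: the kernel slides over the image, producing a feature map.
    for (int r = 0; r + K <= H; r++) {
        for (int c = 0; c + K <= W; c++) {
            double sum = 0;
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K; j++)
                    sum += img[r + i][c + j] * kernel[i][j];
            std::printf("%6.1f ", sum);  // one cell of the feature map
        }
        std::printf("\n");
    }
    return 0;
}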
MODULAR NEURAL NETWORKS
• Modular neural networks contain several networks that work independently from one
another.
• These networks do not interact with each other during an analysis process.
• Instead, the analysis is split up this way to allow complex, elaborate computing processes to be carried out more efficiently.
• Similar to other modular industries, such as modular real estate, the goal of network independence is to have each module responsible for a particular part of an overall bigger picture, as in the sketch below.
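A minimal sketch of the modular idea (illustrative, not from the slides; the module function and all weights are placeholders): two independent sub-networks each process their own slice of the input and never interact, and only their final outputs are combined:

#include <cmath>
#include <cstdio>
#include <vector>

// A tiny stand-in for an independent module: one tanh neuron over its own input slice.
double module(const std::vector<double>& slice, const std::vector<double>& w, double bias) {
    double sum = bias;
    for (size_t i = 0; i < slice.size(); i++) sum += slice[i] * w[i];
    return std::tanh(sum);
}

int main() {
    std::vector<double> input = {0.2, -0.4, 0.9, 0.1};
    // Each module sees only its own half of the input; the modules never interact.
    double outA = module({input[0], input[1]}, {0.5, -0.3}, 0.0);
    double outB = module({input[2], input[3]}, {0.7, 0.2}, 0.1);
    // Only the independent outputs are combined into the bigger picture.
    std::printf("combined = %.4f\n", 0.5 * (outA + outB));
    return 0;
}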
PART III
CODE AND PROGRAMS RELATED TO NEURAL NETWORKS
• Neural networks can be interpreted in two ways: the "graph" way, where neurons are nodes and weights are edges, and the "matrix" way, where each layer's weights form a matrix and a forward step is a matrix-vector product.
• Below, a tanh activation derivative used during backpropagation (Scalar is assumed to be a float typedef, and tanhf comes from <cmath>):

Scalar activationFunctionDerivative(Scalar x)
{
    // Derivative of tanh: d/dx tanh(x) = 1 - tanh(x)^2
    return 1 - tanhf(x) * tanhf(x);
}
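A hedged sketch of how the derivative above is typically used: during backpropagation, the error arriving at a neuron is scaled by the slope of the activation at that neuron's pre-activation value z. The function name neuronDelta is illustrative, not part of the original program:

// Illustrative only: builds on Scalar and activationFunctionDerivative above.
Scalar neuronDelta(Scalar errorFromNextLayer, Scalar z)
{
    // Chain rule: scale the incoming error by the activation's slope at z.
    return errorFromNextLayer * activationFunctionDerivative(z);
}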
TRAINING NEURAL NETWORKS
// Fragment of a larger Eigen-based example: RowVector is an Eigen row vector,
// and propagateForward, propagateBackward, neuronLayers, and deltas are
// members of the NeuralNetwork class defined elsewhere in that program.
void NeuralNetwork::train(std::vector<RowVector*> input_data, std::vector<RowVector*> output_data)
{
    for (size_t i = 0; i < input_data.size(); i++) {
        std::cout << "Input to neural network is : " << *input_data[i] << std::endl;
        propagateForward(*input_data[i]);   // forward pass through every layer
        std::cout << "Expected output is : " << *output_data[i] << std::endl;
        std::cout << "Output produced is : " << *neuronLayers.back() << std::endl;
        propagateBackward(*output_data[i]); // backpropagate the error and update weights
        // The expression below is the root-mean-square error of the output deltas.
        std::cout << "RMSE : " << std::sqrt((*deltas.back()).dot(*deltas.back()) / deltas.back()->size()) << std::endl;
    }
}
IMPORTANT CONCEPTUAL FACT
• Data Structure Aspect: Neural networks rely on various data structures to efficiently store and manipulate
information during computation. The fundamental data structure in a neural network is the artificial neuron or
perceptron. Neurons are interconnected in layers, forming a network architecture. Each neuron typically holds a
set of weights and biases that are adjusted during the learning process. The network structure can be represented
using data structures like arrays, matrices, or graphs, depending on the specific type of neural network.
• Algorithms Aspect: Neural networks employ algorithms for training and inference. Two key algorithmic
components of neural networks are forward propagation and backpropagation.
1. Forward Propagation: In forward propagation, input data flows through the network, layer by layer, from the
input layer to the output layer. The neurons in each layer perform computations using activation functions and
the weighted inputs from the previous layer. This process continues until the output layer produces a prediction
or an output value.
2. Backpropagation: Backpropagation is an algorithm used to train neural networks by adjusting the weights and
biases of the neurons. It works by propagating the error or the difference between the predicted output and the
desired output, backward through the network. The gradients of the error with respect to the weights and biases
are computed and used to update the parameters through optimization techniques like gradient descent.
• Additionally, various optimization algorithms, such as stochastic gradient descent, Adam, or RMSprop, are used to efficiently update the weights and biases during the training process. These algorithms leverage concepts from numerical optimization and iterative methods to find the optimal values for the network parameters. A sketch combining forward propagation, backpropagation, and a gradient-descent update follows below.
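A hedged, self-contained sketch tying the three ideas together: forward propagation through a single sigmoid neuron, backpropagation of the squared-error gradient via the chain rule, and a plain gradient-descent update (all names and values are illustrative):

#include <cmath>
#include <cstdio>

int main() {
    // One sigmoid neuron trained on a single example: input x, target t.
    double x = 1.5, t = 0.0;
    double w = 0.8, b = 0.2;       // parameters to learn
    double lr = 0.5;               // learning rate for gradient descent
    for (int epoch = 0; epoch < 20; epoch++) {
        // 1. Forward propagation: weighted input, then sigmoid activation.
        double z = w * x + b;
        double y = 1.0 / (1.0 + std::exp(-z));
        double loss = 0.5 * (y - t) * (y - t);
        // 2. Backpropagation: chain rule from the loss back to w and b.
        //    dL/dy = (y - t), dy/dz = y*(1-y), dz/dw = x, dz/db = 1.
        double delta = (y - t) * y * (1.0 - y);
        double grad_w = delta * x;
        double grad_b = delta;
        // 3. Gradient-descent update of the parameters.
        w -= lr * grad_w;
        b -= lr * grad_b;
        std::printf("epoch %2d  loss %.5f\n", epoch, loss);
    }
    return 0;
}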
NEURAL NETWORKS: PROS AND CONS
Pros
• Can often work more efficiently and for longer than humans
• Can be programmed to learn from prior outcomes to strive to make smarter future
calculations
• Often leverage online services that reduce (but do not eliminate) systematic risk
• Are continually being expanded in new fields with more difficult problems
Cons
• Still rely on hardware that may require labor and expertise to maintain
• May take long periods of time to develop the code and algorithms
• May be difficult to assess errors or adaptations to the assumptions if the system is self-learning but lacks transparency
• Usually report an estimated range or estimated amount that may not materialize
REAL-LIFE APPLICATIONS
TOP 27 ARTIFICIAL NEURAL NETWORK SOFTWARE TOOLS