An Overview of Neural Network
Received: May 8, 2019; Accepted: June 17, 2019; Published: June 29, 2019
Abstract: Neural networks represent a brain metaphor for information processing. These models are biologically inspired rather than an exact replica of how the brain actually functions. Neural networks have proven to be very promising systems in many forecasting and business classification applications because of their ability to learn from data. This article aims to provide a brief overview of artificial neural networks. An artificial neural network learns by updating its network architecture and connection weights so that the network can perform a task efficiently. It can learn either from available training patterns or automatically from examples of input-output relations. Neural network-based models continue to achieve impressive results on longstanding machine learning problems, but establishing their capacity to reason about abstract concepts has proven difficult. Building on previous efforts toward this important feature of general-purpose learning systems, recent work sets out an approach for measuring abstract reasoning in learning machines and reveals some important insights about the nature of generalization itself. Artificial neural networks can learn by example, much as humans do. An artificial neural network is configured for a specific application, such as pattern recognition, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between neurons, and this is true of artificial neural networks as well. Artificial neural networks can be applied to an increasing number of real-world problems of considerable complexity. They are used for solving problems that are too complex for conventional technologies, or problems that do not have an algorithmic solution.
Figure 1. Neuron.
is calculated by the Euclidean distance; the neuron with the least distance wins. Through the iterations, all the points are clustered, and each neuron comes to represent one kind of cluster. The Kohonen neural network is used to recognize patterns in data. Its applications can be found in medical analysis, where it clusters data into different categories; a Kohonen map was able to classify patients as having glomerular or tubular disease with high accuracy. This categorization can be described mathematically using the Euclidean distance algorithm, and an image comparing a healthy and a diseased glomerulus accompanies the original discussion [7].
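To make the winner-take-all step concrete, the following minimal sketch (Python with NumPy; the function name, the four-neuron map, and the toy 2-D point are illustrative assumptions, not details from the paper) picks the cluster for a point by Euclidean distance:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Return the index of the neuron whose weight vector is closest to x."""
    # Euclidean distance from x to each neuron's weight vector.
    distances = np.linalg.norm(weights - x, axis=1)
    # The neuron with the least distance wins the point.
    return int(np.argmin(distances))

# Toy example: a map of 4 neurons in a 2-D input space.
rng = np.random.default_rng(0)
neuron_weights = rng.random((4, 2))
point = np.array([0.3, 0.7])
print("point", point, "belongs to cluster",
      best_matching_unit(neuron_weights, point))
```

Repeating this assignment over many iterations, while nudging each winning neuron toward the points it wins, is what gradually turns each neuron into the representative of one cluster.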
2.4. Recurrent Neural Network (RNN) – Long Short-Term Memory

The recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input to help in predicting the outcome of the layer. The first layer is formed as in a feed-forward neural network, from the sum of the products of the weights and the features. Once this is computed, the recurrent process starts: from one time step to the next, each neuron remembers some of the information it held in the previous time step. This makes each neuron act like a memory cell while performing computations. During forward propagation, the network must decide what information it needs to retain for later use. If the prediction is wrong, the learning rate, or error correction, is used to make small changes during back propagation so that the network gradually works toward making the right prediction.
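As a rough illustration of this memory-cell behavior, here is a minimal sketch of a plain recurrent step in Python with NumPy; the dimensions, the tanh nonlinearity, and the variable names are assumptions for the example, and a real LSTM adds gating on top of this basic recurrence:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One time step: the new hidden state mixes the current input
    with the state remembered from the previous time step."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

rng = np.random.default_rng(1)
W_x = rng.normal(size=(3, 4))   # input-to-hidden weights
W_h = rng.normal(size=(4, 4))   # hidden-to-hidden ("memory") weights
b = np.zeros(4)

h = np.zeros(4)                          # initial hidden state
for x_t in rng.normal(size=(5, 3)):      # a sequence of 5 input vectors
    h = rnn_step(x_t, h, W_x, W_h, b)    # h carries information forward
print("final hidden state:", h)
```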
2.5. Convolutional Neural Network

Convolutional neural networks are similar to feed-forward neural networks in that the neurons have learnable weights and biases. Their applications are in signal and image processing, where they have taken over from classical computer-vision toolkits such as OpenCV. ConvNets are applied in techniques like signal processing and image classification, and computer-vision techniques are now dominated by convolutional neural networks because of their accuracy in image classification. One implemented technique is image analysis and recognition, in which agriculture and weather features are extracted from open-source satellite imagery such as Landsat to predict the future growth and yield of a particular plot of land [8].
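For readers unfamiliar with what "convolutional" means in practice, here is a minimal sketch of a 2-D convolution in Python with NumPy; the loop-based implementation and the hand-written edge kernel are illustrative assumptions (real networks learn the kernel weights and use optimized libraries):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and take
    a weighted sum at every position (the weights are what a CNN learns)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # simple vertical-edge detector
print(conv2d(image, edge_kernel))
```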
2.6. Modular Neural Network

Modular neural networks consist of a collection of different networks working independently and contributing towards the output. Each neural network has a set of inputs that is unique compared with the other networks, and each constructs and performs its own sub-task. These networks do not interact with or signal each other while accomplishing their tasks. The advantage of a modular neural network is that it breaks a large computational process down into smaller components, decreasing the complexity. This breakdown helps to decrease the number of connections and removes the interaction of the networks with each other, which in turn increases the computation speed. However, the processing time will still depend on the number of neurons and their involvement in computing the results.
3. How the Artificial Neural Network (ANN) Algorithm Works

A typical neural network has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons called units, arranged in a series of layers, each of which connects to the layers on either side. Some of them, known as input units, are designed to receive various forms of information from the outside world that the network will attempt to learn about, recognize, or otherwise process. Other units sit on the opposite side of the network and signal how it responds to the information it has learned; those are known as output units. In between the input units and output units are one or more layers of hidden units, which together form the majority of the artificial brain. Most neural networks are fully connected, which means each hidden unit and each output unit is connected to every unit in the layers on either side. The connections between one unit and another are represented by a number called a weight, which can be either positive (if one unit excites another) or negative (if one unit suppresses or inhibits another). The higher the weight, the more influence one unit has on another. (This corresponds to the way actual brain cells trigger one another across tiny gaps called synapses.) [9]

3.1. Formulation of a Neural Network: A Simple Neural Network Can Be Represented as Shown in the Figure Below

Figure 3. Input – Hidden layer – Output.

The linkages between nodes are the most crucial element of an ANN. We will get back to "how to find the weight of each linkage" after discussing the broad framework. The only known values in the above diagram are the inputs. Let's call the inputs I1, I2 and I3, the hidden states H1, H2, H3 and H4, and the outputs O1 and O2 [7]. The weights of the linkages can be denoted with the following notation: W(I1H1) is the weight of the linkage between the I1 and H1 nodes. The broad framework in which an artificial neural network works is sketched below.
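As a purely illustrative rendering of this 3-4-2 network, the sketch below (Python with NumPy) runs one forward pass; the random weights, bias terms, and sigmoid activation anticipate Section 3.2 and are assumptions for the example rather than values from the paper:

```python
import numpy as np

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

rng = np.random.default_rng(42)
# W_ih[i, h] plays the role of W(IiHh); W_ho[h, o] plays the role of W(HhOo).
W_ih = rng.normal(size=(3, 4))   # input -> hidden weights
W_ho = rng.normal(size=(4, 2))   # hidden -> output weights
b_h, b_o = np.zeros(4), np.zeros(2)

I = np.array([0.5, 0.1, 0.9])    # the only known values: inputs I1..I3
H = sigmoid(I @ W_ih + b_h)      # activation rates of H1..H4
O = sigmoid(H @ W_ho + b_o)      # outputs O1, O2
print("hidden:", H, "output:", O)
```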
3.2. A Few Statistical Details About the Framework

Every linkage calculation in an artificial neural network (ANN) is similar. In general, we assume a sigmoid relationship between the input variables and the activation rate of hidden nodes, and between the hidden nodes and the activation rate of output nodes. Let's prepare the equation to find the activation rate of H1:

Logit(H1) = W(I1H1) * I1 + W(I2H1) * I2 + W(I3H1) * I3 + Constant = f

=> P(H1) = 1 / (1 + e^(-f))

The sigmoid relationship looks like this:

Figure 5. Sigmoid function.
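Spelled out in code, the two formulas above become the following minimal sketch; the weight, input, and constant values are invented for illustration:

```python
import math

# Illustrative values; in a trained network these come from learning.
W = {("I1", "H1"): 0.8, ("I2", "H1"): -0.4, ("I3", "H1"): 0.3}
I = {"I1": 1.0, "I2": 0.5, "I3": 2.0}
constant = 0.1  # the bias term

f = sum(W[(i, "H1")] * I[i] for i in I) + constant  # Logit(H1)
p_h1 = 1.0 / (1.0 + math.exp(-f))                   # P(H1), the activation rate
print(f"Logit(H1) = {f:.3f}, P(H1) = {p_h1:.3f}")
```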
3.3. How the Weights Are Re-calibrated

Re-calibration of weights is an easy but lengthy process. The only nodes where we know the error rate are the output nodes, and re-calibration of the weights on the linkages between hidden nodes and output nodes is a function of this error rate on the output nodes. It can be statistically proved that:

Error@H1 = W(H1O1) * Error@O1 + W(H1O2) * Error@O2

Using these errors, we can re-calibrate the weights of the linkages between hidden nodes and input nodes in a similar fashion. Imagine that this calculation is done multiple times for each observation in the training set [10].
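In code, the statement above is just a weighted sum of the output-node errors; the numbers in this minimal sketch are made up for illustration:

```python
# Errors observed at the output nodes (illustrative values).
error_O1, error_O2 = 0.25, -0.10
# Weights of the linkages from H1 to the output nodes.
W_H1O1, W_H1O2 = 0.6, -0.3

# The hidden node's share of the blame is the weighted sum of
# the errors it contributed to downstream.
error_H1 = W_H1O1 * error_O1 + W_H1O2 * error_O2
print("Error@H1 =", error_H1)   # 0.6*0.25 + (-0.3)*(-0.10) = 0.18
```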
4. Backpropagation

Backpropagation is a method used in artificial neural networks to calculate the gradient that is needed to update the weights used in the network. Backpropagation is shorthand for "the backward propagation of errors," since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train deep neural networks.

Backpropagation is a generalization of the delta rule to multi-layered feed-forward networks, made possible by using the chain rule to iteratively compute gradients for each layer. It is closely related to the Gauss–Newton algorithm and is part of continuing research in neural backpropagation.

Backpropagation is a special case of a more general technique called automatic differentiation. In the context of learning, backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weights of neurons by calculating the gradient of the loss function [11].
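The weight adjustment that gradient descent performs with those gradients can be sketched as follows (illustrative values; it assumes backpropagation has already produced the gradient of the loss with respect to each weight):

```python
import numpy as np

def gradient_descent_step(weights, gradients, learning_rate=0.01):
    """Move each weight a small step against its loss gradient."""
    return weights - learning_rate * gradients

w = np.array([0.6, -0.3, 0.8])
grad = np.array([0.18, -0.05, 0.02])  # dLoss/dw from backpropagation
w = gradient_descent_step(w, grad)
print("updated weights:", w)
```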
4.1. ANN and DNN Concepts Relevant to Backpropagation

Here are several neural network concepts that are important to know before learning about backpropagation:

Inputs. Source data fed into the neural network, with the goal of making a decision or prediction about the data. The data is broken down into binary signals so that it can be processed by single neurons; for example, an image is input as individual pixels.

4.2. Training Set

A set of inputs for which the correct outputs are known, used to train the neural network.