Neurall

A neural network is made up of layers of artificial neurons that receive and process input data. The input data is passed through an input layer, hidden layers, and an output layer. In each layer, neurons are connected via weighted connections, and the output of each neuron is determined by an activation function. By processing input data in this way and adjusting the weights, neural networks can learn patterns in the data and produce useful outputs.

Uploaded by

Dania Dallah

The outline of this presentation is …

Introduction:

• It is an information processing model inspired by biological nervous systems, such as our brain. It is made of artificial neurons that receive and process input data. Data is passed through the input layer, the hidden layer, and the output layer.

• Structure: a large number of highly interconnected processing elements (neurons) working together. Like people, they learn from experience (by example).

What does it do?

A neural network process starts when input data is fed to it. The data is then processed through its layers to produce the desired output.

It looks at data and tries to figure out the function, or set of calculations, that turn the
input (variables) into the output. That output could be a number, a probability, or even
something a bit more complicated.

Neural networks are analogous to robots that can learn to make things, like a toy car, not by following step-by-step instructions from humans, but by looking at a bunch of toy cars and figuring out for themselves how to turn inputs (like metal and plastic) into outputs (the toy cars).

Have you ever wondered how your brain recognizes images? No matter what the image looks like, the brain can tell that it is an image of a cat and not a dog. The brain matches the image to the best possible pattern and concludes the result.

Example:

Consider a scenario where you have a set of labeled images, and you have to classify each image based on whether it shows a dog or a cat.
To do this, we create a neural network that recognizes images of cats and dogs. The network starts by processing the input. Each image is made of pixels. For example, the image dimensions might be 20 × 20 pixels, which makes 400 pixels. Those 400 pixels would make up the first layer of our neural network.
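As a quick sketch of this step (the 20 × 20 size is the example above; the pixel values here are made up), flattening the image gives the 400 input values:

```python
import numpy as np

# A hypothetical 20 x 20 grayscale image: one intensity value per pixel.
image = np.random.rand(20, 20)

# Flattening it yields the 400 values that form the network's first layer.
input_layer = image.flatten()
print(input_layer.shape)  # (400,)
```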

Application:

Layer Diagram

A neural network is nothing more than a bunch of neurons connected together. Here’s what a
simple neural network might look like:

This network has 2 inputs, a hidden layer with 2 neurons (h1 and h2), and an output layer
with 1 neuron (o1).

Notice that the inputs for o1 are the outputs from h1 and h2 — that’s what makes this a
network.

A hidden layer is any layer between the input (first) layer and output (last) layer. There can
be multiple hidden layers!
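The network just described can be sketched in code. This is a minimal illustration, not a trained network: the weights w = [0, 1] and biases b = 0 for all three neurons are assumed purely for demonstration, and sigmoid is used as the activation function.

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation: squashes any value into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def neuron(weights, bias, inputs):
    # One neuron: weighted sum of inputs plus bias, passed through the activation
    return sigmoid(np.dot(weights, inputs) + bias)

# Assumed parameters: every neuron uses w = [0, 1] and b = 0.
w, b = np.array([0, 1]), 0

x = np.array([2, 3])                   # the 2 inputs
h1 = neuron(w, b, x)                   # hidden neuron h1
h2 = neuron(w, b, x)                   # hidden neuron h2
o1 = neuron(w, b, np.array([h1, h2]))  # o1 takes the outputs of h1 and h2
print(round(o1, 4))  # 0.7216
```

Notice in code what the text points out: o1 never sees x directly, only the outputs of h1 and h2.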

First and Last Layers:

A layer can be one of three types. The first is the input layer: if a layer is of input type, it accepts the data and passes it to the rest of the network. The last type is the output layer, which holds the result, or output, of the problem. For example, raw images get passed to the input layer, and in the output layer we receive an output saying whether the image shows an emergency or non-emergency vehicle.

Middle Layer:

It’s pretty easy to see what the input and output nodes are, since they’re values we understand. But it can be harder to grasp exactly what the middle layers represent.
There can be one or more hidden layers; for the neural network here, the number is 2.

Hidden layers are the ones actually responsible for the excellent performance and complexity of neural networks. They perform multiple functions at the same time, such as data transformation and automatic feature creation.

You can think of all the calculations that happen between the input nodes and output nodes as
something called “feature generation”. “Feature” is just a fancy word for a variable that can
be made up of a combination of other variables. For example, we could use your grades,
attendance, and test scores to create a “Feature” called Academic Performance. Essentially
the neural network is taking the variables we give it, and performing combinations and
calculations to create new values, or “features”. Then, it combines those “features” to create
an output.
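As a toy illustration of feature generation (the variable names come from the text; the weights are made up), a new feature is just a weighted combination of existing variables:

```python
# Raw input variables for one student (hypothetical values)
grades = 85
attendance = 0.9   # fraction of classes attended
test_scores = 78

# "Academic Performance" as a weighted combination; the weights 0.4, 20,
# and 0.5 are arbitrary, chosen only to illustrate the idea.
academic_performance = 0.4 * grades + 20 * attendance + 0.5 * test_scores
print(academic_performance)  # 91.0
```

In a real network, the weights for such combinations are not hand-picked; they are learned during training.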

Math Behind NN:

In a neural network, each layer is made of neurons, and neurons pass their outputs from one layer to the next. Once the input layer receives data, it is redirected to the hidden layer. Each input is assigned a weight.

A weight is a value in a neural network that transforms input data within the network’s hidden layers. Starting at the input layer, the network takes each input value and multiplies it by its weight.

This initiates the values for the first hidden layer. The hidden layers transform the data and pass it on to the next layer. The output layer produces the desired output.

The inputs and weights are multiplied, and their sum is sent to the neurons in the hidden layer. A bias is applied to each neuron. Each neuron adds up the inputs it receives to get the sum. This value then passes through the activation function.

The activation function’s outcome decides whether a neuron is activated. An activated neuron transfers information to the following layers. In this way, the data propagates through the network until it reaches the output layer.

Feed-forward and Backpropagation

Another name for this is forward propagation. Feed-forward propagation is the process of
inputting data into an input node and getting the output through the output node.

Feed-forward propagation takes place when the hidden layer accepts the input data, processes it as per the activation function, and passes the result to the output layer. The neuron in the output layer with the highest probability then projects the result.

If the output is wrong, backpropagation takes place. While designing a neural network, weights are initialized for each input. Backpropagation means re-adjusting these weights to minimize the error, resulting in a more accurate output.

Example:

Assume we have a 2-input neuron that uses the sigmoid activation function and has the
following parameters:

w=[0, 1] is just a way of writing w1=0, w2=1 in vector form. Now, let’s give the neuron an
input of x=[2, 3]. We’ll use the dot product to write things more concisely:
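The parameter list above omits the bias; assuming b = 4 (the value that reproduces the 0.999 output quoted below), the feedforward computation works out as:

(w · x) + b = (w1 · x1) + (w2 · x2) + b = (0 · 2) + (1 · 3) + 4 = 7
y = f(7) = 1 / (1 + e^(−7)) ≈ 0.999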
The neuron outputs 0.999 given the inputs x=[2,3]. That’s it! This process of passing inputs
forward to get an output is known as feedforward.

Coding a Neuron

Time to implement a neuron! We’ll use NumPy, a popular and powerful computing library
for Python, to help us do math:
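The code the text refers to is missing here; the following is a reconstruction in the spirit of the example above (the bias b = 4 is assumed, chosen so the output matches the 0.999 quoted next):

```python
import numpy as np

def sigmoid(x):
    # Our activation function: f(x) = 1 / (1 + e^(-x))
    return 1 / (1 + np.exp(-x))

class Neuron:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def feedforward(self, inputs):
        # Weight the inputs, add the bias, then use the activation function
        total = np.dot(self.weights, inputs) + self.bias
        return sigmoid(total)

weights = np.array([0, 1])  # w1 = 0, w2 = 1
bias = 4                    # assumed value; reproduces the 0.999 result
n = Neuron(weights, bias)

x = np.array([2, 3])        # x1 = 2, x2 = 3
print(n.feedforward(x))     # ≈ 0.999
```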

Recognize those numbers? That’s the example we just did! We get the same answer of 0.999.

Neural Networks are complex systems with artificial neurons.

Artificial neurons, or perceptrons, consist of:

 Input
 Weight
 Bias
 Activation Function
 Output

The neurons receive many inputs and produce a single output.

Neural networks are made up of layers of neurons.

These layers consist of the following:

 Input layer
 Multiple hidden layers
 Output layer
The input layer receives data represented by a numeric value. Hidden
layers perform the most computations required by the network. Finally, the
output layer predicts the output.

Google Translate is just one of several applications of neural networks. Neural networks form the base of deep learning, a subfield of machine learning where the algorithms are inspired by the structure of the human brain. Neural networks take in data, train themselves to recognize the patterns in this data, and then predict the outputs for a new set of similar data. Let's understand how this is done by constructing a neural network that differentiates between a square, a circle, and a triangle.

Neural networks are made up of layers of neurons, and these neurons are the core processing units of the network. First, we have the input layer, which receives the input. The output layer predicts our final output. In between exist the hidden layers, which perform most of the computations required by our network.

Here's an image of a circle. This image is composed of 28 by 28 pixels, which make up 784 pixels. Each pixel is fed as input to each neuron of the first layer. Neurons of one layer are connected to neurons of the next layer through channels. Each of these channels is assigned a numerical value known as a weight. The inputs are multiplied by the corresponding weights, and their sum is sent as input to the neurons in the hidden layer. Each of these neurons is associated with a numerical value called the bias, which is then added to the input sum. This value is then passed through a threshold function called the activation function. The result of the activation function determines whether the particular neuron will get activated. An activated neuron transmits data to the neurons of the next layer over the channels. In this manner, the data is propagated through the network; this is called forward propagation. In the output layer, the neuron with the highest value fires and determines the output; the values are basically probabilities.
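Picking the winning output neuron can be sketched as follows (the activation values are made-up numbers for the three shape classes):

```python
# Hypothetical output-layer values, one per class
outputs = {"square": 0.6, "circle": 0.25, "triangle": 0.15}

# The neuron with the highest value "fires" and determines the prediction
predicted = max(outputs, key=outputs.get)
print(predicted)  # square
```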

For example, here the neuron associated with "square" has the highest probability, hence that's the output predicted by the neural network. Of course, just by looking at it we know our neural network has made a wrong prediction. But how does the network figure this out? Note that our network is yet to be trained. During this training process, along with the input, our network also has the output fed to it. The predicted output is compared against the actual output to realize the error in prediction. The magnitude of the error indicates how wrong we are, and the sign suggests whether our predicted values are higher or lower than expected. The arrows here give an indication of the direction and magnitude of change to reduce the error. This information is then transferred backward through our network; this is known as backpropagation. Now, based on this information, the weights are adjusted. This cycle of forward propagation and backpropagation is iteratively performed with multiple inputs. The process continues until our weights are assigned such that the network can predict the shapes correctly.
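A single backpropagation-style weight update can be sketched like this (all numbers are illustrative; a real network computes the gradient for every weight):

```python
learning_rate = 0.1   # how big a step to take
weight = 0.5          # current value of one weight
gradient = -0.2       # hypothetical d(error)/d(weight) from backpropagation

# Move the weight opposite the gradient to reduce the error
weight = weight - learning_rate * gradient
print(round(weight, 2))  # 0.52
```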
