

AALIM MUHAMMED SALEGH COLLEGE OF ENGINEERING


DEPARTMENT OF INFORMATION TECHNOLOGY
CCS355–Neural networks and Deep learning
YEAR/SEM:III/VI
Date: 18/03/2024
DPT-4 Answer Key
PART-A (2*2=4 Marks)
1. Define Spiking Neural Network.
The Spiking Neural Network (SNN) is the third generation of neural network models, built with
specialized network topologies that redefine the entire computational process.
2. Name the various layers present in a neural network.
Neural networks are a subset of machine learning, and they are at the heart of deep learning algorithms.
They are composed of node layers: an input layer, one or more hidden layers, and an output
layer.
PART-B (1*12=12 Marks)
3. Explain in detail about Convolutional Neural Network along with its types.
Convolutional neural networks are distinguished from other neural networks by their superior
performance with image, speech, or audio signal inputs. They have three main types of layers, which are:
 Convolutional layer
 Pooling layer
 Fully-connected (FC) layer
The convolutional layer is the first layer of a convolutional network. While convolutional layers can be
followed by additional convolutional layers or pooling layers, the fully-connected layer is the final layer.
With each layer, the CNN increases in its complexity, identifying greater portions of the image. Earlier layers
focus on simple features, such as colors and edges. As the image data progresses through the layers of
the CNN, it starts to recognize larger elements or shapes of the object until it finally identifies the intended
object.
Convolutional layer
The convolutional layer is the core building block of a CNN, and it is where the majority of
computation occurs. It requires a few components, which are input data, a filter, and a feature map. Let’s
assume that the input will be a color image, which is made up of a matrix of pixels in 3D. This means that the
input will have three dimensions—a height, width, and depth—which correspond to RGB in an image. We
also have a feature detector, also known as a kernel or a filter, which will move across the receptive fields of
the image, checking if the feature is present. This process is known as a convolution.
The feature detector is a two-dimensional (2-D) array of weights, which represents part of the image. While
they can vary in size, the filter size is typically a 3x3 matrix; this also determines the size of the receptive
field. The filter is then applied to an area of the image, and a dot product is calculated between the input
pixels and the filter. This dot product is then fed into an output array. Afterwards, the filter shifts by a stride,
repeating the process until the kernel has swept across the entire image. The final output from the series of
dot products from the input and the filter is known as a feature map, activation map, or a convolved feature.
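The sliding-filter process described above can be sketched in NumPy. This is a minimal, illustrative single-channel convolution with no padding; the `convolve2d` function, the 5x5 input, and the vertical-edge filter are all hypothetical examples, not part of the original answer, and deep learning libraries implement this far more efficiently.

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide a 2-D filter across the image, taking a dot product at each
    receptive field to populate the output feature map (no padding)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            feature_map[i, j] = np.sum(patch * kernel)  # dot product
    return feature_map

# Hypothetical 5x5 grayscale image and a 3x3 vertical-edge filter
image = np.arange(25, dtype=float).reshape(5, 5)
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])
print(convolve2d(image, edge_filter).shape)  # (3, 3)
```

Note that the same 3x3 weights are reused at every position, which is exactly the parameter sharing mentioned in the text.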
Note that the weights in the feature detector remain fixed as it moves across the image, which is also known as
parameter sharing. Some parameters, like the weight values, adjust during training through the process of
backpropagation and gradient descent. However, there are three hyperparameters which affect the volume size
of the output that need to be set before the training of the neural network begins.
These include:
1. The number of filters affects the depth of the output. For example, three distinct filters would yield
three different feature maps, creating a depth of three.
2. Stride is the distance, or number of pixels, that the kernel moves over the input matrix. While stride
values of two or greater are rare, a larger stride yields a smaller output.
3. Zero-padding is usually used when the filters do not fit the input image. This sets all elements that fall
outside of the input matrix to zero, producing a larger or equally sized output. There are three types of
padding.

 Valid padding: This is also known as no padding. In this case, the last convolution is
dropped if dimensions do not align.
 Same padding: This padding ensures that the output layer has the same size as the input layer.
 Full padding: This type of padding increases the size of the output by adding zeros
to the border of the input.
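The effect of these three hyperparameters on output size follows the standard formula O = (W − F + 2P) / S + 1, where W is the input size, F the filter size, P the padding, and S the stride. A small sketch (the 32x32 input and 3x3 filter are illustrative assumptions):

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Spatial output size of a convolution: (W - F + 2P) / S + 1."""
    return (input_size - filter_size + 2 * padding) // stride + 1

# 32x32 input with a 3x3 filter:
print(conv_output_size(32, 3, stride=1, padding=0))  # valid padding -> 30
print(conv_output_size(32, 3, stride=1, padding=1))  # same padding  -> 32
print(conv_output_size(32, 3, stride=1, padding=2))  # full padding  -> 34
print(conv_output_size(32, 3, stride=2, padding=0))  # larger stride -> 15
```

As the text notes, a larger stride shrinks the output, while zero-padding produces an equally sized or larger output.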
After each convolution operation, a CNN applies a Rectified Linear Unit (ReLU)
transformation to the feature map, introducing nonlinearity to the model.
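The ReLU transformation is simply an elementwise max(0, x); a minimal sketch on a hypothetical 2x2 feature map:

```python
import numpy as np

def relu(feature_map):
    """Elementwise max(0, x): negative activations become zero,
    introducing nonlinearity after the convolution."""
    return np.maximum(0, feature_map)

fm = np.array([[-2.0, 1.5],
               [0.0, -0.5]])
print(relu(fm))  # negatives zeroed: [[0., 1.5], [0., 0.]]
```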

Additional convolutional layer


As we mentioned earlier, another convolution layer can follow the initial convolution layer. When
this happens, the structure of the CNN can become hierarchical as the later layers can see the pixels within
the receptive fields of prior layers. As an example, let’s assume that we’re trying to determine if an image
contains a bicycle. You can think of the bicycle as a sum of parts. It is comprised of a frame, handlebars,
wheels, pedals, et cetera. Each individual part of the bicycle makes up a lower-level pattern in the neural net,
and the combination of its parts represents a higher-level pattern, creating a feature hierarchy within the
CNN. Ultimately, the convolutional layer converts the image into numerical values, allowing the neural
network to interpret and extract relevant patterns.

Pooling layer

Pooling layers, also known as downsampling, conduct dimensionality reduction, reducing the
number of parameters in the input. Similar to the convolutional layer, the pooling operation sweeps a filter
across the entire input, but the difference is that this filter does not have any weights. Instead, the kernel
applies an aggregation function to the values within the receptive field, populating the output array. There
are two main types of pooling:
 Max pooling: As the filter moves across the input, it selects the pixel with the maximum
value to send to the output array. As an aside, this approach tends to be used more often
compared to average pooling.
 Average pooling: As the filter moves across the input, it calculates the average value
within the receptive field to send to the output array.
While a lot of information is lost in the pooling layer, it also offers a number of benefits to the CNN:
pooling layers help to reduce complexity, improve efficiency, and limit the risk of overfitting.
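Both pooling types described above can be sketched with a weightless window that aggregates each receptive field. The `pool2d` function and the 4x4 input below are illustrative assumptions, not part of the original answer.

```python
import numpy as np

def pool2d(feature_map, size=2, stride=2, mode="max"):
    """Sweep a weightless window over the input and aggregate each
    receptive field with max or mean, shrinking the spatial dimensions."""
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    agg = np.max if mode == "max" else np.mean
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = agg(feature_map[i * stride:i * stride + size,
                                        j * stride:j * stride + size])
    return out

fm = np.array([[1., 3., 2., 4.],
               [5., 6., 7., 8.],
               [3., 2., 1., 0.],
               [1., 2., 3., 4.]])
print(pool2d(fm, mode="max"))      # [[6. 8.] [3. 4.]]
print(pool2d(fm, mode="average"))  # [[3.75 5.25] [2.   2.  ]]
```

A 2x2 window with stride 2 halves each spatial dimension, which is where the reduction in parameters and computation comes from.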

Fully-connected layer
The name of the fully-connected layer aptly describes itself. As mentioned earlier, the pixel
values of the input image are not directly connected to the output layer in partially connected
layers. However, in the fully-connected layer, each node in the output layer connects directly to a
node in the previous layer.
This layer performs the task of classification based on the features extracted through the previous
layers and their different filters. While convolutional and pooling layers tend to use ReLU
functions, FC layers usually leverage a softmax activation function to classify inputs appropriately,
producing a probability from 0 to 1.
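The fully-connected layer with a softmax output can be sketched as a matrix multiplication followed by exponentiation and normalization. The feature vector size (8) and the three classes below are hypothetical assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Turn raw fully-connected outputs into class probabilities in [0, 1]."""
    shifted = logits - np.max(logits)  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / np.sum(exp)

def fully_connected(features, weights, bias):
    """Every output node connects to every input feature: y = Wx + b."""
    return weights @ features + bias

rng = np.random.default_rng(0)
features = rng.standard_normal(8)      # flattened features from prior layers
weights = rng.standard_normal((3, 8))  # 3 hypothetical output classes
bias = np.zeros(3)
probs = softmax(fully_connected(features, weights, bias))
print(probs)  # three class probabilities that sum to 1
```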
