NNDL Assignment Ans

The document contains questions and answers related to neural networks and machine learning algorithms. It discusses associative memory networks, Hopfield networks, advantages of artificial neural networks, multi-layer perceptrons, unsupervised learning, Hamming networks, hetero-associative memory networks, and learning vector quantization. Specific examples are provided to explain concepts such as associating patterns in memory, training a Hamming network, implementing a hetero-associative memory, and classifying data with learning vector quantization.


Assignment Questions of NNDL

1.(a) Explain about the associative memory network.
(b) What is a Hopfield network? Explain with an example.
2.(a) Discuss the advantages of artificial neural networks.
(b) Explain the multi-layer perceptron model.
3.(a) What is unsupervised learning? Explain.
(b) Explain the Hamming network with an example problem.
4.(a) Explain the hetero-associative memory network with an example problem.
(b) Explain the Learning Vector Quantization method with an example.
1.(a) Explain about the associative memory network.
Ans :

An associative memory network is a content-addressable memory structure that stores a relationship between a set of input patterns and output patterns. A content-addressable memory is a kind of memory structure that enables the recollection of data based on the degree of similarity between the input pattern and the patterns stored in memory.

Because recall is driven by similarity rather than by an exact address, this type of memory is robust and fault-tolerant: it provides a form of error correction, recovering the correct stored pattern even from a noisy or partial input.

There are two types of associative memory: auto-associative memory and hetero-associative memory.
Auto-associative memory:

An auto-associative memory recovers the previously stored pattern that most closely resembles the current input pattern. It is also known as an auto-associative correlator.

Let x[1], x[2], x[3], …, x[M] be the stored pattern vectors, and let x[m] be one of them, encoding the characteristics obtained from the patterns. When presented with a noisy or incomplete version of x[m], the auto-associative memory returns the stored pattern vector x[m].
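The recall process described above can be sketched as a linear auto-associative correlator: store bipolar patterns with a Hebbian (outer-product) rule and recover a pattern from a corrupted input by thresholding. The patterns below are invented for illustration.

```python
import numpy as np

# Hypothetical stored bipolar patterns (values in {-1, +1}).
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1],
])

# Hebbian (outer-product) weight matrix with zero diagonal.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(x):
    """One synchronous update: threshold W x back to a bipolar vector."""
    return np.where(W @ x >= 0, 1, -1)

# A noisy version of the first pattern (first bit flipped).
noisy = np.array([-1, 1, 1, -1, -1, -1])
print(np.array_equal(recall(noisy), patterns[0]))   # True: pattern restored
```

Here a single synchronous update suffices because the corruption is mild and the stored patterns are nearly orthogonal; in general, recall may need several update steps.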

Hetero-associative memory:

In a hetero-associative memory, the recovered pattern is generally different from the input pattern, not only in type and format but also in content. It is also known as a hetero-associative correlator.
1.(b) What is a Hopfield network? Explain with an example.

Ans :
A Hopfield network is a recurrent auto-associative memory proposed by John Hopfield in 1982. It consists of a single layer of fully interconnected neurons with symmetric weights (wij = wji) and no self-connections (wii = 0). Patterns are stored using the Hebbian (outer-product) rule, and recall proceeds by repeatedly updating units until the network state settles. Every update lowers (or leaves unchanged) an energy function E = -(1/2) Σi Σj wij xi xj, so each stored pattern acts as an attractor: starting from a noisy or incomplete input, the network converges to the nearest stored pattern.
Example: if the bipolar pattern (1, -1, 1, -1) is stored and the network is started from the corrupted state (1, 1, 1, -1), the updates flip the incorrect second bit back and the original pattern is recovered.
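A minimal sketch of Hopfield recall, assuming bipolar units, Hebbian outer-product weights with a zero diagonal, and deterministic asynchronous updates; the stored pattern below is invented for illustration.

```python
import numpy as np

# Hypothetical stored bipolar pattern (values in {-1, +1}).
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian weights: symmetric, with no self-connections (zero diagonal).
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)

def energy(x):
    """Hopfield energy E = -1/2 * x^T W x (bias terms omitted)."""
    return -0.5 * x @ W @ x

def recall(x, passes=5):
    """Asynchronous recall: update one unit at a time until stable."""
    x = x.copy()
    for _ in range(passes):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

noisy = p.copy()
noisy[:2] *= -1                              # corrupt the first two bits
restored = recall(noisy)
print(np.array_equal(restored, p))           # True: stored pattern recovered
print(energy(restored) < energy(noisy))      # True: recall lowered the energy
```

The energy check illustrates the defining property of Hopfield dynamics: each asynchronous update can only decrease the energy, which is why the network settles into stable attractor states.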
2.(a) Discuss the advantages of artificial neural networks.
Ans :

 Ability to learn complex relationships: ANNs can learn complex relationships between inputs and outputs, even when those relationships are non-linear or noisy. This makes them well-suited for a wide range of tasks, including image recognition, natural language processing, and financial forecasting.
 Robustness to noise and incomplete data: ANNs are robust to noise and
incomplete data, meaning that they can still produce accurate predictions
even when the input data is not perfect. This is important for many real-
world applications, where the data is often noisy or incomplete.
 Ability to generalize to new data: ANNs can generalize from the data
they have seen during training to new data that they have not seen before.
This means that they can be used to make predictions on real-world data
without the need to retrain the network.
 Scalability: ANNs can be scaled to handle large datasets and complex
problems. This makes them well-suited for use in large-scale applications,
such as image search and machine translation.
 Adaptability: ANNs can adapt and learn from data. They can
continuously improve their performance as they are exposed to more data,
making them suitable for tasks with changing or evolving patterns.
 Automation: Once trained, ANNs can automate tasks, reducing the need
for manual intervention. This automation can lead to efficiency
improvements and cost savings in various processes.
 Pattern Recognition: ANNs excel at pattern recognition tasks, such as
image and speech recognition. They can identify complex patterns in data
that may be challenging for humans to discern.

2.(b) Explain the multi-layer perceptron model.

Ans :
A perceptron is an artificial neuron, or neural network unit, that performs a simple computation on its input data; it is the basic building block of an ANN.
Multi-Layered Perceptron Model:

Like the single-layer perceptron model, a multi-layer perceptron model has the same basic structure but contains one or more hidden layers.

The multi-layer perceptron is typically trained with the backpropagation algorithm, which executes in two stages:

Forward Stage: Activations propagate from the input layer, through the hidden layers, and terminate at the output layer.

Backward Stage: The error between the actual output and the desired output is propagated backward from the output layer toward the input layer, and the weight and bias values are modified to reduce it.

Hence, a multi-layer perceptron can be regarded as a stack of artificial neural network layers whose activation functions need not be linear, unlike the single-layer perceptron. Instead of a linear function, non-linear activation functions such as sigmoid, TanH, or ReLU are typically used.

A multi-layer perceptron has greater processing power and can handle both linearly and non-linearly separable patterns. It can also implement logic gates such as AND, OR, XOR, NAND, NOT, XNOR, and NOR.
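The XOR claim is worth demonstrating, since a single perceptron cannot compute XOR but a two-layer network with step activations can. The weights below are hand-picked illustrative values, not learned parameters.

```python
import numpy as np

def step(z):
    """Binary threshold activation: 1 if z > 0, else 0."""
    return (z > 0).astype(int)

# Hand-picked (not learned) weights for a 2-2-1 network computing XOR.
W1 = np.array([[1.0, 1.0],    # hidden unit 1 fires when x1 + x2 > 0.5 (OR)
               [1.0, 1.0]])   # hidden unit 2 fires when x1 + x2 > 1.5 (AND)
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])    # output fires for OR-and-not-AND, i.e. XOR
b2 = -0.5

def forward(x):
    h = step(W1 @ x + b1)         # forward stage through the hidden layer
    return int(step(W2 @ h + b2))

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, forward(np.array(x)))   # XOR truth table: 0, 1, 1, 0
```

The hidden layer turns the input into an (OR, AND) representation, in which XOR becomes linearly separable for the output unit.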

Advantages of Multi-Layer Perceptron:

A multi-layer perceptron can be used to solve complex non-linear problems.

It works well with both small and large input data.

Disadvantages of Multi-Layer Perceptron:

In a multi-layer perceptron, computations are difficult and time-consuming.

In a multi-layer perceptron, it is difficult to determine how much each independent variable affects the dependent variable.

The model's performance depends on the quality of the training.

Perceptron Function

The perceptron output f(x) is obtained by multiplying the input vector x with the learned weight vector w, adding the bias b, and thresholding.

Mathematically, we can express it as follows:

f(x) = 1 if w · x + b > 0

otherwise, f(x) = 0

where w is the real-valued weight vector, b is the bias, and x is the vector of input values.
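The perceptron function above translates directly into code. The AND-gate weights below are assumed for illustration only.

```python
import numpy as np

def perceptron(x, w, b):
    """f(x) = 1 if w . x + b > 0, otherwise 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative weights that make the perceptron compute logical AND.
w = np.array([1.0, 1.0])
b = -1.5
print(perceptron(np.array([1, 1]), w, b))   # 1
print(perceptron(np.array([1, 0]), w, b))   # 0
```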


3.(a) What is unsupervised learning? Explain.
Ans :

Unsupervised learning is a machine learning paradigm in which the model is trained on unlabeled data: no target output is provided, and the algorithm must discover structure in the inputs on its own.
Its main characteristics are:
1) No target output: the training data consists of inputs only, without labels.
2) Data exploration and pattern discovery: the algorithm organizes the data according to similarities and regularities it finds.
3) Common unsupervised learning tasks: clustering, dimensionality reduction, and association-rule mining.
4) Use cases: customer segmentation, anomaly detection, recommendation systems, and data visualization.
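Clustering, one of the most common unsupervised learning tasks, can be sketched with a minimal k-means loop; the two synthetic blobs and all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two synthetic, well-separated blobs of unlabeled points.
data = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
                  rng.normal(5.0, 0.5, (20, 2))])

def kmeans(X, iters=10):
    """Plain two-cluster k-means: alternate assignment and centroid updates."""
    # Farthest-point initialization: the first point, then the point farthest from it.
    c0 = X[0]
    centroids = np.stack([c0, X[np.argmax(np.linalg.norm(X - c0, axis=1))]])
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([X[labels == i].mean(axis=0) for i in range(2)])
    return centroids, labels

centroids, labels = kmeans(data)
print(centroids)   # one centroid near (0, 0), the other near (5, 5)
```

Note that the algorithm never sees a label: the grouping emerges purely from distances between the points, which is exactly the "pattern discovery" characteristic listed above.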
3.(b) Explain the Hamming network with an example problem.
Ans :
Hamming Network:
In most neural networks that use unsupervised learning, it is essential to compute distances and perform comparisons. The Hamming network is such a network: each given input vector is assigned to the cluster of the stored exemplar it most closely matches.
Following are some important features of Hamming Networks −
 Lippmann started working on Hamming networks in 1987.
 It is a single-layer network.
 The inputs can be either binary {0, 1} or bipolar {-1, 1}.
 The weights of the net are calculated from the exemplar vectors.
 It is a fixed-weight network, which means the weights remain the same even during training.

 A Hamming network is a type of artificial neural network (ANN) used primarily for pattern recognition and error-correction tasks. It is a single-layer feedforward network with binary threshold neurons and a weight matrix that encodes the stored exemplar patterns.
 "Training" a Hamming network simply means setting the weights from the exemplar patterns; consistent with its fixed-weight nature, no iterative weight adjustment is performed. The Hamming distance between two patterns is the number of bits in which they differ.
 Once the exemplars are stored, the network can classify new patterns: when a new pattern is presented, the network outputs the stored pattern that is closest to the input in terms of Hamming distance.
Example :
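As a worked illustration, the following sketch (with invented bipolar exemplars) sets the fixed weights W = E/2 and bias n/2 so that each output unit counts the bits its exemplar shares with the input; the winner is then the exemplar at minimum Hamming distance.

```python
import numpy as np

# Hypothetical bipolar exemplar (stored) vectors, one per class.
exemplars = np.array([
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [ 1, -1,  1, -1],
])
n = exemplars.shape[1]

# Fixed weights W = E/2 and bias n/2: each output unit then computes the
# number of bits its exemplar shares with the input (n minus Hamming distance).
W = exemplars / 2.0
b = n / 2.0

def classify(x):
    matches = W @ x + b              # matching-bit count for each exemplar
    return int(np.argmax(matches))   # winner-take-all (stand-in for the Maxnet layer)

x = np.array([1, 1, -1, 1])          # exemplar 0 with its last bit flipped
print(classify(x))                   # 0: still closest to exemplar 0
```

In the classic formulation the argmax is implemented by a recurrent Maxnet layer that suppresses all but the strongest unit; `np.argmax` plays that role here for brevity.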
4.(a) Explain the hetero-associative memory network with an example problem.
Ans :
Hetero-associative memory:

In a hetero-associative memory, the recovered pattern is generally different from the input pattern, not only in type and format but also in content. It is also known as a hetero-associative correlator.
Consider a number of key-response pairs {a(1), x(1)}, {a(2), x(2)}, …, {a(M), x(M)}. The hetero-associative memory returns the pattern vector x(m) when a noisy or incomplete version of a(m) is presented.

Neural networks are usually used to implement these associative memory models, which are then called neural associative memories (NAM). The linear associator is the simplest artificial neural associative memory. These models use distinct neural network architectures to memorize the data.

The weight matrix for the k-th pattern pair is given by the outer product

(wij)k = (pi)k (qj)k

where (pi)k is the ith component of pattern pk, (qj)k is the jth component of pattern qk, i = 1, 2, …, m, and j = 1, 2, …, n.

The association weight matrix W is constructed by adding the individual correlation matrices wk:

W = α Σk wk

where α is a proportionality (normalizing) constant.

Example: a worked example is available at
https://drive.google.com/file/d/14PIzMjRJooyF64S5KqbuE4Zvzkahx5EV/view?usp=sharing
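The outer-product construction above can also be sketched directly in code. The key/response pairs below are invented, and α = 1 is assumed for simplicity.

```python
import numpy as np

# Hypothetical key/response pairs: 4-bit bipolar keys map to 2-bit responses.
keys = np.array([[ 1, -1,  1, -1],
                 [-1,  1,  1, -1]])
responses = np.array([[ 1, -1],
                      [-1,  1]])

# W = alpha * sum_k outer(p_k, q_k); alpha = 1 here for simplicity.
W = sum(np.outer(p, q) for p, q in zip(keys, responses))

def recall(x):
    """Threshold x W to get the bipolar response associated with key x."""
    return np.where(x @ W >= 0, 1, -1)

print(np.array_equal(recall(keys[0]), responses[0]))   # True
print(np.array_equal(recall(keys[1]), responses[1]))   # True
```

The two keys above are orthogonal, so each key recalls its response with no cross-talk; with correlated keys the recalled responses would be noisier.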

4.(b) Explain the Learning Vector Quantization method with an example.

Ans :
Learning Vector Quantization (LVQ) is a type of machine learning
algorithm used for supervised classification. LVQ combines elements
of both supervised learning and unsupervised learning, as it is trained
with labeled data but employs a competitive learning process inspired
by unsupervised vector quantization. It works by training a set of
vectors, called codebook vectors, to represent the different classes of
data. Once the codebook vectors are trained, new data points are
classified by assigning them to the class of the codebook vector that is
closest to them. LVQ is similar to the K-Means clustering algorithm, but
it is specifically designed for classification.
Example :
 Here is an example of how LVQ can be used for classification:
Suppose we have a dataset of images of cats and dogs. We want to train an
LVQ model to classify new images into either the cat or dog class.
First, we need to choose a number of codebook vectors. Let's say we
choose 10 codebook vectors.
Next, we need to initialize the codebook vectors. We can do this by
randomly selecting 10 data points from the training dataset.
Now, we can start training the LVQ model. We iterate over the training
dataset and do the following for each data point:
1. Find the codebook vector that is closest to the data point.
2. If the codebook vector is of the same class as the data point, then we
move the codebook vector closer to the data point.
3. If the codebook vector is of a different class than the data point, then
we move the codebook vector further away from the data point.
We repeat this process for all of the data points in the training dataset.
Once the LVQ model is trained, we can use it to classify new images. To
classify a new image, we simply find the codebook vector that is closest to
the new image and assign the new image to the class of the codebook
vector.
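The three training steps above (find the nearest codebook vector, pull it closer for a matching class, push it away otherwise) correspond to the LVQ1 update rule and can be sketched as follows; the two-class synthetic data stands in for the cat/dog image features, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny synthetic two-class dataset (a stand-in for cat/dog image features).
X = np.vstack([rng.normal(0.0, 0.4, (30, 2)),    # class 0 ("cat")
               rng.normal(3.0, 0.4, (30, 2))])   # class 1 ("dog")
y = np.array([0] * 30 + [1] * 30)

# Initialize one codebook vector per class from the training data.
codebooks = np.stack([X[0].copy(), X[30].copy()])
codebook_labels = np.array([0, 1])

def train_lvq(X, y, cb, cb_labels, lr=0.1, epochs=20):
    """LVQ1: pull the winning codebook toward same-class points, push it away otherwise."""
    cb = cb.copy()
    for _ in range(epochs):
        for x, t in zip(X, y):
            w = np.argmin(np.linalg.norm(cb - x, axis=1))   # nearest codebook wins
            if cb_labels[w] == t:
                cb[w] += lr * (x - cb[w])    # same class: move closer
            else:
                cb[w] -= lr * (x - cb[w])    # different class: move away
    return cb

codebooks = train_lvq(X, y, codebooks, codebook_labels)

def classify(x):
    return int(codebook_labels[np.argmin(np.linalg.norm(codebooks - x, axis=1))])

print(classify(np.array([0.1, -0.2])))   # 0: nearest codebook is the class-0 one
print(classify(np.array([2.9, 3.1])))    # 1
```

Only one codebook vector per class is used here for brevity; with more codebook vectors per class (as in the 10-vector example above), LVQ can capture more complex class boundaries.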
