Neural Network and Fuzzy Logic
ETEG 425
Assignment-1
Submission Deadline: 25 September, 2020
1. Compare a biological neuron and an artificial neuron. Write the advantages and disadvantages
of Artificial Neural Networks.
2. Mention the linear and nonlinear activation functions used in Artificial Neural Networks.
3. Distinguish between linearly separable and nonlinearly separable problems. Give
examples.
4. What is a perceptron? Write the differences between a Single Layer Perceptron (SLP) and
a Multilayer Perceptron (MLP).
5. Explain why the XOR problem cannot be solved by a single layer perceptron and how it is
solved by a Multilayer Perceptron.
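One hand-picked set of weights (an illustrative sketch only; the thresholds 0.5 and 1.5 are arbitrary choices, not the unique solution) shows how a two-layer arrangement with step activations realizes XOR, which no single-layer perceptron can:

```python
def step(z):
    """Hard-threshold activation: 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hidden unit 1 acts as OR, hidden unit 2 acts as AND.
    h1 = step(x1 + x2 - 0.5)   # fires when at least one input is 1
    h2 = step(x1 + x2 - 1.5)   # fires only when both inputs are 1
    # Output fires when OR holds but AND does not: exactly one input is 1.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

The hidden layer remaps the four input points so that the classes become linearly separable for the output unit.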
6. Give some examples for Nonrecurrent and Recurrent ANNs. Specify the learning law
used by each ANN.
7. Explain in detail how weights are adjusted in the different types of learning laws (both
supervised and unsupervised).
8. Write short notes on the following.
a. Learning Rate Parameter
b. Momentum
c. Stability
d. Convergence
e. Generalization
9. Consider a 4 input, 1 output parity detector. The output is 1 if the number of inputs that
are 1 is even. Otherwise, it is 0. Is this problem linearly separable? Justify your answer.
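The even-parity truth table the question describes can be enumerated directly (a sketch to support the justification, not a proof of separability one way or the other):

```python
from itertools import product

# All 16 possible 4-bit inputs with their even-parity labels:
# the output is 1 when the count of 1-inputs is even.
table = [(bits, 1 if sum(bits) % 2 == 0 else 0)
         for bits in product((0, 1), repeat=4)]

ones = sum(label for _, label in table)
print(f"{ones} of {len(table)} patterns have output 1")
```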
10. A two-layer network is to have four inputs and six outputs. The range of the outputs is to
be continuous between 0 and 1. What can you tell about the network architecture?
Specifically,
(a) How many neurons are required in each layer?
(b) What are the dimensions of the first-layer and second-layer weight matrices? (Assume
the hidden layer has 5 neurons.)
(c) What kinds of transfer functions can be used in each layer?
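A minimal sketch of the architecture in question 10, assuming the stated 4 inputs, 5 hidden neurons, and 6 outputs; a log-sigmoid output layer keeps every output strictly between 0 and 1 (the random weights and seed are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 5, 6

# Weight matrices: rows = neurons in the layer, columns = inputs to it.
W1 = rng.standard_normal((n_hidden, n_in))   # first-layer weights: 5 x 4
W2 = rng.standard_normal((n_out, n_hidden))  # second-layer weights: 6 x 5

def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.standard_normal(n_in)
y = logsig(W2 @ logsig(W1 @ x))  # every output lies strictly in (0, 1)
print(y.shape, float(y.min()), float(y.max()))
```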
11. Draw the architecture of a single layer perceptron (SLP) and explain its operation.
Mention its advantages and disadvantages.
12. Draw the architecture of a Multilayer perceptron (MLP) and explain its operation.
Mention its advantages and disadvantages.
13. Define Hebbian Synapse. Explain Hebbian Learning.
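As a reference while answering question 13, the plain Hebb rule delta_w = eta * y * x can be sketched as follows (eta, x, y are the usual learning rate and pre-/postsynaptic activities; the numerical values are purely illustrative):

```python
import numpy as np

def hebb_update(w, x, y, eta=0.1):
    """Hebbian learning: strengthen each weight in proportion to
    the correlation of pre- and postsynaptic activity."""
    return w + eta * y * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # presynaptic activity
y = 1.0                        # postsynaptic activity (assumed active here)
w = hebb_update(w, x, y)       # weights grow only where x and y are both active
print(w)
```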
14. Write the differences between Autoassociative and Heteroassociative memories. Explain
Generalization.
15. Define the term clustering in ANN. How will you measure the clustering similarity?
Neural Network and Fuzzy Logic
ETEG 425
Assignment-2
Submission Deadline: 2 October, 2020
1. Describe the features of the ART network. Write the differences between ART 1 and ART 2.
2. What is meant by the stability-plasticity dilemma in ART networks? What are the
applications of ART?
3. What are the two processes involved in RBF network design? List some applications of
RBF network.
4. What are the basic computational needs for hardware implementation of ANNs?
5. Explain the clustering method using Learning Vector Quantization.
6. Draw the architecture of SOM and explain in detail. Explain the SOM algorithm.
7. Distinguish between autocorrelator and heterocorrelator structures.
8. Explain the role of hidden layers in a Multilayer Feedforward network.
9. What is the GDR (Generalized Delta Rule)? Write the weight update equations for the
hidden-layer and output-layer weights.
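For orientation while writing the GDR update equations, a single numerical gradient-descent step for a one-hidden-layer sigmoid network can be sketched as below (the shapes, seed, and learning rate are arbitrary illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.standard_normal(3)        # input pattern
t = np.array([1.0, 0.0])          # target output
V = rng.standard_normal((4, 3))   # hidden-layer weights
W = rng.standard_normal((2, 4))   # output-layer weights
eta = 0.1                         # learning rate

def forward(V, W, x):
    h = sigmoid(V @ x)
    return h, sigmoid(W @ h)

h, y = forward(V, W, x)
# Output-layer deltas: (t - y) * f'(net), where f'(net) = y(1 - y) for a sigmoid.
delta_out = (t - y) * y * (1 - y)
# Hidden-layer deltas: back-propagated error times the local derivative.
delta_hid = (W.T @ delta_out) * h * (1 - h)
# GDR updates: Delta_w = eta * delta * (input to that layer).
W = W + eta * np.outer(delta_out, h)
V = V + eta * np.outer(delta_hid, x)

_, y_new = forward(V, W, x)
print(float(np.sum((t - y) ** 2)), "->", float(np.sum((t - y_new) ** 2)))
```

One such step moves the weights down the gradient of the squared error, so the error on this pattern shrinks.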
10. Draw the architecture of Back Propagation Network (BPN) and explain in detail. List out
some applications of BPN.
11. What is a competitive learning network? Give examples. What is a Self-Organizing
network? Give examples.
12. Distinguish between Supervised and Unsupervised Learning.
13. Draw the basic topologies for (a) Nonrecurrent and (b) Recurrent Networks and
distinguish between them.
14. Give some examples for Nonrecurrent and Recurrent ANNs. Specify the learning law
used by each ANN.
Neural Network and Fuzzy Logic
ETEG 425
Assignment-3
Submission Deadline: 9 October, 2020
1. Define Adaptive System and Generalization.
2. Mention the characteristics of problems suitable for ANNs.
3. What are the design parameters of ANN? List some applications of ANNs.
4. Distinguish between Learning and Training.
5. Draw the architecture of RBF network and explain in detail. List some applications of
RBF network.
6. Compare Kohonen Self Organizing Map and Learning Vector Quantization in brief.
7. What is the basic concept behind Adaptive Resonance Theory (ART)? What is
plasticity with reference to neural networks?
8. A Kohonen self-organizing map is shown with weights in the Fig. below.
i) Find the cluster unit Cj that is closest to the input vector (0.3, 0.4) using the square
of the Euclidean distance.
ii) Using a learning rate of 0.3, find the new weights for unit Cj.
iii) Find the new weights for Cj-1 and Cj+1, if they are allowed to learn.
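Since the weight figure is not reproduced here, the computation itself can be sketched with hypothetical weights; substitute the figure's actual values when solving the problem:

```python
import numpy as np

x = np.array([0.3, 0.4])   # input vector from the question
eta = 0.3                  # learning rate from the question

# Hypothetical cluster-unit weight vectors (NOT the figure's values).
weights = np.array([[0.2, 0.6],
                    [0.7, 0.1],
                    [0.8, 0.9]])

# i) Winner = unit with the smallest squared Euclidean distance.
d2 = ((weights - x) ** 2).sum(axis=1)
j = int(np.argmin(d2))

# ii) Kohonen update for the winner: w_new = w_old + eta * (x - w_old).
weights[j] = weights[j] + eta * (x - weights[j])
print("winner:", j, "new weights:", weights[j])
```

Part iii) applies the same update formula to the winner's neighbors Cj-1 and Cj+1 when the neighborhood is allowed to learn.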