Ann QB
PART-B
1. Draw the structure of a biological neuron and explain it in detail.
2. (a) Explain the three basic neuron models used to develop complex ANNs.
(b) Write the differences between the MP neuron, WLIC-T, and the Perceptron.
3. (a) Write short notes on
i. Sigmoid Squashing Function
ii. Extensions to sigmoid
(b) Develop simple ANNs to implement the three-input AND, OR and XOR functions using MP neurons.
4. State and prove the Perceptron Convergence theorem.
5. (a) Draw the architecture of a single-layer perceptron (SLP) and explain its operation. Mention its
advantages and disadvantages.
(b) Draw the architecture of a multilayer perceptron (MLP) and explain its operation. Mention its
advantages and disadvantages.
6. Explain why the XOR problem cannot be solved by a single-layer perceptron and how it is solved by a
multilayer perceptron.
7. Explain ADALINE and MADALINE. List some applications.
8. (a) Distinguish between Perceptron Learning law and LMS Learning law.
(b) Give the output of the network given below for the input [1 1 1]^T.
9. (a) Explain the logic functions performed by the following networks with MP neurons given below.
(b) Design ANNs using MP neurons to realize the following logic functions,
using ±1 for the weights.
s(a1,a2,a3) =
s(a1,a2,a3) =
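Question 3(b) above asks for MP-neuron realizations of three-input logic functions. A minimal Python sketch, with illustrative (assumed) unit weights and thresholds:

```python
# McCulloch-Pitts neuron: binary inputs, fixed weights, hard threshold.
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def and3(a1, a2, a3):
    # All three excitatory inputs must be active: threshold = 3.
    return mp_neuron((a1, a2, a3), (1, 1, 1), threshold=3)

def or3(a1, a2, a3):
    # Any single active input suffices: threshold = 1.
    return mp_neuron((a1, a2, a3), (1, 1, 1), threshold=1)
```

XOR is not linearly separable, so no single MP neuron realizes it; a second layer of MP neurons is needed, which is the point of PART-B question 6.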
UNIT-III
PART-A
1. What is meant by mapping problem and mapping network?
2. What is a linear associative network?
3. Distinguish between nearest neighbor recall and interpolative recall.
4. Mention the desirable properties of Pattern Associator.
5. Distinguish between auto correlator and hetero correlator structures.
6. Define Hebbian Synapse.
7. List some issues that we have to consider when designing a feedforward net
for a specific application.
8. Draw the overall feedforward-net-based strategy (implementation and training).
9. List the roles of the hidden layers in a multilayer feedforward network.
10. What is the GDR? Write the weight update equations for the hidden-layer and
output-layer weights.
11. Draw the flow chart of the overall GDR procedure.
12. Draw the layered feedforward architecture.
13. Draw the feedforward architecture for an ANN-based compressor.
14. Distinguish between Pattern Mode and Batch Mode.
15. What is local minimum and global minimum?
16. Explain how the network training time and accuracy are influenced by the size of the hidden layer.
17. List out some applications of BPN.
18. What are the two types of signals identified in the Back Propagation network?
19. Why are the layers in the Bidirectional Associative Memory called the x and y layers?
PART-B
1. Explain Hebbian Learning.
2. Draw the architecture of Back Propagation Network (BPN) and
explain in detail.
3. Derive the GDR for an MLFF network.
4. (a) Explain the significance of adding momentum to the training procedure.
(b) Write the algorithm of the generalized delta rule (Back Propagation Algorithm).
5. Draw the architecture of the Bidirectional Associative Memory (BAM) and
explain it in detail.
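PART-B questions 3 and 4 concern the GDR. A minimal sketch of one forward pass and one GDR weight update for an assumed 2-2-1 sigmoid network (the initial weights, learning rate, and training pair are illustrative, not from the question bank):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Assumed 2-2-1 network: w_h[j][i] hidden weights, w_o[j] output weights.
w_h = [[0.15, 0.20], [0.25, 0.30]]
w_o = [0.40, 0.45]
eta = 0.5                        # learning rate (assumed)
x, target = [0.05, 0.10], 0.01   # one training pair (assumed)

# Forward pass.
h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
y = sigmoid(sum(w * hj for w, hj in zip(w_o, h)))

# Output-layer delta: (t - y) times the sigmoid derivative y(1 - y).
delta_o = (target - y) * y * (1 - y)
# Hidden-layer deltas: error back-propagated through the (old) output weights.
delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]

# Weight updates: delta_w = eta * delta * input feeding that weight.
w_o = [w_o[j] + eta * delta_o * h[j] for j in range(2)]
w_h = [[w_h[j][i] + eta * delta_h[j] * x[i] for i in range(2)]
       for j in range(2)]
```

Momentum (question 4(a)) would additionally add a fraction of the previous weight change to each update, smoothing descent across the error surface.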
UNIT-IV
PART-A
1. What do you mean by Weight Space in Feedforward Neural Networks?
2. How can you perform a search over the weight space?
3. How will you determine the characteristics of a training algorithm?
4. What are the effects of error surface on training algorithms?
5. What is premature saturation in the error surface?
6. What is saddle point in the error surface?
7. What are the two types of transformations which result in symmetries in weight space? Explain in
brief.
8. What is meant by generalization?
9. What are Ontogenic Neural Networks? Mention their advantages.
10. Distinguish between constructive and destructive methods for network
topology modification.
11. Write the differences between the Cascade Correlation (CC) network and the
Layered Feedforward network.
12. Write the quickprop weight correction algorithm for Cascade
Correlation Network.
13. Define residual output error.
14. Define pruning.
15. Write the applications of Cascade Correlation network.
16. How will you identify superfluous neurons in the hidden layer?
17. What do you mean by network inversion?
18. Write the differences between heteroassociative memories and interpolative associative memories.
19. Write the differences between autoassociative and heteroassociative
memories.
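Questions 18 and 19 contrast associative memory types. A minimal sketch of an autoassociative (auto-correlator) memory using Hebbian outer-product weights over bipolar ±1 patterns (the stored pattern and noisy probe below are illustrative assumptions):

```python
def hebbian_weights(patterns):
    """Outer-product (Hebbian) weights for an autoassociative correlator."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:          # no self-connections
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, x):
    """One synchronous recall step: sign of the weighted input sums."""
    return [1 if sum(w[i][j] * x[j] for j in range(len(x))) >= 0 else -1
            for i in range(len(x))]

p = [1, -1, 1, -1]                  # stored pattern (assumed)
w = hebbian_weights([p])
```

Recalling from the noisy probe [1, 1, 1, -1] (one flipped bit) returns the stored pattern, which is the nearest-neighbour recall behaviour asked about in PART-A question 3.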
PART-B
1. Explain Generalization.
2. What are the major features of Cascade Correlation Network? Draw the
architecture of a cascade correlation network and explain in detail.
3. Explain how a feedforward network size can be minimized.
4. Explain the stochastic optimization methods for weight determination.
5. (a)Explain the methods for network topology determination.
(b) What are the costs involved in weights, and how are they minimized?
6. Draw the architecture of Cascade Correlation Network and explain in detail.
7. Explain pruning by weight decay as a method to minimize the neural network size.
8. Explain in detail how the superfluous neurons are determined and the
network is pruned.
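Question 7 above concerns pruning by weight decay. A minimal sketch of the idea (the decay constant, learning rate, gradients, and pruning threshold are illustrative assumptions): each update shrinks every weight multiplicatively, so connections not reinforced by the error gradient decay toward zero and can then be pruned.

```python
def decay_step(weights, grads, eta=0.1, lam=0.05):
    """One gradient step with weight decay: w <- w - eta*(grad + lam*w)."""
    return [w - eta * (g + lam * w) for w, g in zip(weights, grads)]

def prune(weights, threshold=0.01):
    """Zero out connections whose magnitude fell below the threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Two assumed weights: the first gets no error gradient (superfluous
# connection), the second is kept useful by a persistent gradient.
w = [0.5, 0.5]
for _ in range(1000):
    w = decay_step(w, grads=[0.0, -0.05])
w = prune(w)
```

The unused weight decays geometrically (by the factor 1 - eta*lam per step) and is pruned, while the useful weight settles near the balance point where the gradient offsets the decay.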
UNIT-V
PART-A
1. What is a competitive learning network? Give examples.
2. What is a Self-Organizing network? Give examples.
3. Define the term clustering in ANN.
4. What is the c-means algorithm?
5. How will you measure the clustering similarity?
6. What is the on-centre off-surround technique?
7. Describe the features of the ART network.
8. Write the differences between ART 1 and ART 2.
9. What is meant by the stability-plasticity dilemma in the ART network?
10. What is the 2/3 rule in ART?
11. What are the two subsystems in ART network?
12. What are the applications of ART?
13. What are the two processes involved in RBF network design?
14. List some applications of RBF network.
15. What are the basic computational needs for Hardware implementation
of ANN?
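Question 4 asks about the c-means algorithm. A minimal hard c-means sketch in one dimension (the sample points, initial centres, and iteration count are illustrative assumptions): assign each point to its nearest centre, then recompute each centre as its cluster mean, and repeat.

```python
def c_means(points, centres, iters=10):
    """Hard c-means: nearest-centre assignment, then centre recomputation."""
    for _ in range(iters):
        clusters = [[] for _ in centres]
        for p in points:
            # Assign p to the nearest centre (1-D distance = absolute gap).
            j = min(range(len(centres)), key=lambda j: abs(p - centres[j]))
            clusters[j].append(p)
        # Recompute centres; keep an empty cluster's centre unchanged.
        centres = [sum(c) / len(c) if c else centres[j]
                   for j, c in enumerate(clusters)]
    return centres

# Two well-separated assumed clusters around 1.0 and 5.0.
centres = c_means([0.9, 1.1, 1.0, 4.8, 5.2, 5.0], [0.0, 10.0])
```

Clustering similarity (question 5) is typically measured by exactly this kind of distance between a pattern and each cluster centre.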
PART-B
1. Explain the architecture and components of a Competitive Learning
Neural Network with a neat diagram.
2. Explain the clustering method Learning Vector Quantization.
3. Draw the architecture of SOM and explain in detail.
4. Explain the SOM algorithm.
5. Draw the architecture of ART1 network and explain in detail.
6. Explain ART1 algorithm.
7. Draw the architecture of RBF network and explain in detail.
8. Explain Time Delay Neural Network.
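PART-B questions 3 and 4 concern the SOM. A minimal 1-D map sketch (linear weight initialisation, a fixed learning rate, and a fixed neighbourhood radius are assumed simplifications; a full SOM would shrink the rate and radius over time):

```python
def train_som(data, n_units=5, eta=0.3, radius=1, epochs=50):
    """1-D SOM sketch: the winner and its map neighbours move toward each input."""
    dim = len(data[0])
    # Linear initialisation along the map (an assumed, deterministic choice).
    w = [[j / (n_units - 1)] * dim for j in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the weight vector closest to the input.
            bmu = min(range(n_units),
                      key=lambda j: sum((w[j][d] - x[d]) ** 2
                                        for d in range(dim)))
            # Cooperative update: BMU and neighbours within the map radius.
            for j in range(max(0, bmu - radius),
                           min(n_units, bmu + radius + 1)):
                for d in range(dim):
                    w[j][d] += eta * (x[d] - w[j][d])
    return w

# Two assumed input points at opposite corners of the unit square.
w = train_som([[0.0, 0.0], [1.0, 1.0]])
```

After training, opposite ends of the map specialise on the two inputs, illustrating the topology-preserving competitive learning the unit's questions ask about.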