ANN Question Bank

The document outlines a comprehensive curriculum on Artificial Neural Networks (ANNs), covering definitions, classifications, architectures, learning laws, and applications. It includes detailed comparisons between conventional computers and ANNs, as well as discussions on various types of neurons, learning algorithms, and network structures. Additionally, it addresses advanced topics such as competitive learning networks, self-organizing networks, and the significance of generalization in neural networks.


UNIT-I

PART-A

1. Define ANN and Neural computing.


2. Distinguish between Supervised and Unsupervised Learning.
3. Draw the basic topologies for (a) Nonrecurrent and (b) Recurrent Networks and distinguish between
them.
4. Give some examples for Nonrecurrent and Recurrent ANNs. Specify the learning law used by each
ANN.
5. Define Adaptive System and Generalization.
6. Mention the characteristics of problems suitable for ANNs.
7. List some applications of ANNs.
8. What are the design parameters of ANN?
9. List the three classifications of ANNs based on their functions and explain each in brief.
10. Define Learning and Learning Law.
11. Distinguish between Learning and Training.
12. How can you measure the similarity of two patterns in the input space?
13. A two-layer network is to have four inputs and six outputs. The range of the outputs is to be continuous between 0 and 1. What can you tell about the network architecture? Specifically,
(a) How many neurons are required in each layer?
(b) What are the dimensions of the first-layer and second-layer weight matrices? (Assume the hidden layer has 5 neurons.)
(c) What kinds of transfer functions can be used in each layer?
14. Mention the linear and nonlinear activation functions used in Artificial Neural Networks.
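As a study aid for Question 14, the following minimal Python sketch (an illustration only, not part of the prescribed syllabus) shows a few commonly cited linear and nonlinear activation functions.

import numpy as np

def linear(x):
    # Linear (identity) activation
    return x

def step(x):
    # Hard-limit / threshold activation
    return np.where(x >= 0, 1.0, 0.0)

def sigmoid(x):
    # Binary sigmoid (logistic), output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):
    # Bipolar sigmoid (tanh), output in (-1, 1)
    return np.tanh(x)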

PART-B

1. Write the differences between conventional computers and ANNs.


2. Explain in detail how weights are adjusted in the different types of learning laws (both supervised and unsupervised). (See the weight-update sketch after this part.)
3. Write short notes on the following.
a. Learning Rate Parameter
b. Momentum
c. Stability
d. Convergence
e. Generalization
4. (a) Write the advantages and disadvantages of Artificial Neural Networks.
(b) What are the design steps to be followed for using ANN for your problem?
5. (a) What are the relevant computational properties of the Human Brain?
(b) Write short notes on neural approaches to computation.
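For Part-B Question 2, the sketch below illustrates two textbook weight-update rules: the unsupervised Hebbian rule and the supervised delta (Widrow-Hoff/LMS) rule. It is a minimal Python illustration; the learning rate and variable names are assumptions, not values from this question bank.

import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    # Unsupervised Hebbian rule: strengthen a weight when its input and the output co-activate
    return w + eta * y * x

def delta_update(w, x, target, eta=0.1):
    # Supervised delta (Widrow-Hoff / LMS) rule: adjust weights in proportion to the output error
    y = np.dot(w, x)          # linear output of the unit
    return w + eta * (target - y) * x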
UNIT-II
PART-A
1. Compare a physical (biological) neuron and an artificial neuron.
2. What is called weight or connection strength?
3. Draw the model of a single artificial neuron and derive its output.
4. Draw the model of the MP (McCulloch-Pitts) neuron and state its characteristics.
5. What are the two approaches to add a bias input?
6. Distinguish between linearly separable and nonlinearly separable problems. Give examples.
7. Define Perceptron convergence theorem.
8. What is XOR problem?
9. What is a perceptron? Write the differences between a Single Layer Perceptron (SLP) and a Multilayer Perceptron (MLP).
10. Define minimum disturbance principle.
11. Consider a 4-input, 1-output parity detector. The output is 1 if the number of inputs that are 1 is even; otherwise, it is 0. Is this problem linearly separable? Justify your answer.
12. What is the α-LMS algorithm?
13. Draw the ADALINE implementation for AND and OR functions.
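For Question 13, the sketch below shows ADALINE-style threshold units realizing AND and OR with hand-picked weights. In practice ADALINE learns its weights via the LMS rule; the particular weights and bias values here are assumptions used only for illustration.

import numpy as np

def threshold_unit(x, w, bias):
    # ADALINE used as a classifier: weighted sum followed by a hard limiter
    return 1 if np.dot(w, x) + bias >= 0 else 0

# Hand-picked weights and biases (assumed) for binary inputs {0, 1}
AND = lambda x: threshold_unit(x, np.array([1.0, 1.0]), -1.5)
OR  = lambda x: threshold_unit(x, np.array([1.0, 1.0]), -0.5)

for a in (0, 1):
    for b in (0, 1):
        x = np.array([a, b])
        print(a, b, "AND:", AND(x), "OR:", OR(x))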

PART-B
1. Draw the structure of a biological Neuron and explain in detail.
2. (a) Explain the three basic neurons which are used to develop complex ANNs.
(b) Write the differences between the MP neuron, the WLIC-T neuron, and the Perceptron.
3. (a) Write short notes on
i. Sigmoid Squashing Function
ii. Extensions to sigmoid
(b) Develop simple ANNs to implement the three-input AND, OR, and XOR functions using MP neurons.
4. State and prove the Perceptron Convergence theorem. (A sketch of the perceptron learning rule follows this part.)
5. (a) Draw the architecture of a single layer perceptron (SLP) and explain its operation. Mention its
advantages and disadvantages.
(b) Draw the architecture of a Multilayer perceptron (MLP) and explain its operation. Mention its
advantages and disadvantages.
6. Explain why the XOR problem cannot be solved by a single-layer perceptron and how it is solved by a Multilayer Perceptron.
7. Explain ADALINE and MADALINE. List some applications.
8. (a) Distinguish between the Perceptron learning law and the LMS learning law.
(b) Give the output of the network given below for the input [1 1 1]^T.
9. (a) Explain the logic functions performed by the following networks with MP neurons given below.
(b) Design ANNs using MP neurons to realize the following logic functions, using ±1 for the weights.
s(a1,a2,a3) =
s(a1,a2,a3) =
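Questions 4, 5 and 8(a) of this part concern the perceptron learning law. The sketch below shows one common form of the update (bipolar targets, fixed learning rate); these choices are assumptions for illustration, not the only valid formulation.

import numpy as np

def perceptron_train(X, targets, eta=1.0, epochs=20):
    # Classical perceptron learning law: weights change only on misclassified patterns
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, targets):      # t is +1 or -1
            y = 1 if np.dot(w, x) >= 0 else -1
            if y != t:
                w = w + eta * t * x       # error-driven update
    return w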
UNIT-III
PART-A
1. What is meant by mapping problem and mapping network?
2. What is a linear associative network?
3. Distinguish between nearest neighbor recall and interpolative recall.
4. Mention the desirable properties of Pattern Associator.
5. Distinguish between auto correlator and hetero correlator structures.
6. Define Hebbian Synapse.
7. List some issues that must be considered when designing a feedforward net for a specific application.
8. Draw the overall feedforward-net-based strategy (implementation and training).
9. List the role of hidden layers in a Multilayer FeedForward network.
10. What is the GDR? Write the weight-update equations for the hidden-layer and output-layer weights. (Standard forms are sketched after this part.)
11. Draw the flow chart of overall GDR procedure.
12. Draw the layered feedforward architecture.
13. Draw the feedforward architecture for an ANN-based compressor.
14. Distinguish between Pattern Mode and Batch Mode.
15. What is local minimum and global minimum?
16. Explain how the network training time and accuracy are influenced by the size of the hidden layer.
17. List out some applications of BPN.
18. What are the two types of signals identified in the BackPropagation network?
19. Why are the layers in the Bidirectional Associative Memory called the x and y layers?
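For Question 10, the commonly quoted textbook forms of the GDR (backpropagation) weight updates are reproduced below in LaTeX notation. The symbols (learning rate eta, local error terms delta, activations o, net inputs net) follow widespread usage and are assumptions rather than the notation of any particular prescribed text.

\Delta w_{kj} = \eta\, \delta_k\, o_j, \qquad \delta_k = (t_k - o_k)\, f'(\mathrm{net}_k) \quad \text{(output-layer weights)}

\Delta w_{ji} = \eta\, \delta_j\, o_i, \qquad \delta_j = f'(\mathrm{net}_j) \sum_k \delta_k\, w_{kj} \quad \text{(hidden-layer weights)}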

PART-B
1. Explain Hebbian Learning.
2. Draw the architecture of Back Propagation Network (BPN) and
explain in detail.
3. Derive the GDR for an MLFF network.
4. (a) Explain the significance of adding momentum to the training procedure.
(b) Write the algorithm of the generalized delta rule (Back Propagation algorithm).
5. Draw the architecture of Bidirectional Associative memory(BAM) and
explain in detail.
UNIT-IV
PART-A
1. What do you mean by Weight Space in Feedforward Neural Networks?
2. How can you perform search over weight space?
3. How will you determine the characteristics of a training algorithm?
4. What are the effects of error surface on training algorithms?
5. What is premature saturation in the error surface?
6. What is saddle point in the error surface?
7. What are the two types of transformations which result in symmetries in weight space? Explain in brief.
8. What is meant by generalization?
9. What are Ontogenic Neural Networks? Mention their advantages.
10. Distinguish between constructive and destructive methods for network
topology modification.
11. Write the differences between the Cascade Correlation (CC) network and a Layered Feedforward network.
12. Write the Quickprop weight-correction algorithm for the Cascade Correlation network.
13. Define residual output error.
14. Define pruning.
15. Write the applications of Cascade Correlation network.
16. How will you identify superfluous neurons in the hidden layer?
17. What do you mean by network inversion?
18. Write the differences between heteroassociative memories and interpolative associative memories.
19. Write the differences between Autoassociative and Heteroassociative memories.

PART-B
1. Explain Generalization.
2. What are the major features of Cascade Correlation Network? Draw the
architecture of a cascade correlation network and explain in detail.
3. Explain how the size of a feedforward network can be minimized.
4. Explain the stochastic optimization methods for weight determination.
5. (a)Explain the methods for network topology determination.
(b) What are the costs associated with weights, and how are they minimized?
6. Draw the architecture of Cascade Correlation Network and explain in detail.
7. Explain the method of pruning by weight decay to minimize the neural network size. (See the sketch after this part.)
8. Explain in detail how the superfluous neurons are determined and the
network is pruned.
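Questions 7 and 8 above deal with pruning. The sketch below illustrates the basic weight-decay idea (every update shrinks the weights slightly, so unimportant weights drift toward zero and can be removed). The decay factor and pruning threshold are assumed values, not prescriptions from the syllabus.

import numpy as np

def decay_and_prune(w, grad, eta=0.1, decay=1e-3, threshold=1e-2):
    # One gradient step with weight decay, followed by pruning of near-zero weights
    w = w - eta * grad - decay * w          # decay term pulls every weight toward zero
    w[np.abs(w) < threshold] = 0.0          # prune weights that have decayed away
    return w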
UNIT-V
PART-A
1. What is a competitive learning network? Give examples.
2. What is a Self-Organizing network? Give examples.
3. Define the term clustering in ANN.
4. What is the c-means algorithm? (See the sketch after this part.)
5. How will you measure the clustering similarity?
6. What is the on-centre off-surround technique?
7. Describe the features of the ART network.
8. Write the differences between ART 1 and ART 2.
9. What is meant by the stability-plasticity dilemma in the ART network?
10. What is the 2/3 rule in ART?
11. What are the two subsystems in ART network?
12. What are the applications of ART?
13. What are the two processes involved in RBF network design?
14. List some applications of RBF network.
15. What are the basic computational needs for the hardware implementation of ANNs?
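Question 4 of this part asks about the c-means algorithm. The sketch below is a minimal hard c-means (k-means) illustration; the number of clusters, iteration count and initialization are assumptions.

import numpy as np

def c_means(X, c=2, iters=10, seed=0):
    # Hard c-means: alternate between assigning each pattern to its nearest centre
    # and recomputing each centre as the mean of the patterns assigned to it
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for k in range(c):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    return centres, labels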

PART-B
1. Explain the architecture and components of Competitive Learning
Neural Network with neat diagram.
2. Explain the clustering method Learning Vector Quantization.
3. Draw the architecture of SOM and explain in detail.
4. Explain the SOM algorithm. (An update-rule sketch follows this part.)
5. Draw the architecture of ART1 network and explain in detail.
6. Explain ART1 algorithm.
7. Draw the architecture of RBF network and explain in detail.
8. Explain the Time Delay Neural Network.
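Questions 3 and 4 above concern the SOM. The sketch below shows a single step of the classical Kohonen update: find the best-matching unit, then pull it and its grid neighbours toward the input. The grid layout, learning rate and neighbourhood width are assumptions for illustration.

import numpy as np

def som_step(weights, x, eta=0.5, sigma=1.0):
    # weights: (rows, cols, dim) array of codebook vectors; x: (dim,) input pattern
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))   # best-matching unit
    for i in range(rows):
        for j in range(cols):
            d2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2        # squared grid distance to the BMU
            h = np.exp(-d2 / (2 * sigma ** 2))                # Gaussian neighbourhood function
            weights[i, j] += eta * h * (x - weights[i, j])
    return weights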
