Important Questions Soft Computing (1)
MST 1
Section B
MST 2
3. Identify a few activation functions which are used in single and multilayer networks to calculate the output.
4. Discuss the advantages of an autoencoder over principal component analysis for dimensionality reduction.
6. Show the graphical representation of the sigmoid function, differentiate the sigmoid function, and comment on the result.
7. A heteroassociative network is trained by Hebb's outer product rule for the given input vectors (see the sketch below).
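The input/target pairs for question 7 are not reproduced in this set, so the following is a minimal sketch of Hebb's outer product rule for a heteroassociative net, using hypothetical bipolar training pairs chosen only for illustration.

import numpy as np

# Hypothetical bipolar training pairs: input s (4 units) -> target t (2 units).
S = np.array([[ 1,  1, -1, -1],
              [-1, -1,  1,  1]])
T = np.array([[ 1, -1],
              [-1,  1]])

# Hebb's outer product rule: W = sum over pairs of outer(s, t).
W = sum(np.outer(s, t) for s, t in zip(S, T))   # shape (4, 2)

# Recall: compute the net input s.W, then apply the bipolar sign activation.
def recall(s):
    y_in = s @ W
    return np.where(y_in >= 0, 1, -1)

for s, t in zip(S, T):
    print(s, "->", recall(s), "expected", t)

With these two pairs the net recalls both targets exactly; in the exam problem the same rule would be applied to the vectors given in the question.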
Unit 1 Questions
Explain the perceptron model and its architecture in detail. Also write the algorithm for the same (a sketch of the learning rule is given below).
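As a companion to the question above, here is a minimal sketch of the perceptron learning rule with a binary step activation; the AND-gate data, learning rate, and epoch limit are assumptions made only to keep the example self-contained.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
t = np.array([0, 0, 0, 1])                        # AND-gate targets (assumed)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 1.0          # learning rate (assumed)

def step(net):
    # Binary step activation.
    return 1 if net >= 0 else 0

for epoch in range(20):                 # assumed epoch limit
    errors = 0
    for x, target in zip(X, t):
        y = step(w @ x + b)             # net input, then activation
        if y != target:                 # update weights only on misclassification
            w += lr * (target - y) * x
            b += lr * (target - y)
            errors += 1
    if errors == 0:                     # converged: every pattern classified correctly
        break

print("weights:", w, "bias:", b)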
Unit 2 Questions
1. What is the generalized delta rule, and how are the hidden layer and output layer weights updated?
2. What are the various applications of neural networks?
3. State a few activation functions which are used in single and multilayer networks.
4. What is a loss function in the context of neural networks?
5. Explain the concept of a cost function in machine learning. How is it related to the loss function?
6. Distinguish between Hopfield and iterative autoassociative networks.
7. What are linearly separable problems?
8. Compare LSTM and gated recurrent units.
9. What is a Long Short-Term Memory (LSTM) network, and how does it address
the vanishing gradient problem?
10. Show the linearization of the sigmoid function.
11. Compare and contrast stateful and stateless LSTMs.
12. Define recurrent neural networks.
13. Define associative memory.
14. Describe the significance of the convolutional layer and the pooling layer.
15. Describe Bidirectional associative memory.
16. Distinguish between recurrent and non-recurrent networks.
17. Explain Hopfield memory in brief.
18. Compare the autoassociative net and the Hopfield net.
19. Explain self-organization in brief.
20. Distinguish between the binary and bipolar sigmoid functions.
21. Write in brief about the convolution layer.
22. Write some applications of CNN.
23. Illustrate denoising autoencoders.
24. Write a short note on sparse autoencoders.
25. Explain the suitability of various activation functions with respect to applications.
26. Illustrate the operation of the pooling layer in a CNN with a simple example (see the sketch after this list).
27. Justify the advantages of autoencoders over principal component analysis for dimensionality reduction.
28. Explain the working of the gated recurrent unit (GRU).
29. Show the graphical representation of the sigmoid activation function.
30. Illustrate the significance of the sigmoid activation function.
31. Graphically sketch the different activation functions used in neural networks.
32. Distinguish between autoassociative and heteroassociative memory.
33. Describe rectified linear units and their generalized form.
34. Differentiate between the ReLU and Tanh activation functions.
35. Explain the algorithm of the discrete Hopfield network and its architecture.
36. Analyse the role of rectified linear units in hidden layers.
37. Describe the characteristics of the continuous Hopfield network.
38. Illustrate the encoder-decoder sequence-to-sequence architecture.
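For question 26, the following is a minimal sketch of 2x2 max pooling with stride 2; the 4x4 feature map is a hypothetical example, and average pooling would simply replace max() with mean().

import numpy as np

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 0],
                        [1, 8, 3, 4]])

def max_pool_2x2(x):
    # Slide a non-overlapping 2x2 window over the map and keep the maximum.
    h, w = x.shape
    out = np.zeros((h // 2, w // 2), dtype=x.dtype)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            out[i // 2, j // 2] = x[i:i+2, j:j+2].max()
    return out

print(max_pool_2x2(feature_map))
# [[6 4]
#  [8 9]]

The output is half the height and width of the input, which is how pooling reduces the spatial size of a CNN feature map while keeping the strongest activations.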
Q37. Find the net input y_in and the final output y of the network given below:
Q38. Calculate the net input for the network with the bias included:
Q39. Obtain the output of the neuron Y for the network using the activation function (a sketch with assumed values is given below).
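The network figures for Q37-Q39 are not reproduced in this extract, so the following is a minimal sketch of the computation they ask for, with assumed inputs, weights, and bias; the binary sigmoid is used here as the activation, but the same pattern applies to whichever activation the question specifies.

import numpy as np

x = np.array([0.8, 0.6, 0.4])    # assumed inputs
w = np.array([0.1, 0.3, -0.2])   # assumed weights
b = 0.35                         # assumed bias

y_in = b + np.dot(x, w)          # net input: y_in = b + sum_i(x_i * w_i)

def binary_sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

y = binary_sigmoid(y_in)         # final output of neuron Y
print(f"net input y_in = {y_in:.3f}, output y = {y:.3f}")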
Solutions:
Unit 3 Questions