AIML Questions

The document contains a series of questions and tasks related to machine learning concepts, including random forests, ensemble learning, activation functions, and neural networks. It covers various algorithms and techniques such as Gaussian mixture models, KNN, and dropout methods to avoid overfitting. Additionally, it includes practical exercises like illustrating algorithms and analyzing datasets for clustering and classification.

2 Marks

1. Recall random forest.
2. Contrast ensemble learning and supervised learning.
3. Relate how expectation maximization is used in Gaussian mixture model.
4. Draw the architecture of multilayer perceptron.
5. Name any two activation functions.
6. When does an algorithm become unstable?
7. Infer why the smoothing parameter h needs to be optimal.
8. Compare the computer and the human brain.
9. Show the perceptron that calculates the parity of its three inputs.
10. Tell why ReLU is better than Softmax, and give the equation for both (the standard definitions are sketched after this list).
11. Infer the three types of ensemble learning.
12. Outline the significance of Gaussian Mixture Model.
13. Identify when an algorithm becomes unstable.
14. Relate the reason for the smoothing parameter h needing to be optimal.
15. Outline the architecture of Multilayer Perceptron.
16. Compare why ReLU is better than Softmax and give the equations (see the definitions after this list).
17. Interpret the use of the stochastic gradient descent method in training neural networks.
18. Name any two activation functions.
19. Infer the reason to avoid overfitting.
20. Define Random Forest.
21. Name the methods to avoid overfitting.
22. Describe the applications of the Random Forest algorithm.
23. List the types of Ensemble Learning.
24. Define Voting.
25. Differentiate the k in K-Means from the k in KNN.
26. Summarize the Gaussian Mixture Model.
27. Tabulate the components of Multi-Layer Perceptron.
28. List the gradient descent optimization types.
29. Discuss the advantages of ReLU.
30. Show how dropout helps in avoiding overfitting (a minimal sketch follows this list).
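
For question 30 (and question 5 of the 16-mark set), a minimal sketch of inverted dropout, assuming a NumPy-only setting; the activations a and the rate p are illustrative choices, not taken from any course material:

import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p=0.5, training=True):
    # Zero each activation with probability p during training, scaling the
    # survivors by 1/(1 - p) so the expected activation stays unchanged.
    if not training or p == 0.0:
        return a  # at test time the full network is used, unscaled
    mask = rng.random(a.shape) >= p   # keep each unit with probability 1 - p
    return a * mask / (1.0 - p)

a = rng.standard_normal((4, 5))       # toy hidden-layer activations
print(dropout(a, p=0.5))

Because every forward pass trains a different random sub-network, hidden units cannot co-adapt to one another, which is the usual explanation of why dropout reduces overfitting.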
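
For questions 10 and 16, the standard definitions (as usually stated; the notation here is a paraphrase, not taken from the source) are, in LaTeX:

\mathrm{ReLU}(x) = \max(0,\, x), \qquad \mathrm{softmax}(\mathbf{z})_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, \dots, K.

ReLU is an elementwise hidden-layer activation: it is cheap to compute and has gradient 1 for positive inputs, so it avoids the saturation that slows deep-network training. Softmax normalizes K output scores into a probability distribution, so the usual sense of the comparison is that ReLU works better for hidden layers, not that it replaces Softmax at the output.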

16 Marks

1. Classify Ensemble Learning types and explain them in detail.
2. Summarize the types of Activation Functions.
3. Illustrate Support Vector Machine in detail.
4. Illustrate KNN in detail.
5. Justify how the Dropout technique helps in avoiding overfitting (see the sketch under the 2-mark questions).
6. Outline Random Forest algorithm with an example.
7. Examine various learning techniques involved in Unsupervised Learning.
8. Categorize the steps involved in Expectation-Maximization algorithm.
9. Interpret the process of training hidden layers with ReLU in deep networks.
10. Assess the steps in the Back Propagation learning algorithm and give its importance in designing neural networks.
11. Classify the general procedure of the Random Forest algorithm.
12. Analyze the different procedures used in a classification tree.
13. Construct a training dataset and, using it, demonstrate the AdaBoost algorithm that builds an ensemble classifier.
14. Examine the process of training hidden layers with ReLU in deep networks.
15. Inspect the steps in the back propagation learning algorithm. What is its importance in designing neural networks?
16. Consider five points (x1, x2, x3, x4, x5) with the following coordinates as a two-dimensional sample for clustering:
x1 = (0.5, 1.75), x2 = (1, 2), x3 = (1.75, 0.25), x4 = (4, 1), x5 = (6, 3).
Illustrate the k-means algorithm on the above data set. The required number of clusters is two, and initially the clusters are formed from a random distribution of samples: C1 = {x1, x2, x4} and C2 = {x3, x5}. (A worked Python sketch follows this list.)
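
A minimal sketch for question 16, assuming plain Euclidean k-means; the variable names and the convergence test are illustrative choices, not prescribed by the question:

import numpy as np

# The five sample points from question 16.
points = {
    "x1": np.array([0.5, 1.75]),
    "x2": np.array([1.0, 2.0]),
    "x3": np.array([1.75, 0.25]),
    "x4": np.array([4.0, 1.0]),
    "x5": np.array([6.0, 3.0]),
}
# Initial random partition stated in the question: C1 and C2.
clusters = [["x1", "x2", "x4"], ["x3", "x5"]]

while True:
    # Step 1: recompute each cluster centroid as the mean of its members.
    centroids = [np.mean([points[n] for n in c], axis=0) for c in clusters]
    # Step 2: reassign every point to its nearest centroid (Euclidean distance).
    new_clusters = [[], []]
    for name, p in points.items():
        nearest = int(np.argmin([np.linalg.norm(p - c) for c in centroids]))
        new_clusters[nearest].append(name)
    if new_clusters == clusters:  # converged: assignments no longer change
        break
    clusters = new_clusters

print("final clusters:", clusters)    # [x1, x2, x3] and [x4, x5]
print("final centroids:", centroids)

Tracing it by hand: the initial centroids are (1.83, 1.58) and (3.88, 1.63); the first reassignment moves x3 into C1 and x4 into C2, giving C1 = {x1, x2, x3} and C2 = {x4, x5} with centroids (1.08, 1.33) and (5, 2), after which the assignments no longer change.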
