
Artificial Intelligence & Machine Learning-18CS71 VTU Previous Question Papers

AI PART
MODULE 1
1. Explain different characteristics of the AI problem used for analyzing it to choose
most appropriate method. (8M) (July 2018)
2. A water jug problem states “you are provided with two jugs, first one with 4-gallon
capacity and the second one with 3-gallon capacity. Neither have any measuring
markers on it.” How can you get exactly 2 gallons of water into 4-gallon jug?
a. Write down the production rules for the above problem.
b. Write any one solution to the above problem. (8M) (July 2018)
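A rough illustration (not part of the original question paper): the production rules for the water jug problem can be checked with a small breadth-first search. The Python sketch below, with the illustrative function name water_jug_bfs, encodes a state as (x, y) for the amounts in the 4-gallon and 3-gallon jugs and applies the usual fill/empty/pour rules.

from collections import deque

def water_jug_bfs(target=2, cap4=4, cap3=3):
    # A state is (x, y): water in the 4-gallon and 3-gallon jugs.
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == target:                                # goal: exactly 2 gallons in the 4-gallon jug
            path = [(x, y)]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        successors = [
            (cap4, y), (x, cap3),                      # fill either jug
            (0, y), (x, 0),                            # empty either jug
            (min(cap4, x + y), max(0, x + y - cap4)),  # pour the 3-gallon jug into the 4-gallon jug
            (max(0, x + y - cap3), min(cap3, x + y)),  # pour the 4-gallon jug into the 3-gallon jug
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)

print(water_jug_bfs())  # prints one shortest sequence of states from (0, 0) to 2 gallons in the 4-gallon jug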
3. Explain the Best First Search algorithm with an example. (6M) (July 2018)
4. List various task domains of AI. (4M) (July 2018)
5. Explain how AND-OR graphs are used in problem reduction. (6M) (July 2018)
6. Define artificial intelligence and list the task domains of artificial intelligence. (6M)
(Jan 2019)
7. State and explain algorithm for Best First Search with an example. (6M) (Jan 2019)
8. Explain production system. (4M) (Jan 2019)
9. Write a note on water jug problem using production rules. (8M) (Jan 2019)
10. Explain simulated annealing. (4M) (Jan 2019)
11. Explain problem reduction with respect to AND-OR graphs. (4M) (Jan 2019)
12. What is an AI technique? List the less desirable properties of knowledge and its representation.
(8M) (July 2019)
13. Explain production system with components and characteristics. List the
requirements for good control strategies. (8M) (July 2019)
14. List and explain the AI problem characteristics. (8M) (July 2019)
15. Explain constraint satisfaction and solve the cryptarithmetic problem:
CROSS + ROADS = DANGER. (8M) (July 2019)
16. Define artificial intelligence. Classify the task domains of artificial intelligence. (4M)
(Sept 2020)
17. List the properties of knowledge. (4M) (Sept 2020)
18. Discuss the production rules for solving the water-jug problem. (8M) (Sept 2020)
19. Briefly discuss any four problem characteristics. (6M) (Sept 2020)
20. Write an algorithm for
a. Steepest-Ascent hill climbing with example.
b. Best-First Search with example. (10M) (Sept 2020)
21. Solve the following cryptarithmetic problem DONALD + GERALD = ROBERT.
(10M) (Sept 2020)
22. Develop AO* algorithm for AI applications. (10M) (Sept 2020)
23. Solve water jug problem using production rule system. (10M) (Sept 2020)

24. What is an AI technique? Explain in terms of knowledge representation. (5M) (Feb 2021)
25. Distinguish Breadth First Search and Depth First Search. (6M) (Feb 2021)
26. Write an algorithm for simple Hill Climbing. (5M) (Feb 2021)
27. On what dimensions are problems analyzed? (8M) (Feb 2021)
28. Mention the issues in the design of search problems. (3M) (Feb 2021)
29. Write a note on constraint satisfaction. (5M) (Feb 2021)
30. Explain and illustrate unification algorithm. (6M) (Feb 2021)
31. What are the properties of a good system for the representation of knowledge? (4M)
(Feb 2021)
32. Discuss how forward reasoning is different from backward reasoning. (6M) (Feb
2021)
33. With an illustration explain the process of converting well-formed formulas to clause
form. (8M) (Feb 2021)
34. Write a note on:
a. Conflict resolution
b. Logic programming.
35. Define artificial intelligence. Describe the four categories under which AI is classified.
(6M) (Feb 2021)
36. Briefly describe the various problem characteristics. (7M) (Feb 2021)
37. Describe the process of simulated annealing with an example. (7M) (Feb 2021)
38. List and explain various task domains of AI. (6M) (Feb 2021)
39. Discuss A* and AO* algorithms and the various observations about the algorithms briefly.
(7M) (Feb 2021)
40. Explain in detail about the means–end analysis procedure with example. (7M) (Feb
2021)
41. What is Artificial Intelligence? Explain. (6M) (July 2021)
42. A water jug problem: two jugs of 4L and 3L capacity (no markers on them). How can
you get exactly 2L of water into the 4L jug? Write both the production rules and a solution.
(10M) (July 2021)
43. What is meant by uninformed search? Explain the Depth-first search strategy. (4M)
(July 2021)
44. What is an AI technique? Explain. (6M) (July 2021)
45. Write a note on Production System. (6M) (July 2021)
46. Cryptarithmetic problem: SEND + MORE = MONEY. Constraints: no two letters
have the same value, and the sum of the digits must be as shown. (8M) (July 2021)
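Question 46 can be sanity-checked by brute force over digit assignments; the sketch below is illustrative only (it uses Python's itertools, not any method prescribed by the syllabus) and finds the classic solution 9567 + 1085 = 10652.

from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'                          # the 8 distinct letters in SEND + MORE = MONEY
    for digits in permutations(range(10), len(letters)):
        assign = dict(zip(letters, digits))
        if assign['S'] == 0 or assign['M'] == 0:  # leading digits cannot be zero
            continue
        def value(word):
            return int(''.join(str(assign[c]) for c in word))
        if value('SEND') + value('MORE') == value('MONEY'):
            return assign

print(solve_send_more_money())  # M=1, O=0, S=9, ... giving 9567 + 1085 = 10652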

MODULE 2
1. Consider the following set of well-formed formulas in predicate logic:

Convert these into clause form and prove that hates(Marcus, Caesar) using resolution
proof. (10M) (July 2018)
2. What is ‘matching’ in rule-based system? Briefly explain different proposals for
matching. (6M) (July 2018)
3. What are properties of good system for the representation of knowledge? Explain
different approaches to knowledge representation. (6M) (July 2018)
4. Distinguish forward and backward reasoning. Explain with example. (6M) (July
2018)
5. List the issues in knowledge representation. (4M) (July 2018)
6. Explain the approaches to knowledge representation. (20M) (Jan 2019)
7. Write a note on control knowledge. (6M) (Jan 2019)
8. State the algorithm to Unify(L1,L2). (6M) (Jan 2019)
9. Write the algorithm for conversion to clause form. (10M) (Jan 2019)
10. List and explain the issues in knowledge representation. (8M) (July 2019)
11. State and explain the algorithm to convert predicates to clause form. (8M) (July 2019)
12. Consider the following predicates:

Prove that: ~alive(Marcus,now) (10M) (July 2019)


13. What is matching in rule-based system? Briefly explain the different proposals for
matching. (6M) (July 2019)
14. Discuss any two approaches of knowledge representation. (8M) (Sept 2020)
15. Consider the following sentences:
i). John likes all kinds of food.
ii). Apples are food
iii). Chicken is food

iv). Anything anyone eats and isn’t killed by is food
v). Bill eats peanuts and is still alive
vi). Sue eats everything Bill eats
Translate these sentences into formulas in predicate logic. (8M) (Sept 2020)

16. In brief, discuss forward and backward reasoning. (10M) (Sept 2020)
17. Write a resolution algorithm for predicate logic. (6M) (Sept 2020)
18. Consider the following set of well-formed formulas in predicate logic:

(10M) (Sept 2020)


19. Write the propositional resolution algorithm. (10M) (Sept 2020)
20. Write the algorithm for conversion to clause form. (10M) (Sept 2020)
21. Distinguish forward and backward reasoning with an example. (10M) (Sept 2020)
22. Discuss resolution in brief with an example. (6M) (Feb 2021)
23. Write the algorithm to unify (L1, L2). (7M) (Feb 2021)
24. Describe the issues in knowledge representation. (7M) (Feb 2021)
25. Discuss resolution in brief with an example. (6M) (Feb 2021)
26. Illustrate in detail about forward and backward reasoning with example. (7M) (Feb
2021)
27. What is “matching” in rule-based system? Briefly explain different proposals for
matching. (7M) (Feb 2021)
28. Explain mapping between Facts and representation with example. (5M) (July 2021)
29. Explain Forward and Backward reasoning. (5M) (July 2021)
30. Translate the following into First Order Logic:
(i) All Pompeians were Romans.
(ii) All Romans were either loyal to Caesar or hated him.
(iii) Everyone is loyal to someone.
(iv) Was Marcus loyal to Caesar?
(v) All Pompeians died when the volcano erupted in 79 AD. (10M) (July 2021)
31. Explain Inheritable knowledge. (6M) (July 2021)
32. Consider the following sentences:
(i) John likes all kinds of food.
(ii) Apples and chicken are food.
(iii) Anything anyone eats and is not killed by is food.
(iv) Bill eats peanuts and is still alive.
(v) Sue eats everything Bill eats.

Using resolution prove that “John likes Peanuts”. (10M) (July 2021)

33. Write a note on Matching. (4M) (July 2021)

ML PART
MODULE 2
INTRODUCTION, CONCEPT LEARNING
1. Specify the learning task for “A Checkers learning problem”. (3M)(JAN 19)
2. Discuss the following with respect to the above
a. Choosing the training experience
b. Choosing the target function
c. Choosing a function approximation algorithm. (9M)(JAN 19)
3. Comment on the issues in machine learning. (4M)(JAN 19)
4. Write candidate elimination algorithm. Apply the algorithm to obtain the final
version space for the training example,
Sl.No. Sky AirTemp Humidity Wind Water Forecast EnjoySport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cool High Strong Warm Change No
4 Sunny Warm High Strong Cool Change Yes
(10M) (JAN 19)
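For question 4 above, the maximally specific boundary of the version space can be verified with a small Find-S sketch (illustrative Python only; the full candidate elimination algorithm additionally maintains the general boundary G).

# EnjoySport training data from the table above: (attribute tuple, label).
data = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), 'Yes'),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), 'Yes'),
    (('Rainy', 'Cool', 'High', 'Strong', 'Warm', 'Change'), 'No'),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), 'Yes'),
]

def find_s(examples):
    # Start from the most specific hypothesis and generalize on positive examples only.
    h = None
    for x, label in examples:
        if label != 'Yes':
            continue
        h = list(x) if h is None else [hi if hi == xi else '?' for hi, xi in zip(h, x)]
    return h

print(find_s(data))  # ['Sunny', 'Warm', '?', 'Strong', '?', '?'] -- the S boundary of the version space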
5. Discuss an unbiased learner. (6M) (JAN 19)
6. Define machine learning. Describe the steps in designing learning system. (8M)(JULY
19)
7. Write Find-S algorithm and explain with example. (4M) (JULY 19)
8. Explain List-then-eliminate algorithm. (4M) (JULY 19)
9. List out any 5 applications of machine learning. (5M) (JULY 19)
10. What do you mean by hypothesis space, instance space, and version space? (3M)
(JULY 19)
11. Find the maximally general hypothesis and maximally specific hypothesis for the
training examples given in the table using candidate elimination algorithm.
Sl.No. Sky AirTemp Humidity Wind Water Forecast EnjoySport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cool High Strong Warm Change No
4 Sunny Warm High Strong Cool Change Yes
(08M) (JULY 19)

12. What do you mean by well-posed learning problem? Explain with example. (4M)
(JAN 2020)
13. Explain the various stages involved in designing a learning system in brief. (8M) (JAN
2020)
14. Write the Find_S algorithm and discuss the issues with the algorithm. (4M) (JAN
2020)
15. List the issues in machine learning. (4M) (JAN 2020)
16. Consider the training examples given below, which identify malignant tumors from
MRI scans.
Example Shape Size Color Surface Thickness Target_Concept
1 Circular Large Light Smooth Thick Malignant
2 Circular Large Light Irregular Thick Malignant
3 Oval Large Dark Smooth Thin Benign
4 Oval Large Light Irregular Thick Malignant
5 Circular Small Light Smooth Thick Benign
(8M) (JAN 2020)
17. Explain the concept of inductive bias in brief. (4M) (JAN 2020)
18. What is machine learning? Explain different perspectives and issues in machine
learning. (6M) (SEP 2020)
19. Explain the steps in designing the learning system. (10M) (SEP 2020)
20. Describe the Candidate Elimination algorithm. Explain its working, taking the enjoy
sport concept and training instances given below.
Sl.No. Sky AirTemp Humidity Wind Water Forecast EnjoySport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cool High Strong Warm Change No
4 Sunny Warm High Strong Warm Change Yes
(6M) (SEP 2020)
21. Define Machine Learning. Explain with specific examples. (6M) (Feb 2021)
22. How will you design a learning system? Explain with examples. (6M) (Feb 2021)
23. List and explain perspectives and issues in machine learning. (4M) (Feb 2021)
24. Define concept learning. Explain the task of concept learning. (6M) (Feb 2021)
25. How can concept learning be viewed as a task of searching? Explain. (4M) (Feb 2021)
26. Explain with examples:
a. Find-S algorithm
b. Candidate elimination algorithm. (6M) (Feb 2021)
27. Define machine learning. Mention 5 applications of machine learning. (6M) (Feb
2021)
28. Explain concept learning task with an example. (6M) (Feb 2021)

29. Apply candidate elimination algorithm and obtain the version space considering the
training examples given in Table Q1 (c).
Table Q1 (c)
Eyes Nose Head FColor Hair? Smile? (TC)
Round Triangle Round Purple Yes Yes
Square Square Square Green Yes No
Square Triangle Round Yellow Yes Yes
Round Triangle Round Green No No
Square Square Round Yellow Yes Yes

(8M) (Feb 2021)


30. Explain the following with respect to designing a learning system:
a. Choosing the training experience
b. Choosing the target function
c. Choosing a representation for the target function. (9M) (Feb 2021)
31. Write Find-S algorithm. Apply the Find-S for Table Q1 (c) to find maximally specific
hypothesis. (6M) (Feb 2021)
32. Explain the concept of inductive bias. (5M) (Feb 2021)
33. Explain the designing of a learning system in detail. (10M) (Jul 2021)
34. Define learning. Specify the learning problem in handwriting recognition and robot
driving. (5M) (Jul 2021)
35. Explain the issues in machine learning. (5M) (Jul 2021)
36. Write the steps involved in find-S algorithm. (5M) (Jul 2021)
37. Apply candidate elimination algorithm to obtain final version space for the training
set shown in Table Q2 (b) to infer which books or articles the user reads based on
keywords supplied in the article.
Article Crime Academes Local Music Reads
a1 True False False True True
a2 True False False False True
a3 False True False False False
a4 False False True False False
a5 True True False False True
(10M) (Jul 2021)
38. State the inductive bias of the rote-learner, candidate elimination and Find-S algorithms.
(5M) (Jul 2021)

MODULE 3
DECISION TREE LEARNING
ARTIFICIAL NEURAL NETWORKS
Decision Tree Learning

1. What is a decision tree? Discuss the use of decision trees for classification purposes
with an example. (8M) (JAN 19)
2. Write and explain the decision tree for the following transactions.
Tid Refund MaritalStatus TaxableIncome Cheat
1 Yes Single 125k No
2 No Married 100k No
3 No Single 70k No
4 Yes Married 120k No
5 No Divorced 95k Yes
6 No Married 60k No
7 Yes Divorced 220k No
8 No Single 85k Yes
9 No Married 75k No
10 No Single 90k Yes
(8M) (JAN 19)

3. For the transactions shown in the table, compute the following:
a. Entropy of the collection of transaction records of the table with respect to
classification.
b. What are the information gains of a1 and a2 relative to the transactions of the
table?
Instance 1 2 3 4 5 6 7 8 9
a1 T T T F F F F T F
a2 T T F F T T F F T
TargetClass + + - + - - - + -
(8M) (JAN 19)
4. Discuss the decision tree learning algorithm. (4M) (JAN 19)
5. List the issues of decision tree learning. (4M) (JAN 19)
6. Construct the decision tree for the following data using ID3 algorithm.
Day A1 A2 A3 Classification
1 True Hot High No
2 True Hot High No
3 False Hot High Yes
4 False Cool Normal Yes

5 False Cool Normal Yes
6 True Cool High No
7 True Hot High No
8 True Hot Normal Yes
9 False Cool Normal Yes
10 False Cool High No
(16M)(JULY 19)
7. Explain the concept of decision tree learning. Discuss the necessary measures
required to select the attributes for building a decision tree using ID3 algorithm.
(8M) (JULY 19)
8. Discuss the issues of avoiding overfitting the data, handling continuous data and
missing values in decision tree. (8M)(JULY 19)
9. Discuss the two approaches to prevent overfitting the data. (8M) (JAN 2020)
10. Consider the following set of training examples:
instance Classification a1 a2
1 1 1 1
2 1 1 1
3 0 1 0
4 1 0 0
5 0 0 1
6 0 0 1
i. What is the entropy of this collection of training examples with respect to the
target function classification?
ii. What is the information gain of a2 relative to these training examples?
(8M) (JAN 2020)
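The entropy and gain asked for in question 10 can be checked with a short script (a verification aid only, not part of the paper): with three positive and three negative examples the collection entropy is 1.0, and Gain(S, a2) works out to 0.

import math

def entropy(labels):
    # H(S) = -sum_i p_i * log2(p_i) over the class proportions.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in (labels.count(v) for v in set(labels)) if c)

def info_gain(examples, attr):
    # Gain(S, A) = H(S) - sum_v (|S_v| / |S|) * H(S_v)
    labels = [lbl for lbl, _ in examples]
    gain = entropy(labels)
    for v in set(x[attr] for _, x in examples):
        subset = [lbl for lbl, x in examples if x[attr] == v]
        gain -= (len(subset) / len(examples)) * entropy(subset)
    return gain

# (classification, attribute values) for the six training examples above.
examples = [(1, {'a1': 1, 'a2': 1}), (1, {'a1': 1, 'a2': 1}), (0, {'a1': 1, 'a2': 0}),
            (1, {'a1': 0, 'a2': 0}), (0, {'a1': 0, 'a2': 1}), (0, {'a1': 0, 'a2': 1})]
print(entropy([lbl for lbl, _ in examples]))  # 1.0
print(info_gain(examples, 'a2'))              # approximately 0.0 (up to floating-point error)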
11. Define decision tree. Construct the decision tree to represent the following Boolean
functions:
i) A˄¬B ii) A˅[B˄C] iii) A XOR B
(6M) (JAN 2020)
12. Write the ID3 algorithm. (6M) (JAN 2020)
13. What do you mean by gain and entropy? How is it used to build the decision tree?
(4M) (JAN 2020)
14. Explain the concept of entropy and information gain. (6M) (SEP 2020)
15. Describe the ID3 algorithm for decision tree learning. (10M) (SEP 2020)
16. Apply ID3 algorithm for constructing decision tree for the following training
example. (10M) (SEP 2020)
Day Outlook Temperature Humidity Wind PlayTennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes

D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
(10M) (SEP 2020)
17. Explain the issues in decision tree learning. (6M) (SEP 2020)
18. Define decision tree learning. List and explain appropriate problems for decision tree
learning. (6M) (Feb 2021)
19. Explain the basic decision tree learning algorithm. (5M) (Feb 2021)
20. Describe the hypothesis space search in decision tree learning. (5M) (Feb 2021)
21. Define inductive bias. Explain inductive bias in decision tree learning. (6M) (Feb
2021)
22. Give differences between the hypothesis space search in decision tree and candidate
elimination algorithm. (4M) (Feb 2021)
23. List and explain issues in decision tree learning. (6M) (Feb 2021)
24. Explain the concept of decision tree learning. Discuss the necessary measures
required to select the attribute for building a decision tree using ID3 algorithm. (11M)
(Feb 2021)
25. Explain the following with respect to decision tree learning.
a. Incorporating continuous valued attributes
b. Alternative measures for selecting attributes.
c. Handling training examples with missing attribute values. (9M) (Feb 2021)
26. Construct decision tree using ID3 considering the following training examples.
Weekend Weather ParentalAvailability Wealthy DecisionClass
H1 Sunny Yes Rich Cinema
H2 Sunny No Rich Tennis
H3 Windy Yes Rich Cinema
H4 Rainy Yes Poor Cinema
H5 Rainy No Rich Home
H6 Rainy Yes Poor Cinema
H7 Windy No Poor Cinema
H8 Windy No Rich Shopping
H9 Windy Yes Rich Cinema
H10 Sunny No Rich Tennis

(12M) (Feb 2021)

27. Discuss the issues of avoiding overfitting the data and handling attributes with
different costs. (8M) (Feb 2021)
28. Define the following terms with an example for each:
a. Decision tree
b. Entropy
c. Information gain
d. Restriction bias
e. Preference bias (10M) (Jul 2021)
29. Construct decision tree for the data set shown in Table Q3(b) to find whether a seed
is poisonous or not.
Example Color Toughness Fungus Appearance Poisonous
1 Green Soft Yes Wrinkled Yes
2 Green Hard Yes Smooth No
3 Brown Soft No Wrinkled No
4 Brown Soft Yes Wrinkled Yes
5 Green Soft Yes Smooth Yes
6 Green Hard No Wrinkled No
7 Orange Soft Yes Wrinkled Yes
(10M) (Jul 2021)
30. Explain ID3 algorithm. Give an example. (10M) (Jul 2021)
31. Explain the issues and solutions to those issues in decision tree learning. (10M) (Jul
2021)

Artificial Neural Networks

1. Draw the perceptron network with the notation. Derive an equation of gradient
descent rule to minimize the error. (8M) (JAN 19)
2. Explain the importance of the terms: (i) Hidden Layer (ii) Generalization (iii)
Overfitting (iv) Stopping criterion (8M) (JAN 19)
3. Discuss the application of neural network which is used for learning to steer an
autonomous vehicle. (6M) (JAN 19)
4. Write an algorithm for BACKPROPAGATION which uses the stochastic gradient
descent method. Comment on the effect of adding momentum to the network. (10M)
(JAN 19)
5. Explain the artificial neural network based on the perceptron concept with a diagram.
(6M)(JULY 19)
6. What are gradient descent and the delta rule? Why is stochastic approximation to
gradient descent needed? (4M)(JULY 19)
7. Describe the multilayer neural network. Explain why BACKPROPAGATION
algorithm is required. (6M)(JULY 19)
8. Derive the BACKPROPAGATION rule considering the output layer and training
rule for output unit weights. (8M)(JULY 19)

9. What is squashing function? Why is it needed? (4M)(JULY 19)


10. List out and briefly explain the representational power of feedforward networks.
(4M)(JULY 19)
11. Define perceptron. Explain the concept of single perceptron with neat diagram. (6M)
(JAN 2020)
12. Explain the BACKPROPAGATION algorithm. Why is it not likely to be trapped in
local minima? (10M) (JAN 2020)
13. List the appropriate problems for neural network learning. (4M) (JAN 2020)
14. Discuss the perceptron training rule and delta rule that solve the learning problem
of the perceptron. (8M) (JAN 2020)
15. Write a remark on representation of feed forward networks. (4M) (JAN 2020)
16. Explain appropriate problems for Neural Network Learning with its characteristics.
(10M) (SEP 2020)
17. Explain the single perceptron with its learning algorithm. (6M) (SEP 2020)
18. Explain BACKPROPAGATION algorithm. (10M) (SEP 2020)
19. Explain the remarks on BACKPROPAGATION algorithms. (6M) (SEP 2020)
20. Define artificial neural networks. Explain biological learning systems. (5M) (Feb
2021)
21. Explain representation of neural networks. (5M) (Feb 2021)
22. Describe the characteristics of backpropagation algorithm. (6M) (Feb 2021)
23. Define perceptron. Explain representational power of perceptron. (5M) (Feb 2021)
24. Explain gradient descent algorithm. (6M) (Feb 2021)
25. Describe derivation of the backpropagation rule. (5M) (Feb 2021)
26. Discuss the application of neural network which is used to steer an autonomous
vehicle. (6M) (Feb 2021)
27. Write Gradient Descent algorithm to train a linear unit along with the derivation.
(8M) (Feb 2021)
28. Discuss the issues of convergence, local minima and generalization, overfitting and
stopping criteria. (6M) (Feb 2021)
29. List the appropriate problems for neural network learning. (5M) (Feb 2021)
30. Define perceptron and discuss its training rule. (5M) (Feb 2021)
31. Show the derivation of backpropagation training rule for output unit weights. (10M)
(Feb 2021)
32. Derive an expression for gradient descent rule to minimize the error. Using the same,
write the gradient descent algorithm for training a linear unit. (10M) (Jul 2021)
33. Write the backpropagation algorithm that uses the stochastic gradient descent method.
What is the effect of adding momentum to the network? (10M) (Jul 2021)
34. List the characteristics of the problems which can be solved using backpropagation
algorithm. (5M) (Jul 2021)
35. Design a perceptron to implement the two-input AND function. (5M) (Jul 2021)
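For question 35, any weights that separate (1, 1) from the other three inputs will do; one common illustrative choice is w1 = w2 = 1 with threshold 1.5 (i.e. bias -1.5), sketched below in Python.

def perceptron_and(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    # Fires only when w1*x1 + w2*x2 + bias > 0, i.e. when x1 + x2 > 1.5.
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron_and(x1, x2))  # outputs 1 only for the input (1, 1)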

36. Derive expressions for training rule of output and hidden unit weights for back
propagation algorithm. (10M) (Jul 2021)

MODULE 4
BAYESIAN LEARNING
1. What are Bayes theorem and the maximum a posteriori hypothesis? (4M) (JAN 19)
2. Derive an equation for MAP hypothesis using Bayes theorem. (4M) (JAN 19)
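For question 2, the derivation follows directly from Bayes theorem; because P(D) does not depend on h, it can be dropped from the maximization:

h_{MAP} = \arg\max_{h \in H} P(h \mid D) = \arg\max_{h \in H} \frac{P(D \mid h)\, P(h)}{P(D)} = \arg\max_{h \in H} P(D \mid h)\, P(h)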
3. Consider a football game between two rival teams: Team 0 and Team 1. Suppose
Team 0 wins 95% of the time and Team 1 wins the remaining matches. Among the
games won by Team 0, only 30% of them came from playing on Team 1’s field. On
the other hand, 75% of the victories for Team 1 are obtained while playing at home.
If Team 1 is to host the next match between the two teams, which team will most likely
emerge as the winner? (8M) (JAN 19)
4. Describe the Brute-Force learning algorithm. (4M) (JAN 19)
5. Discuss the Naïve Bayes Classifier. (4M)(JAN 19)
6. The following table gives a data set about stolen vehicles. Using the Naïve Bayes
classifier, classify the new data (Red, SUV, Domestic).
Color Type Origin Stolen
Red Sports Domestic Yes
Red Sports Domestic No
Red Sports Domestic Yes
Yellow Sports Domestic No
Yellow Sports Imported Yes
Yellow SUV Imported No
Yellow SUV Imported Yes
Yellow SUV Domestic No
Red SUV Imported No
Red Sports Imported Yes
(8M)(JAN 19)
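For question 6, the class priors and attribute likelihoods can be tallied straight from the table; the illustrative Python sketch below does the counting without smoothing, and for the query (Red, SUV, Domestic) the 'No' class gets the larger score (0.072 versus 0.024), so the vehicle is classified as not stolen.

# (Color, Type, Origin, Stolen) rows from the table above.
rows = [('Red', 'Sports', 'Domestic', 'Yes'), ('Red', 'Sports', 'Domestic', 'No'),
        ('Red', 'Sports', 'Domestic', 'Yes'), ('Yellow', 'Sports', 'Domestic', 'No'),
        ('Yellow', 'Sports', 'Imported', 'Yes'), ('Yellow', 'SUV', 'Imported', 'No'),
        ('Yellow', 'SUV', 'Imported', 'Yes'), ('Yellow', 'SUV', 'Domestic', 'No'),
        ('Red', 'SUV', 'Imported', 'No'), ('Red', 'Sports', 'Imported', 'Yes')]

def naive_bayes_score(query, target):
    # P(class) multiplied by the product of P(attribute value | class), estimated by counting.
    in_class = [r for r in rows if r[-1] == target]
    score = len(in_class) / len(rows)
    for i, value in enumerate(query):
        score *= sum(1 for r in in_class if r[i] == value) / len(in_class)
    return score

query = ('Red', 'SUV', 'Domestic')
for c in ('Yes', 'No'):
    print(c, naive_bayes_score(query, c))  # Yes: 0.024, No: 0.072 -> classify as 'No'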
7. Explain the Maximum-a-Posteriori (MAP) hypothesis using Bayes theorem. (6M) (JULY 19)
8. Estimate the conditional probabilities of each attribute {color, legs, height, smelly} for
the species classes {M, H} using the data given in the table. Using these probabilities,
estimate the probability values for the new instance (color=Green, legs=2, height=Tall,
smelly=No).
No. Color Legs Height Smelly Species
1 White 3 Short Yes M
2 Green 2 Tall No M
3 Green 3 Short Yes M
4 White 3 Short Yes M

5 Green 2 Short No H
6 White 2 Tall No H
7 White 2 Tall No H
8 White 2 Short Yes H
(10M)(JULY 19)
9. Explain Naïve Bayes Classifier and Bayesian Belief Networks. (10M)(JULY 19)
10. Prove how a maximum likelihood hypothesis (Bayesian learning) can be used in any
learning algorithm that minimizes the squared error between the actual output and the
predicted output of the hypothesis. (6M)(JULY 19)
11. Explain Naïve Bayes Classifier. (8M) (JAN 2020)
12. Explain Brute force MAP learning algorithm. (8M) (JAN 2020)
13. Discuss Minimum Description Length principle in brief. (8M) (JAN 2020)
14. Explain Bayesian Belief Networks and conditional independence with example. (8M)
(JAN 2020)
15. Explain Naïve Bayes Classifier. (10M) (SEP 2020)
16. Explain Bayesian Belief Networks. (6M) (SEP 2020)
17. Explain EM algorithm. (8M) (SEP 2020)
18. Explain the derivation of K-Means algorithm. (8M) (SEP 2020)
19. List and explain features of Bayesian learning methods. (6M) (Feb 2021)
20. Explain Brute-Force MAP learning algorithm. (5M) (Feb 2021)
21. Explain Maximum Likelihood and least-squared error hypothesis. (5M) (Feb 2021)
22. Describe maximum likelihood hypotheses for predicting probabilities. (5M) (Feb
2021)
23. Define Bayesian Belief networks. Explain with an example. (6M) (Feb 2021)
24. Explain EM algorithm. (5M) (Feb 2021)
25. Explain Bayes theorem and mention the features of Bayesian learning. (7M) (Feb
2021)
26. Prove that a Maximum likelihood hypothesis can be used to predict probabilities.
(8M) (Feb 2021)
27. Explain Naïve Bayes classifier. (6M) (Feb 2021)
28. Describe MAP learning algorithm. (8M) (Feb 2021)
29. Classify the test data {Red, SUV, Domestic} using Naïve Bayes classifier for the
dataset shown in Table Q8 (b).
Table Q8(b)
Color Type Origin Stolen
Red Sports Domestic Yes
Red Sports Domestic No
Red Sports Domestic Yes
Yellow Sports Domestic No
Yellow Sports Imported Yes
Yellow SUV Imported No

Yellow SUV Imported Yes
Yellow SUV Domestic No
Red SUV Imported No
Red Sports Imported Yes
(6M) (Feb 2021)
30. Write and explain EM algorithm. (6M) (Feb 2021)
31. Define Maximum-a-Posteriori (MAP) hypothesis. Derive an equation for MAP
hypothesis using Bayes theorem. (4M) (Jul 2021)
32. Given P(A=True)=0.3, P(A=False)=0.7, P(B=True|A=True)=0.4,
P(B=False|A=True)=0.6, P(B=True|A=False)=0.6, P(B=False|A=False)=0.4.
Calculate P(A=False|B=False) using Bayes Rule. (6M) (Jul 2021)
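A worked check for question 32 (not part of the paper): expanding P(B=False) by the law of total probability,

P(A{=}F \mid B{=}F) = \frac{P(B{=}F \mid A{=}F)\,P(A{=}F)}{P(B{=}F \mid A{=}T)\,P(A{=}T) + P(B{=}F \mid A{=}F)\,P(A{=}F)} = \frac{0.4 \times 0.7}{0.6 \times 0.3 + 0.4 \times 0.7} = \frac{0.28}{0.46} \approx 0.609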
33. Given a previous patient’s data in the table below, use the Naïve Bayes classifier to
classify the new data (Chills=Y, Runny_Nose=N, Headache=Mild, Fever=Y) to find
whether the patient has flu or not.
Chills Runny_Nose Headache Fever Flu
Y N Mild Y N
Y Y No N Y
Y N Strong Y Y
N Y Mild Y Y
N N No N N
N Y Strong Y Y
N Y Strong N N
Y Y Mild Y Y
(10M) (Jul 2021)
34. Describe the features of Bayesian learning methods. (5M) (Jul 2021)
35. A patient takes a lab test and the result comes back positive. It is known that the test
returns a correct positive result in only 98% of the cases and a correct negative result
in only 97% of the cases. Furthermore, only 0.008 of the entire population has this
disease.
a. What is the probability that this patient has cancer?
b. What is the probability that he does not have cancer? (5M) (Jul 2021)
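A worked check for question 35, assuming (as in the standard version of this problem) that the 97% correct-negative rate means P(positive | no cancer) = 0.03:

P(cancer \mid +) = \frac{0.98 \times 0.008}{0.98 \times 0.008 + 0.03 \times 0.992} = \frac{0.00784}{0.0376} \approx 0.21, \qquad P(\neg cancer \mid +) \approx 0.79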
36. The table below provides a set of 14 training examples of the target concept
“PlayTennis”, where each day is described by the attributes Outlook, Temperature,
Humidity and Wind.
Day Outlook Temperature Humidity Wind PlayTennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
Use the Naïve Bayes classifier and the training data from the table to classify the
following novel instance: <Outlook=Sunny, Temperature=Cool, Humidity=High,
Wind=Strong> (10M) (Jul 2021)

MODULE 5
EVALUATING HYPOTHESIS, INSTANCE-BASED
LEARNING, REINFORCEMENT LEARNING
1. Write short notes on the following:
a. Estimating hypothesis accuracy
b. Binomial distribution (8M)(JAN 19)
2. Discuss the method of comparing two algorithms. Justify with the paired t-test method.
(8M) (JAN 19)
3. Discuss the k-nearest neighbor algorithm. (4M)(JAN 19)
4. Discuss locally weighted regression. (4M)(JAN 19)
5. Discuss the learning tasks and Q learning in the context of reinforcement learning.
(8M)(JAN 19).
6. Explain locally weighted linear regression. (8M)(JULY 19)
7. What do you mean by reinforcement learning? How does the reinforcement learning
problem differ from other function approximation tasks? (5M)(JULY 19)
8. Write down Q-learning algorithm. (3M)(JULY 19).
9. What is instance-based learning? Explain k-nearest neighbor learning? (8M)(JULY
19)
10. Explain sample error, true error, confidence intervals and Q-learning function.
(8M)(JULY 19).
11. Define: (i) Sample Error (ii) True Error. (4M) (JAN 2020)
12. Explain k-Nearest Neighbor learning problem. (8M) (JAN 2020)
13. What is reinforcement learning? (4M) (JAN 2020)
14. Define expected value, variance, standard deviation, and estimation bias of a random
variable. (4M) (JAN 2020)
15. Explain locally weighted linear regression. (8M) (JAN 2020)
16. Write a note on Q-Learning. (4M) (JAN 2020)

17. Explain k-Nearest Neighbor learning algorithm with example. (10M) (SEP 2020)
18. Explain case-based reasoning with example. (6M) (SEP 2020)
19. Write short note on:
a. Q-Learning
b. Radial Basis function
c. Locally Weighted Regression
d. Sampling Theory
(20M) (SEP 2020)
20. Define the following with examples:
a. Sample error
b. True error
c. Mean
d. Variance (8M) (Feb 2021)
21. Explain central limit theorem. (4M) (Feb 2021)
22. Explain K-nearest neighbor algorithm. (4M) (Feb 2021)
23. Explain case-based reasoning. (6M) (Feb 2021)
24. List and explain the important differences between reinforcement learning and other
function approximation tasks. (4M) (Feb 2021)
25. Explain Q-learning algorithm. (6M) (Feb 2021)
26. Define
a. Sample error
b. True error
c. Confidence intervals (6M) (Feb 2021)
27. Explain K-nearest neighbor learning algorithm. (8M) (Feb 2021)
28. Write a note on Q-learning. (8M) (Feb 2021)
29. Define mean value, variance, standard deviation and estimation bias of a random
variable. (4M) (Feb 2021)
30. Explain locally weighted linear regression and radial basis function. (10M) (Feb 2021)
31. What is reinforcement learning? How does it differ from other function approximation
tasks? (6M) (Feb 2021)
32. Explain binomial distribution and write the expressions for its probability
distribution, mean, variance and standard deviation. (4M) (Jul 2021)
33. Define the following terms:
a. Sample error
b. True error
c. N% confidence interval
d. Random variable
e. Expected value
f. Variance (6M) (Jul 2021)

34. Write the K-Nearest Neighbor algorithm for approximating a discrete-valued target
function. Apply the same for the following three-dimensional training data instances
along with their one-dimensional output.
x1=5, x2=7, x3=3, y=4
x1=2, x2=4, x3=9, y=8
x1=3, x2=8, x3=1, y=2
x1=7, x2=7, x3=2, y=4
x1=1, x2=9, x3=7, y=8
Consider the query point (x1=5, x2=3, x3=4) and K=3. (10M) (Jul 2021)
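A small check for question 34 (illustrative Python only): with Euclidean distance, the three nearest neighbours of (5, 3, 4) have outputs 4, 4 and 8, so a discrete-valued (majority-vote) k-NN returns 4.

import math
from collections import Counter

# (x1, x2, x3) -> y from the training data above.
train = [((5, 7, 3), 4), ((2, 4, 9), 8), ((3, 8, 1), 2), ((7, 7, 2), 4), ((1, 9, 7), 8)]

def knn_classify(query, k=3):
    # Sort training instances by Euclidean distance to the query and vote over the k nearest.
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(y for _, y in nearest)
    return votes.most_common(1)[0][0]

print(knn_classify((5, 3, 4)))  # 4 (nearest neighbours at distances of roughly 4.12, 4.90 and 5.92)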
35. List the steps used for deriving the confidence intervals. (4M) (Jul 2021)
36. Explain the CADET system using case-based reasoning. (6M) (Jul 2021)
37. Write the Q-learning algorithm. Consider the following state s1. Find Q̂(s1, a_right) for
reward function R, given the immediate reward is 0 and γ = 0.9.

(10M) (Jul 2021)
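The state diagram for question 37 is not reproduced here, so only the general deterministic Q-learning update used for such computations can be stated; s' below denotes the (unspecified) state reached by taking the action a_right from s1:

\hat{Q}(s, a) \leftarrow r + \gamma \max_{a'} \hat{Q}(s', a'), \qquad \hat{Q}(s_1, a_{right}) = 0 + 0.9 \max_{a'} \hat{Q}(s', a')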
