VTU OLD QP@AzDOCUMENTS - in
AI PART
MODULE 1
1. Explain the different characteristics of an AI problem that are analyzed to choose
the most appropriate solution method. (8M) (July 2018)
2. A water jug problem states: “You are provided with two jugs, the first with a 4-gallon
capacity and the second with a 3-gallon capacity. Neither has any measuring
marker on it.” How can you get exactly 2 gallons of water into the 4-gallon jug?
a. Write down the production rules for the above problem.
b. Write any one solution to the above problem. (8M) (July 2018)
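One way to answer parts (a) and (b) together is a breadth-first search over the (x, y) jug states, where each successor corresponds to one production rule (fill, empty, or pour). The sketch below is illustrative, not a prescribed answer; all identifiers are my own.

```python
from collections import deque

def water_jug(goal=2, cap4=4, cap3=3):
    """BFS over (x, y) states: x gallons in the 4-gallon jug, y in the 3-gallon jug."""
    start = (0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:                        # goal: exactly 2 gallons in the 4-gallon jug
            path, s = [], (x, y)
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1]
        # production rules: fill either jug, empty either jug, pour between jugs
        moves = [
            (cap4, y), (x, cap3),
            (0, y), (x, 0),
            (x - min(x, cap3 - y), y + min(x, cap3 - y)),   # pour 4-gallon -> 3-gallon
            (x + min(y, cap4 - x), y - min(y, cap4 - x)),   # pour 3-gallon -> 4-gallon
        ]
        for s in moves:
            if s not in parents:
                parents[s] = (x, y)
                queue.append(s)
    return None

print(water_jug())  # one shortest state sequence ending with 2 gallons in the 4-gallon jug
```

The returned list of states is one valid solution for part (b); each adjacent pair of states corresponds to one applied production rule.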
3. Explain the Best First Search algorithm with an example. (6M) (July 2018)
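The Best First Search asked about above always expands the open node with the lowest heuristic value, typically via a priority queue. A minimal sketch follows; the graph and heuristic values are hypothetical, chosen only to illustrate the expansion order.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: expand the open node with the lowest heuristic h."""
    open_heap = [(h[start], start, [start])]
    closed = set()
    while open_heap:
        _, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_heap, (h[nbr], nbr, path + [nbr]))
    return None

# hypothetical graph and heuristic estimates, purely for illustration
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
h = {"S": 7, "A": 6, "B": 2, "C": 1, "G": 0}
print(best_first_search(graph, h, "S", "G"))  # ['S', 'B', 'G']
```

Note the search is greedy: it follows whichever frontier node looks closest to the goal, so it is fast but not guaranteed optimal.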
4. List various task domains of AI. (4M) (July 2018)
5. Explain how AND-OR graphs are used in problem reduction. (6M) (July 2018)
6. Define artificial intelligence and list the task domains of artificial intelligence. (6M)
(Jan 2019)
7. State and explain the algorithm for Best First Search with an example. (6M) (Jan 2019)
8. Explain production system. (4M) (Jan 2019)
9. Write a note on water jug problem using production rules. (8M) (Jan 2019)
10. Explain simulated annealing. (4M) (Jan 2019)
11. Explain problem reduction with respect to AND-OR graphs. (4M) (Jan 2019)
12. What is an AI technique? List the less desirable properties and the representation of
knowledge. (8M) (July 2019)
13. Explain production system with components and characteristics. List the
requirements for good control strategies. (8M) (July 2019)
14. List and explain the AI problem characteristics. (8M) (July 2019)
15. Explain constraint satisfaction and solve the cryptarithmetic problem:
CROSS + ROADS = DANGER. (8M) (July 2019)
16. Define artificial intelligence. Classify the task domains of artificial intelligence. (4M)
(Sept 2020)
17. List the properties of knowledge. (4M) (Sept 2020)
18. Discuss the production rules for solving the water-jug problem. (8M) (Sept 2020)
19. Briefly discuss any four problems characteristics. (6M) (Sept 2020)
20. Write an algorithm for
a. Steepest-Ascent hill climbing with example.
b. Best-First Search with example. (10M) (Sept 2020)
21. Solve the following cryptarithmetic problem DONALD + GERALD = ROBERT.
(10M) (Sept 2020)
22. Develop AO* algorithm for AI applications. (10M) (Sept 2020)
23. Solve water jug problem using production rule system. (10M) (Sept 2020)
MODULE 2
1. Consider the following set of well-formed formulas in predicate logic:
Convert these into clause form and prove hate(Marcus, Caesar) using resolution.
(10M) (July 2018)
2. What is ‘matching’ in a rule-based system? Briefly explain the different proposals for
matching. (6M) (July 2018)
3. What are the properties of a good system for the representation of knowledge? Explain
the different approaches to knowledge representation. (6M) (July 2018)
4. Distinguish between forward and backward reasoning. Explain with an example. (6M) (July
2018)
5. List the issues in knowledge representation. (4M) (July 2018)
6. Explain the approaches to knowledge representation. (20M) (Jan 2019)
7. Write a note on control knowledge. (6M) (Jan 2019)
8. State the algorithm to Unify(L1,L2). (6M) (Jan 2019)
9. Write the algorithm for conversion to clause form. (10M) (Jan 2019)
10. List and explain the issues in knowledge representation. (8M) (July 2019)
11. State and explain the algorithm to convert predicates to clause form. (8M) (July 2019)
12. Consider the following predicates:
Translate these sentences into formulas in predicate logic. (8M) (Sept 2020)
16. In brief, discuss forward and backward reasoning. (10M) (Sept 2020)
17. Write a resolution algorithm for predicate logic. (6M) (Sept 2020)
18. Consider the following set of well-formed formulas in predicate logic:
Using resolution prove that “John likes Peanuts”. (10M) (July 2021)
ML PART
MODULE 2
INTRODUCTION, CONCEPT LEARNING
1. Specify the learning task for “A Checkers learning problem”. (3M)(JAN 19)
2. Discuss the following with respect to the above
a. Choosing the training experience
b. Choosing the target function
c. Choosing a function approximation algorithm. (9M)(JAN 19)
3. Comment on the issues in machine learning. (4M)(JAN 19)
4. Write candidate elimination algorithm. Apply the algorithm to obtain the final
version space for the training example,
Sl.No. Sky AirTemp Humidity Wind Water Forecast EnjoySport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cool High Strong Warm Change No
4 Sunny Warm High Strong Cool Change Yes
(10M) (JAN 19)
5. Discuss about an unbiased learner.(6M) (JAN 19)
6. Define machine learning. Describe the steps in designing learning system. (8M)(JULY
19)
7. Write the Find-S algorithm and explain it with an example. (4M) (JULY 19)
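Find-S can be sketched in a few lines: it starts from the first positive example and generalizes attribute-by-attribute, ignoring negatives. The sketch below applies it to the EnjoySport training examples from question 4's table; function and variable names are my own.

```python
def find_s(examples):
    """Find-S: maximally specific hypothesis consistent with the positive examples."""
    h = None
    for attrs, label in examples:
        if label != "Yes":
            continue                       # Find-S ignores negative examples
        if h is None:
            h = list(attrs)                # first positive example taken verbatim
        else:
            # replace any attribute that disagrees with the example by '?'
            h = [hi if hi == ai else "?" for hi, ai in zip(h, attrs)]
    return h

# EnjoySport training examples from the table in question 4 above
data = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), "Yes"),
    (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), "Yes"),
    (("Rainy", "Cool", "High", "Strong", "Warm", "Change"), "No"),
    (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), "Yes"),
]
print(find_s(data))  # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```

The result is the maximally specific hypothesis; the candidate elimination questions above additionally maintain the maximally general boundary.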
8. Explain List-then-eliminate algorithm. (4M) (JULY 19)
9. List out any 5 applications of machine learning. (5M) (JULY 19)
10. What do you mean by hypothesis space, instance space, and version space? (3M)
(JULY 19)
11. Find the maximally general hypothesis and maximally specific hypothesis for the
training examples given in the table using candidate elimination algorithm.
Sl.No. Sky AirTemp Humidity Wind Water Forecast EnjoySport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cool High Strong Warm Change No
4 Sunny Warm High Strong Cool Change Yes
(08M) (JULY 19)
12. What do you mean by well-posed learning problem? Explain with example. (4M)
(JAN 2020)
13. Explain the various stages involved in designing a learning system in brief. (8M) (JAN
2020)
14. Write the Find_S algorithm and discuss the issues with the algorithm. (4M) (JAN
2020)
15. List the issues in machine learning. (4M) (JAN 2020)
16. Consider the training examples given below, which identify malignant tumors from MRI
scans.
Example Shape Size Color Surface Thickness Target_Concept
1 Circular Large Light Smooth Thick Malignant
2 Circular Large Light Irregular Thick Malignant
3 Oval Large Dark Smooth Thin Benign
4 Oval Large Light Irregular Thick Malignant
5 Circular Small Light Smooth Thick Benign
(8M) (JAN 2020)
17. Explain the concept of inductive bias in brief. (4M) (JAN 2020)
18. What is machine learning? Explain different perspectives and issues in machine
learning. (6M) (SEP 2020)
19. Explain the steps in designing the learning system. (10M) (SEP 2020)
20. Describe the Candidate Elimination algorithm. Explain its working using the
EnjoySport concept and the training instances given below.
Sl.No. Sky AirTemp Humidity Wind Water Forecast EnjoySport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cool High Strong Warm Change No
4 Sunny Warm High Strong Warm Change Yes
(6M) (SEP 2020)
21. Define Machine Learning. Explain with specific examples. (6M) (Feb 2021)
22. How will you design a learning system? Explain with examples. (6M) (Feb 2021)
23. List and explain perspectives and issues in machine learning. (4M) (Feb 2021)
24. Define concept learning. Explain the task of concept learning. (6M) (Feb 2021)
25. How can concept learning be viewed as a search task? Explain. (4M) (Feb
2021)
26. Explain with examples:
a. Find-S algorithm
b. Candidate elimination algorithm. (6M) (Feb 2021)
27. Define machine learning. Mention 5 applications of machine learning. (6M) (Feb
2021)
28. Explain concept learning task with an example. (6M) (Feb 2021)
29. Apply the candidate elimination algorithm and obtain the version space, considering
the training examples given in Table Q1 (c).
Table Q1 (c)
Eyes Nose Head FColor Hair? Smile? (TC)
Round Triangle Round Purple Yes Yes
Square Square Square Green Yes No
Square Triangle Round Yellow Yes Yes
Round Triangle Round Green No No
Square Square Round Yellow Yes Yes
MODULE 3
DECISION TREE LEARNING
ARTIFICIAL NEURAL NETWORKS
Decision Tree Learning
1. What is a decision tree? Discuss the use of decision trees for classification with
an example. (8M) (JAN 19)
2. Write and explain the decision tree for the following transactions.
Tid Refund MaritalStatus TaxableIncome Cheat
1 Yes Single 125k No
2 No Married 100k No
3 No Single 70k No
4 Yes Married 120k No
5 No Divorced 95k Yes
6 No Married 60k No
7 Yes Divorced 220k No
8 No Single 85k Yes
9 No Married 75k No
10 No Single 90k Yes
(8M) (JAN 19)
27. Discuss the issues of avoiding overfitting the data and handling attributes with
different costs. (8M) (Feb 2021)
28. Define the following terms with an example for each:
a. Decision tree
b. Entropy
c. Information gain
d. Restriction bias
e. Preference bias (10M) (Jul 2021)
29. Construct decision tree for the data set shown in Table Q3(b) to find whether a seed
is poisonous or not.
Example Color Toughness Fungus Appearance Poisonous
1 Green Soft Yes Wrinkled Yes
2 Green Hard Yes Smooth No
3 Brown Soft No Wrinkled No
4 Brown Soft Yes Wrinkled Yes
5 Green Soft Yes Smooth Yes
6 Green Hard No Wrinkled No
7 Orange Soft Yes Wrinkled Yes
(10M) (Jul 2021)
30. Explain ID3 algorithm. Give an example. (10M) (Jul 2021)
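The core of the ID3 algorithm asked about above is choosing the split attribute with maximum information gain. A minimal sketch of the entropy and gain computations follows; the two-attribute toy data is hypothetical, chosen only so the numbers are easy to check by hand.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy of a label list: -sum over classes of p * log2(p)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """Information gain of splitting on the attribute in column attr_index."""
    n = len(rows)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(subset) / n * entropy(subset) for subset in by_value.values())
    return entropy(labels) - remainder

# toy weather-style fragment, purely illustrative
rows = [("Sunny", "Hot"), ("Sunny", "Cool"), ("Rain", "Cool"), ("Rain", "Hot")]
labels = ["No", "No", "Yes", "Yes"]
print(info_gain(rows, labels, 0))  # 1.0: attribute 0 separates the labels perfectly
print(info_gain(rows, labels, 1))  # 0.0: attribute 1 carries no information here
```

ID3 computes these gains for every candidate attribute, places the best one at the root, and recurses on each branch's subset.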
31. Explain the issues and solutions to those issues in decision tree learning. (10M) (Jul
2021)
1. Draw the perceptron network with its notation. Derive the equation of the gradient
descent rule to minimize the error. (8M) (JAN 19)
2. Explain the importance of the terms: (i) Hidden Layer (ii) Generalization (iii)
Overfitting (iv) Stopping criterion (8M) (JAN 19)
3. Discuss the application of neural network which is used for learning to steer an
autonomous vehicle. (6M) (JAN 19)
4. Write an algorithm for BACKPROPAGATION which uses the stochastic gradient
descent method. Comment on the effect of adding momentum to the network. (10M)
(JAN 19)
5. Explain the artificial neural network based on the perceptron concept with a diagram.
(6M)(JULY 19)
6. What are gradient descent and the delta rule? Why is stochastic approximation to gradient
descent needed? (4M)(JULY 19)
7. Describe the multilayer neural network. Explain why the BACKPROPAGATION
algorithm is required. (6M)(JULY 19)
8. Derive the BACKPROPAGATION rule considering the output layer and training
rule for output unit weights. (8M)(JULY 19)
36. Derive expressions for training rule of output and hidden unit weights for back
propagation algorithm. (10M) (Jul 2021)
MODULE 4
BAYESIAN LEARNING
1. What are Bayes theorem and the maximum a posteriori hypothesis? (4M) (JAN 19)
2. Derive an equation for MAP hypothesis using Bayes theorem. (4M) (JAN 19)
3. Consider a football match between two rival teams: Team 0 and Team 1. Suppose
Team 0 wins 95% of the time and Team 1 wins the remaining matches. Among the
games won by Team 0, only 30% of them came from playing on Team 1’s field. On
the other hand, 75% of the victories for Team 1 are obtained while playing at home.
If Team 1 is to host the next match between the two teams, which team will most likely
emerge as the winner? (8M) (JAN 19)
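The hosted-match question above reduces to one application of Bayes theorem: compute P(Team 1 wins | match is on Team 1's field) from the stated prior and likelihoods. The arithmetic can be sketched as follows (variable names are my own):

```python
# Priors: how often each team wins
p_win0, p_win1 = 0.95, 0.05
# Likelihoods: probability the match was on Team 1's field, given each winner
p_host1_given_win0 = 0.30
p_host1_given_win1 = 0.75

# Bayes theorem: P(Team 1 wins | hosted by Team 1)
evidence = p_win0 * p_host1_given_win0 + p_win1 * p_host1_given_win1
p_win1_given_host1 = p_win1 * p_host1_given_win1 / evidence

print(round(p_win1_given_host1, 4))  # 0.1163
```

Since 0.1163 < 0.5, Team 0 remains the more likely winner even on Team 1's home field.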
4. Describe the Brute-Force learning algorithm. (4M)(JAN 19)
5. Discuss the Naïve Bayes Classifier. (4M)(JAN 19)
6. The following table gives data set about stolen vehicles. Using Naïve Bayes classifier
classify the new data (Red, SUV, Domestic).
Color Type Origin Stolen
Red Sports Domestic Yes
Red Sports Domestic No
Red Sports Domestic Yes
Yellow Sports Domestic No
Yellow Sports Imported Yes
Yellow SUV Imported No
Yellow SUV Imported Yes
Yellow SUV Domestic No
Red SUV Imported No
Red Sports Imported Yes
(8M)(JAN 19)
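The classification asked for above multiplies class priors by per-attribute conditional probabilities estimated from raw counts. A sketch over the table in question 6 follows (identifiers are my own; no smoothing is applied, matching the usual hand computation):

```python
from collections import Counter

def naive_bayes(rows, labels, query):
    """Score each class as P(class) * product of P(attribute value | class)."""
    class_counts = Counter(labels)
    n = len(labels)
    scores = {}
    for cls, cnt in class_counts.items():
        score = cnt / n                                   # prior P(class)
        for i, value in enumerate(query):
            matches = sum(1 for row, lab in zip(rows, labels)
                          if lab == cls and row[i] == value)
            score *= matches / cnt                        # P(value | class)
        scores[cls] = score
    return scores

# stolen-vehicle data from the table in question 6 above
rows = [
    ("Red", "Sports", "Domestic"), ("Red", "Sports", "Domestic"),
    ("Red", "Sports", "Domestic"), ("Yellow", "Sports", "Domestic"),
    ("Yellow", "Sports", "Imported"), ("Yellow", "SUV", "Imported"),
    ("Yellow", "SUV", "Imported"), ("Yellow", "SUV", "Domestic"),
    ("Red", "SUV", "Imported"), ("Red", "Sports", "Imported"),
]
labels = ["Yes", "No", "Yes", "No", "Yes", "No", "Yes", "No", "No", "Yes"]
scores = naive_bayes(rows, labels, ("Red", "SUV", "Domestic"))
print(scores)  # Yes: 0.5*(3/5)*(1/5)*(2/5) = 0.024; No: 0.5*(2/5)*(3/5)*(3/5) = 0.072
```

Since 0.072 > 0.024, the classifier labels (Red, SUV, Domestic) as not stolen ("No").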
7. Explain the maximum a posteriori (MAP) hypothesis using Bayes theorem. (6M)(JULY
19)
8. Estimate the conditional probabilities of each attribute {Color, Legs, Height, Smelly} for
the species classes {M, H} using the data given in the table. Using these probabilities,
estimate the probability values for the new instance (Color=Green, Legs=2,
Height=Tall, Smelly=No).
No. Color Legs Height Smelly Species
1 White 3 Short Yes M
2 Green 2 Tall No M
3 Green 3 Short Yes M
4 White 3 Short Yes M
5 Green 2 Short No H
6 White 2 Tall No H
7 White 2 Tall No H
8 White 2 Short Yes H
(10M)(JULY 19)
9. Explain Naïve Bayes Classifier and Bayesian Belief Networks. (10M)(JULY 19)
10. Show how maximum likelihood (Bayesian learning) can be used in any learning
algorithm that minimizes the squared error between the actual and predicted output
hypotheses. (6M)(JULY 19)
11. Explain Naïve Bayes Classifier. (8M) (JAN 2020)
12. Explain Brute force MAP learning algorithm. (8M) (JAN 2020)
13. Discuss Minimum Description Length principle in brief. (8M) (JAN 2020)
14. Explain Bayesian Belief Networks and conditional independence with example. (8M)
(JAN 2020)
15. Explain Naïve Bayes Classifier. (10M) (SEP 2020)
16. Explain Bayesian Belief Networks. (6M) (SEP 2020)
17. Explain EM algorithm. (8M) (SEP 2020)
18. Explain the derivation of K-Means algorithm. (8M) (SEP 2020)
19. List and explain features of Bayesian learning methods. (6M) (Feb 2021)
20. Explain Brute-Force MAP learning algorithm. (5M) (Feb 2021)
21. Explain Maximum Likelihood and least-squared error hypothesis. (5M) (Feb 2021)
22. Describe maximum likelihood hypotheses for predicting probabilities. (5M) (Feb
2021)
23. Define Bayesian Belief networks. Explain with an example. (6M) (Feb 2021)
24. Explain EM algorithm. (5M) (Feb 2021)
25. Explain Bayes theorem and mention the features of Bayesian learning. (7M) (Feb
2021)
26. Prove that a Maximum likelihood hypothesis can be used to predict probabilities.
(8M) (Feb 2021)
27. Explain Naïve Bayes classifier. (6M) (Feb 2021)
28. Describe MAP learning algorithm. (8M) (Feb 2021)
29. Classify the test data {Red, SUV, Domestic} using the Naïve Bayes classifier for the
dataset shown in Table Q8 (b).
Table Q8(b)
Color Type Origin Stolen
Red Sports Domestic Yes
Red Sports Domestic No
Red Sports Domestic Yes
Yellow Sports Domestic No
Yellow Sports Imported Yes
Yellow SUV Imported No
MODULE 5
EVALUATING HYPOTHESIS, INSTANCE-BASED
LEARNING, REINFORCEMENT LEARNING
1. Write short notes on the following:
a. Estimating hypothesis accuracy
b. Binomial distribution (8M)(JAN 19)
2. Discuss the method of comparing two algorithms. Justify using the paired t-test method.
(8M) (JAN 19)
3. Discuss the k-nearest neighbor algorithm. (4M)(JAN 19)
4. Discuss locally weighted regression. (4M)(JAN 19)
5. Discuss the learning tasks and Q learning in the context of reinforcement learning.
(8M)(JAN 19).
6. Explain locally weighted linear regression. (8M)(JULY 19)
7. What do you mean by reinforcement learning? How does the reinforcement learning
problem differ from other function approximation tasks? (5M)(JULY 19)
8. Write down Q-learning algorithm. (3M)(JULY 19).
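The tabular Q-learning algorithm asked for above repeatedly applies the update Q(s,a) ← Q(s,a) + α(r + γ·max_a' Q(s',a') − Q(s,a)). The sketch below runs it on a hypothetical three-state chain (my own toy environment, not from any question paper) so the converged values can be checked against γ-discounting by hand.

```python
import random
from collections import defaultdict

def q_learning(rewards, episodes=500, gamma=0.9, alpha=0.5, seed=0):
    """Tabular Q-learning on a deterministic chain: states 0..2, actions left/right.
    rewards[(s, a)] is the immediate reward; state 2 is the absorbing goal."""
    rng = random.Random(seed)
    q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        while s != 2:
            a = rng.choice(["left", "right"])            # random exploration
            s2 = min(s + 1, 2) if a == "right" else max(s - 1, 0)
            r = rewards.get((s, a), 0.0)
            # Q-learning update: bootstrap from the best action in the next state
            best_next = max(q[(s2, b)] for b in ("left", "right"))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

# hypothetical rewards: 100 for the transition that enters the goal, 0 elsewhere
rewards = {(1, "right"): 100.0}
q = q_learning(rewards)
print(round(q[(1, "right")], 1))   # converges to 100
print(round(q[(0, "right")], 1))   # converges to gamma * 100 = 90
```

The converged values illustrate the standard result that Q-values propagate the goal reward backwards, attenuated by γ per step.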
9. What is instance-based learning? Explain k-nearest neighbor learning. (8M)(JULY
19)
10. Explain sample error, true error, confidence intervals and Q-learning function.
(8M)(JULY 19).
11. Define: (i) Sample Error (ii) True Error. (4M) (JAN 2020)
12. Explain k-Nearest Neighbor learning problem. (8M) (JAN 2020)
13. What is reinforcement learning? (4M) (JAN 2020)
14. Define expected value, variance, standard deviation, and estimation bias of a random
variable. (4M) (JAN 2020)
15. Explain locally weighted linear regression. (8M) (JAN 2020)
16. Write a note on Q-Learning. (4M) (JAN 2020)
17. Explain k-Nearest Neighbor learning algorithm with example. (10M) (SEP 2020)
18. Explain case-based reasoning with example. (6M) (SEP 2020)
19. Write short note on:
a. Q-Learning
b. Radial Basis function
c. Locally Weighted Regression
d. Sampling Theory
(20M) (SEP 2020)
20. Define the following with examples:
a. Sample error
b. True error
c. Mean
d. Variance (8M) (Feb 2021)
21. Explain central limit theorem. (4M) (Feb 2021)
22. Explain K-nearest neighbor algorithm. (4M) (Feb 2021)
23. Explain case-based reasoning. (6M) (Feb 2021)
24. List and explain the important differences between reinforcement learning and other
function approximation tasks. (4M) (Feb 2021)
25. Explain Q-learning algorithm. (6M) (Feb 2021)
26. Define
a. Sample error
b. True error
c. Confidence intervals (6M) (Feb 2021)
27. Explain K-nearest neighbor learning algorithm. (8M) (Feb 2021)
28. Write a note on Q-learning. (8M) (Feb 2021)
29. Define mean value, variance, standard deviation and estimation bias of a random
variable. (4M) (Feb 2021)
30. Explain locally weighted linear regression and radial basis function. (10M) (Feb 2021)
31. What is reinforcement learning? How does it differ from other function approximation
tasks? (6M) (Feb 2021)
32. Explain binomial distribution and write the expressions for its probability
distribution, mean, variance and standard deviation. (4M) (Jul 2021)
33. Define the following terms:
a. Sample error
b. True error
c. N% confidence interval
d. Random variable
e. Expected value
f. Variance (6M) (Jul 2021)
34. Write the K-Nearest Neighbor algorithm for approximating a discrete-valued target
function. Apply it to the following three-dimensional training instances with
one-dimensional outputs.
x1=5, x2=7, x3=3, y=4
x1=2, x2=4, x3=9, y=8
x1=3, x2=8, x3=1, y=2
x1=7, x2=7, x3=2, y=4
x1=1, x2=9, x3=7, y=8
Consider the query point (x1=5, x2=3, x3=4) and K=3. (10M) (Jul 2021)
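Question 34 can be worked by computing Euclidean distances to the query, keeping the three closest instances, and taking a majority vote (the discrete-valued version of k-NN). A minimal sketch with the data from the question (function names are my own):

```python
from math import dist            # Euclidean distance, Python 3.8+
from collections import Counter

def knn_predict(train, query, k):
    """k-NN for a discrete-valued target: majority vote among the k closest points."""
    neighbors = sorted(train, key=lambda xy: dist(xy[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# training instances (x1, x2, x3) -> y from question 34
train = [
    ((5, 7, 3), 4),
    ((2, 4, 9), 8),
    ((3, 8, 1), 2),
    ((7, 7, 2), 4),
    ((1, 9, 7), 8),
]
print(knn_predict(train, (5, 3, 4), k=3))  # 4
```

The three nearest neighbors of (5, 3, 4) are (5, 7, 3), (7, 7, 2), and (2, 4, 9) with labels 4, 4, 8, so the majority vote predicts 4.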
35. List the steps used for deriving the confidence intervals. (4M) (Jul 2021)
36. Explain CADIT system using case-based reasoning. (6M) (Jul 2021)
37. Write the Q-learning algorithm. Consider the following state s1. Find Q̂(s1, a_right)
given the immediate reward r = 0 and γ = 0.9.