2024 - PCS - 24P2CSC04 - Question Bank ML

The document is a question bank for a course on regression and machine learning concepts, containing multiple-choice and descriptive questions. It covers topics such as linear regression, logistic regression, gradient descent, and PAC learning, along with their respective answers. The document serves as a study resource for understanding key concepts and methodologies in data analysis and machine learning.


S.NO UNIT CHAP. PART Q.TYPE QUESTION CO KL QL MARK

1 1 1 A OBJ To which category does linear regression belong? CO1 An E 1

A. Neither supervised nor unsupervised learning B. Both supervised and unsupervised learning

C. Unsupervised learning D. Supervised learning

Answer is : D
2 1 1 A OBJ The learner is trying to predict housing prices based on the size of each house. What type of regression is this? CO1 An E 1

A. Multivariate Logistic Regression B. Logistic Regression

C. Multivariate Linear Regression D. Linear Regression

Answer is : D
3 1 1 A OBJ The learner is trying to predict housing prices based on the size of each house. The variable “size” is ___________ CO1 An E 1

A. dependent variable B. label set variable

C. independent variable D. target variable

Answer is : C
4 1 1 A OBJ The target variable is represented along ____________ CO1 An E 1

A. Y axis B. X axis

C. Either Y-axis or X-axis, it doesn’t matter D. Depends on the dataset

Answer is : A
5 1 1 A OBJ How many variables are required to represent a linear regression model? CO1 An E 1

A. 3 B. 2

C. 1 D. 4

Answer is : A
6 1 1 A OBJ Can a cancer detection problem be solved by logistic regression? CO1 An E 1

A. Sometimes B. No

C. Yes D. Depends on the dataset

Answer is : C
7 1 1 A OBJ In a logistic regression problem, there are 300 instances. 270 people voted. 30 people did not cast their votes. What is CO1 An E 1
the probability of finding a person who cast one’s vote?

A. 10% B. 90%

C. 0.9 D. 0.1

Answer is : C
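
A one-line check of the arithmetic behind question 7; the variable names are illustrative only:

    # probability of drawing a voter = favourable instances / total instances
    voters, total = 270, 300
    p_vote = voters / total   # 270 / 300 = 0.9, matching option C
    print(p_vote)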
8 1 1 A OBJ Who invented logistic regression? CO1 An E 1

A. Vapnik B. Ross Quinlan

C. DR Cox D. Chervonenkis

Answer is : C
9 1 1 A OBJ The hypothesis is given by h(x) = t0 + t1x. What are t0 and t1? CO1 An E 1

A. Value of h(x) when x is 0, intercept along y-axis B. Value of h(x) when x is 0, the rate at which h(x) changes with respect to x

C. The rate at which h(x) changes with respect to x, intercept along the y-axis D. Intercept along the y-axis, the rate at which h(x) changes with respect to x
Answer is : D
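
A minimal sketch of the hypothesis h(x) = t0 + t1x from question 9, assuming arbitrary example values for t0 and t1:

    # t0 is the intercept (value of h when x is 0); t1 is the rate at which h changes with x
    def h(x, t0=2.0, t1=0.5):   # t0, t1 are illustrative values, not from the question bank
        return t0 + t1 * x

    print(h(0), h(10))   # 2.0 at x = 0; 7.0 at x = 10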

10 1 1 A OBJ Which of the following statements is false about Ensemble voting? CO1 An E 1

A. It takes a linear combination of the learners B. It takes non-linear combination of the learners

C. It is the simplest way to combine multiple classifiers D. It is also known as ensembles and linear opinion pools

Answer is : B
11 1 1 A OBJ Which of the following is a solution for the problem where classifiers erroneously give unusually low or high support to CO1 R E 1
a particular class?

A. Maximum rule B. Minimum rule

C. Product rule D. Trimmed mean rule

Answer is : D
12 1 1 A OBJ The cost function is minimized by __________ CO1 R E 1

A. Linear regression B. Polynomial regression

C. PAC learning D. Gradient descent

Answer is : D
13 1 1 A OBJ What happens when the learning rate is low? CO1 R E 1

A. It always reaches the minima quickly B. It reaches the minima very slowly

C. It overshoots the minima D. Nothing happens

Answer is : B
14 1 1 A OBJ Which of the following statements is false about gradient descent? CO1 R E 1

A. It updates the weight to comprise a small step in the direction of the negative gradient B. The learning rate parameter is η where η > 0

C. In each iteration, the gradient is re-evaluated for the new weight vector D. In each iteration, the weight is updated in the direction of positive gradient
Answer is : D
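
A minimal gradient-descent sketch for questions 12-14; the quadratic cost function and the learning-rate value are assumptions chosen for illustration:

    # minimise J(t) = (t - 3)^2 by stepping against its gradient dJ/dt = 2*(t - 3)
    t, eta = 0.0, 0.1          # eta is the learning rate (η > 0); a very small eta reaches the minimum slowly
    for _ in range(100):
        grad = 2 * (t - 3)     # gradient re-evaluated for the current weight in each iteration
        t = t - eta * grad     # small step in the direction of the negative gradient
    print(t)                   # approaches the minimum at t = 3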
15 1 1 A OBJ What is the gradient of the function 2x² – 3y² + 4y – 10 at the point (0, 0)? CO1 R E 1

A. 0i + 4j B. 1i + 10j

C. 2i – 3j D. -3i + 4j

Answer is : A
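
A quick symbolic check of question 15, sketched with sympy (any computer-algebra tool would do):

    import sympy as sp
    x, y = sp.symbols('x y')
    f = 2*x**2 - 3*y**2 + 4*y - 10
    grad = [sp.diff(f, x), sp.diff(f, y)]          # [4*x, -6*y + 4]
    print([g.subs({x: 0, y: 0}) for g in grad])    # [0, 4] -> 0i + 4j, matching option A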
16 1 1 A OBJ Which of the following is not related to a gradient descent? CO1 R E 1

A. AdaBoost B. Adadelta

C. Adagrad D. RMSprop

Answer is : A
17 1 1 A OBJ Which of the following statements is not true about Kernel methods? CO1 R E 1

A. It can be used for pattern analysis or pattern recognition B. It maps the data into higher dimensional space

C. The data can be easily separated in the higher dimensional space D. It only leads to finite dimensional space
Answer is : D
18 1 1 A OBJ The independent variable is used to explain the dependent variable in ________ CO1 R E 1

A. Linear Regression Analysis B. Multiple Regression Analysis

C. Non Linear Regression Analysis D. None of the above

Answer is : A

19 1 1 A OBJ The original hypothesis is known as ______. CO1 R E 1

A. Alternate Hypothesis B. Null Hypothesis

C. Both A and B are Incorrect D. Both A and B are correct

Answer is : B
20 1 1 A OBJ The learner is trying to predict housing prices based on the size of each house. The variable “size” is ___________ CO1 R E 1

A. dependent variable B. label set variable

C. independent variable D. target variable

Answer is : C
21 1 1 A OBJ The hypothesis is given by h(x) = t0 + t1x. What is the goal of t0 and t1? CO1 R E 1

A. Give negative h(x) B. Give h(x) as close to 0 as possible, without themselves being 0

C. Give h(x) as close to y, in training data, as possible D. Give h(x) closer to x than y

Answer is : C
22 1 1 A OBJ In the simplified hypothesis, what does hypothesis H and cost function J depend on? CO1 R E 1

A. Both are functions of x B. J is a function of x, H is a function of t1

C. H is a function of x, J is a function of t1 D. Both are functions of t1

Answer is : C
23 1 1 A OBJ How are the points in the domain set given as input to the algorithm? CO1 R E 1

A. Vector of features B. Scalar points

C. Polynomials D. Clusters

Answer is : A
24 1 1 A OBJ To which input does the learner have access? CO1 R E 1

A. Testing Data B. Label Data

C. Training Data D. Cross-Validation Data

Answer is : C
25 1 1 A OBJ What is the learner’s output also called? CO1 R E 1

A. Predictor, or Hypothesis, or Classifier B. Predictor, or Trainer, or Classifier

C. Predictor, or Hypothesis, or Trainer D. Trainer, or Hypothesis, or Classifier

Answer is : A
26 1 1 B DESC CO1 R E 7
Describe Regression with example.

27 1 1 B DESC CO1 R E 7
Describe Correlation Analysis with example.

28 1 1 B DESC CO1 R E 7
Explain Logistic Regression with example.

29 1 1 B DESC CO1 R E 7
Describe Gradient descent with example.

30 1 1 B DESC CO1 R E 7
List out the principles of Principal Component Analysis with example.

31 1 1 C DESC CO1 Ap E 10
Explain Types of Correlation analysis with example.

32 1 1 C DESC CO1 Ap E 10
Explain in detail the types of gradient descent with example.

33 1 1 C DESC CO1 Ap E 10
Explain Factor analysis in K-Means with example.

34 1 1 C DESC CO1 R E 10
Explain Discriminant analysis with example.

35 1 1 C DESC CO1 R E 10
Explain Rank Correlation, Partial and Multiple Correlation with example.

36 2 1 A OBJ G = (<sunny, ?, ?, ?> ; <?, warm, ?, ?> ; <?, ?, high, ?>). Training data = <sunny, warm, normal, same> => Yes CO1 R E 1
(positive example). How will G be represented after encountering this training data?

A. <phi, phi, phi, phi> B. (<sunny, ?, ?, ?> ; <?, warm, ?, ?> ; <?, ?, high, ?>)

C. (<sunny, ?, ?, ?> ; <?, warm, ?, ?>) D. <?, ?, ?, ?>

Answer is : C
37 2 1 A OBJ The error of h with respect to c is the probability that a randomly drawn instance will fall into the region where CO1 R E 1
_________

A. h and c disagree B. h and c agree

C. h is greater than c but not less D. h is lesser than c but not greater

Answer is : A
38 2 1 A OBJ What function is used for hypothesis representation in logistic regression? CO1 R E 1

A. Cos function B. Laplace transformation

C. Lagrange’s function D. Sigmoid function

Answer is : D
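
A minimal sketch of the sigmoid hypothesis referenced in question 38; the weights t0 and t1 are assumed example values:

    import math

    def sigmoid(z):
        # squashes any real z into (0, 1), so the output can be read as a class probability
        return 1.0 / (1.0 + math.exp(-z))

    def h(x, t0=0.0, t1=1.0):   # logistic-regression hypothesis; t0, t1 are illustrative weights
        return sigmoid(t0 + t1 * x)

    print(h(0))   # 0.5 at the decision boundary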
39 2 1 A OBJ If distribution D assigns zero probability to instances where h is not equal to c, then the error will be ______ CO1 R E 1

A. 1 B. 0.5

C. 0 D. infinite

Answer is : C
40 2 2 A OBJ Who introduced the concept of PAC learning? CO2 R E 1

A. Francis Galton B. Reverend Thomas Bayes

C. Ross Quinlan D. Leslie Valiant

Answer is : D
41 2 2 A OBJ One of the goals of PAC learning is to give __________ CO2 R E 1

A. maximum accuracy B. cross-validation complexity

C. error of classifier D. computational complexity

Answer is : D
42 2 2 A OBJ How do we learn concepts from training examples? CO2 R E 1

A. Arbitrarily B. Decrementally

C. Incrementally D. Non-incremental

Answer is : C
43 2 2 A OBJ In the list-then-eliminate algorithm, the initial version space contains _____ CO2 R E 1

A. most specific hypothesis B. all hypotheses in H

C. most accurate hypothesis D. most general hypothesis

Answer is : B

44 2 2 A OBJ For a dataset with 4 attributes, which is the most general hypothesis? CO2 R E 1

A. (Sunny, Warm, Strong, Humid) B. (Sunny, ?, ?, ?)

C. (?, ?, ?, ?) D. (phi, phi, phi, phi)

Answer is : C
45 2 2 A OBJ What does VC dimension do? CO2 U E 1

A. Reduces complexity of hypothesis space B. Removes noise from dataset

C. Measures complexity of training dataset D. Measures the complexity of hypothesis space H

Answer is : D
46 2 2 A OBJ What is the VC dimension of a straight line? CO2 U E 1

A. 3 B. 2

C. 4 D. 0

Answer is : A
47 2 2 A OBJ What is the relation between VC dimension and hypothesis space H? CO2 U E 1

A. VC(H) <= |H| B. VC(H) != log2|H|

C. VC(H) <= log2|H| D. VC(H) > log2|H|

Answer is : C
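
A small numeric illustration of the bound in question 47, assuming a finite hypothesis space of size 8:

    import math
    H_size = 8                        # assumed example: a finite hypothesis space with |H| = 8
    print(math.log2(H_size))          # 3.0, so VC(H) <= log2|H| = 3 for this H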
48 2 2 A OBJ What is the advantage of VC dimension over PAC learning? CO2 U E 1

A. VC dimension reduces complexity of training data B. VC dimension outputs more accurate predictors

C. VC dimension can work for infinite hypothesis space D. There is no advantage

Answer is : C
49 2 2 A OBJ What is present in the version space of the Find-S algorithm in the beginning? CO2 U E 1

A. Set of all hypotheses H B. Both maximally general and maximally specific hypotheses

C. Maximally general hypothesis D. Maximally specific hypothesis

Answer is : D
50 2 2 A OBJ What is one of the advantages of the Find-S algorithm? CO2 U E 1

A. Computation is faster than other concept learning algorithms B. All correct hypotheses are output

C. Most generalized hypothesis is output D. Overfitting does not occur

Answer is : A
51 2 2 A OBJ Candidate-Elimination algorithm can be described by ____________ CO2 U E 1

A. just a set of candidate hypotheses B. depends on the dataset

C. set of instances, set of candidate hypotheses D. just a set of instances

Answer is : C
52 2 2 A OBJ S = <phi, phi, phi, phi>. Training data = <rainy, cold, normal, change> => No (negative example). How will S be CO2 U E 1
represented after encountering this training data?

A. <phi, phi, phi, phi> B. <sunny, warm, high, same>

C. <rainy, cold, normal, change> D. <?, ?, ?, ?>

Answer is : A
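
A minimal Find-S sketch in the hypothesis notation used by questions 36, 44, 49, 50 and 52; the toy weather data are assumed for illustration, with 'phi' as the maximally specific value and '?' as the maximally general one:

    # Find-S: generalise the maximally specific hypothesis only on positive examples
    def find_s(examples):
        h = ['phi'] * 4                       # start with <phi, phi, phi, phi>
        for x, label in examples:
            if label != 'Yes':                # negative examples are ignored, as in question 52
                continue
            h = [xi if hi in ('phi', xi) else '?' for hi, xi in zip(h, x)]
        return h

    data = [(('sunny', 'warm', 'normal', 'same'), 'Yes'),
            (('rainy', 'cold', 'normal', 'change'), 'No'),
            (('sunny', 'warm', 'high', 'same'), 'Yes')]
    print(find_s(data))                       # ['sunny', 'warm', '?', 'same']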

53 2 2 A OBJ What is one of the drawbacks of Empirical Risk Minimization? CO2 U E 1

A. Underfitting B. Both Overfitting and Underfitting

C. Overfitting D. No drawbacks

Answer is : C
54 2 2 A OBJ What is machine learning? CO2 R E 1

A. A type of computer B. A technique for teaching computers to learn from data

C. A programming language D. A hardware device

Answer is : B
55 2 2 A OBJ What is the primary goal of supervised learning? CO2 R E 1

A. Minimize errors in predictions B. Maximize computational efficiency

C. Predict future events D. Learn from unlabeled data

Answer is : A
56 2 2 A OBJ Which of the following is an example of an unsupervised learning algorithm? CO2 R E 1

A. Linear Regression B. K-Means Clustering

C. Decision Trees D. Support Vector Machines

Answer is : B
57 2 2 A OBJ In classification, what does the term "class label" refer to? CO2 R E 1

A. The name of the model B. The output of a regression model

C. The predicted category of an input D. The input features of a model

Answer is : C
58 2 2 A OBJ What is the purpose of regularization in machine learning models? CO2 R E 1

A. To increase model complexity B. To decrease model complexity

C. To speed up model training D. To increase bias

Answer is : B
59 2 2 A OBJ What is the difference between precision and recall? CO2 U E 1

A. Both measure the same thing B. Precision focuses on false positives, recall focuses on false negatives

C. Precision focuses on false negatives, recall focuses on false positives D. Precision and recall are unrelated metrics
Answer is : B
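
A minimal sketch of the two metrics in question 59; the confusion-matrix counts are assumed example values:

    tp, fp, fn = 40, 10, 20          # example counts: true positives, false positives, false negatives
    precision = tp / (tp + fp)       # penalised by false positives -> 0.8
    recall = tp / (tp + fn)          # penalised by false negatives -> ~0.667
    print(precision, recall)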
60 2 2 A OBJ What is the purpose of feature scaling in machine learning? CO2 U E 1

A. To remove outliers from the data B. To standardize the range of features

C. To increase the complexity of models D. To decrease the dimensionality of features

Answer is : B
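
A minimal feature-scaling (standardisation) sketch for question 60; the feature values are assumed examples:

    # standardise a feature so its scale is comparable to the other features
    values = [10.0, 20.0, 30.0, 40.0]
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    scaled = [(v - mean) / std for v in values]
    print(scaled)   # zero mean, unit variance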
61 2 2 B DESC CO2 Ap E 7
Describe Version Space with example.

62 2 2 B DESC CO2 Ap E 7
List out and explain the Issues in Machine Learning with example.

63 2 2 B DESC CO2 Ap E 7
Explain PAC Learning with example.

64 2 2 B DESC CO2 Ap E 7
Describe VC Dimension with example.

65 2 2 B DESC CO2 Ap E 7
Describe Supervised Machine Learning with example.


66 2 2 C DESC CO2 E E 10
Explain Candidate Elimination algorithm with suitable training and testing data sets.

67 2 2 C DESC CO2 E E 10
Explain in detail the Find-S algorithm with a suitable hypothesis.

68 2 2 C DESC CO2 E E 10
Briefly explain Finite and Infinite hypothesis spaces with a suitable example.

69 2 2 C DESC CO2 E E 10
Explain PAC Learning with a suitable example.

70 2 2 C DESC CO2 E E 10
List out and explain Challenges in machine learning with example.
