Class X - Artificial Intelligence - Evaluation - Question Bank
CLASS: X
SECTION: ABEF
SUBJECT: ARTIFICIAL INTELLIGENCE
SESSION: 2023-24
UNIT 7: EVALUATION
QUESTION BANK
Q1. Rajat has made a model which predicts the performance of Indian Cricket players in upcoming matches. He
collected the data of players’ performance with respect to stadium, bowlers, opponent team and health. His model
works with good accuracy and precision values. Which of the statements given below is incorrect?
(a) Data gathered with respect to stadium, bowlers, opponent team and health is known as Testing Data.
(b) Data given to an AI model to check accuracy and precision is Testing Data.
(c) Training data and testing data are acquired in the Data Acquisition stage.
(d) Training data is always larger as compared to testing data.
[CBSE SAMPLE PAPER, 2023]
Ans. (a) Data gathered with respect to stadium, bowlers, opponent team and health is known as Testing Data.
Q2. Raunak was learning the conditions that make up the confusion matrix. He came across a scenario in which the
machine that was supposed to predict an animal was always predicting not an animal. What is this condition called?
(a) False Positive
(b) True Positive
(c) False Negative
(d) True Negative [CBSE SAMPLE PAPER, 2022]
Ans. (c) False Negative
Q3. Which of the following statements is not true about overfitting models?
(a) This model learns the pattern and noise in the data to such extent that it harms the performance of the
model on the new dataset.
(b) Training result is very good and the test result is poor.
(c) It interprets noise as patterns in the data.
(d) The training accuracy and test accuracy both are low. [CBSE SAMPLE PAPER, 2022]
Ans. (d) The training accuracy and test accuracy both are low.
Q4. ____________is defined as the percentage of correct predictions out of all the observations.
a) Predictions
b) Accuracy
c) Reality
d) F1 Score [CBSE SAMPLE PAPER, 2021]
Ans. b) Accuracy
Q5. What will be the outcome, if the Prediction is “Yes” and it matches with the Reality? What will be
the outcome, if the Prediction is “Yes” and it does not match the Reality?
a) True Positive, True Negative
b) True Negative, False Negative
c) True Negative, False Positive
d) True Positive, False Positive [CBSE SAMPLE PAPER, 2021]
Ans. d) True Positive, False Positive
Q8. Which evaluation parameter takes into consideration all the correct predictions?
[CBSE SAMPLE PAPER, 2023]
Ans. Accuracy
Q9. The output given by the AI machine is known as ________ (Prediction/ Reality). [CBSE SAMPLE PAPER, 2022]
Ans. Prediction
Q18. Ritika is learning evaluation. She wants to recognize the concept of evaluation from the facts given
below:
• A comparison between prediction and reality
• Helps users to understand the prediction result
• It is not an evaluation metric
• A record that helps in the evaluation
Help Ritika by giving the name to recognize the concept of evaluation.
Ans. The concept is Confusion Matrix
Q19. What do you understand by confusion matrix?
Ans. Confusion matrix is a tabular structure which helps in measuring the performance of an AI model
using the test data.
Q20. Mention the two conditions when prediction matches reality.
Ans. The two conditions when prediction matches reality are:
1. True Positive
2. True Negative
Q21. Shreya is a student of class 10 AI. She wants to know the methods of evaluation. Support her with
your answer.
Ans. The evaluation methods are:
1) Accuracy
2) Precision
3) Recall
4) F1 Score
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
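The four evaluation methods listed above can be sketched as small Python functions (a minimal illustration for checking answers, not part of the syllabus; the function names are my own):

```python
# Minimal sketch of the four evaluation metrics, written directly
# from the formulas in this unit. tp/tn/fp/fn are counts taken
# from a confusion matrix.

def accuracy(tp, tn, fp, fn):
    # Correct predictions out of all observations
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # True Positives out of all cases predicted positive
    return tp / (tp + fp)

def recall(tp, fn):
    # True Positives out of all actually positive cases
    return tp / (tp + fn)

def f1_score(p, r):
    # Harmonic mean (balance) of precision and recall
    return 2 * (p * r) / (p + r)
```

For example, `f1_score(0.25, 0.5)` gives 1/3 ≈ 0.33, matching a precision of 1/4 and a recall of 2/4.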
Q23. If a model predicts there is no fire where in reality there is a 3% chance of forest fire breaking out.
What is the accuracy?
Ans. The elements of the formula are as follows:
1. True Positive: 0
2. True Negative: 97
3. False Negative: 3
4. Total Cases: 100
Hence, Accuracy = (TP + TN)/Total = (97 + 0)/100 = 97%
Q24. What do you mean by precision?
Ans. The percentage of True Positive cases out of all the cases where the prediction is True is known as
precision.
Q25. Which cases are taken into account by precision?
www.queensvalleyschool.in Sector-8, Phase-I, Dwarka, New Delhi-110077
“My day begins with gratitude.”
Ans. True Positives and False Positives cases are taken into account by precision.
Q26. Which cases are taken into account by the recall method?
Ans. True Positives and False Negatives cases are taken into account by the recall method.
Q27. Which measures are used to know the performance of the model?
Ans. There are two measures used to know the performance of the model: Recall and Precision
Q28. Rohit is working on an AI model. He wanted to know the balance between precision and recall. What
is it?
Ans. The balance between precision and recall is known as the F1 score.
Q30. The task is to correctly identify each mobile phone, where photos of Oppo and Vivo phones
are taken into consideration. Oppo phones are the positive cases and Vivo phones are negative cases. The
model is given 10 images of Oppo and 15 images of Vivo phones. It correctly identifies 8 Oppo phones and
12 Vivo phones. Create a confusion matrix for the particular cases.
Ans.
CONFUSION MATRIX          REALITY
                        YES      NO
PREDICTION    YES      8-TP     3-FP
              NO       2-FN     12-TN
Q31. There are some images of boys and girls. The girls are positive cases and boys are negative cases. The
model is given 20 images of girls and 30 images of boys. The machine correctly identifies 12 girls and 23
boys. Create a confusion matrix for the particular cases.
Ans.
CONFUSION MATRIX          REALITY
                        YES      NO
PREDICTION    YES      12-TP    7-FP
              NO       8-FN     23-TN
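The counting behind Q31 can be checked with a short Python sketch (illustrative only; the variable names are my own, not from the syllabus):

```python
# Confusion-matrix counts for the Q31 scenario:
# girls are the positive cases, boys are the negative cases.
actual_positives = 20    # images of girls given to the model
actual_negatives = 30    # images of boys given to the model
correct_positives = 12   # girls identified correctly
correct_negatives = 23   # boys identified correctly

tp = correct_positives                       # girl predicted as girl
fn = actual_positives - correct_positives    # girl predicted as boy
tn = correct_negatives                       # boy predicted as boy
fp = actual_negatives - correct_negatives    # boy predicted as girl

print(tp, fp, fn, tn)  # prints: 12 7 8 23
```

The same subtraction pattern gives the matrices for Q30 and Q32: whatever the model fails to identify among the positives becomes FN, and among the negatives becomes FP.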
Q32. There is data given for Facebook and Instagram users. The model is given data for 200 Facebook users
and 250 Instagram users. The machine identified 120 Facebook users correctly and 245 users of Instagram
correctly. Create a confusion matrix for the same.
Ans. (taking Instagram users as the positive cases and Facebook users as the negative cases)
CONFUSION MATRIX          REALITY
                        YES      NO
PREDICTION    YES      245-TP   80-FP
              NO       5-FN     120-TN
A higher precision leads to a lower recall. Higher Recall leads to a lower precision.
Q35. Differentiate between True Positive and False Positive with the help of an example.
Ans.
True Positive: The model correctly predicts the positive class (Prediction: Yes, Reality: Yes).
Example: A blood test correctly diagnosing that a patient has diabetes.
False Positive: The model incorrectly predicts the positive class (Prediction: Yes, Reality: No).
Example: A blood test diagnosing that the patient has diabetes when in reality the patient is not diabetic.
Q37. Consider that there are 10 images. Out of these 7 are apples and 3 are bananas. Kirti has run the
model on the images and it catches 5 apples correctly and 2 bananas correctly. What is the accuracy of the
model?
Ans. Total correct predictions: 5 + 2 = 7
Total predictions made: 5 + 2 = 7
So, accuracy = (7/7) × 100% = 100%.
The model does not predict all of the images, but whatever predictions it makes are correct. Hence accuracy
is 100%.
Q38. The precision of the model is 1/4 and the recall of the same is 2/4. What is the F1 score of the
model?
Ans. F1 score = 2 × ((precision × recall) / (precision + recall))
= 2 × (1/4 × 2/4) / (1/4 + 2/4)
= 2 × (1/8) / (3/4)
= (1/4) / (3/4)
= 1/3 ≈ 0.33
Q39. Consider the confusion matrix given below. Calculate accuracy, precision, recall and F1 Score.
                      Reality
                      Yes    No
Prediction    Yes     10     55
              No      10     25
Ans.
a. Accuracy = (TP+TN)/(TP+TN+FP+FN) × 100%
= (10+25)/(10+25+55+10) × 100%
= 35/100 × 100% = 35%
b. Precision = TP/(TP+FP) × 100%
= 10/(10+55) = 10/65 ≈ 0.15 × 100% = 15%
c. Recall = TP/(TP+FN)
= 10/(10+10) = 10/20 = 0.5
d. F1 score = 2 × ((Precision × Recall)/(Precision + Recall))
= 2 × ((0.15 × 0.5) / (0.15 + 0.5))
= 2 × (0.075 / 0.65) = 2 × 0.115 = 0.23
Here there is a tradeoff between precision and recall. Consider a spam-filter scenario:
False Positive (impacts Precision): Mail is predicted as “spam” but it is not.
False Negative (impacts Recall): Mail is predicted as “not spam” but it is spam.
Too many False Negatives make the spam filter ineffective, whereas False Positives may cause
important mails to be missed. Since losing important mail is the costlier error, Precision is the more
important metric to improve here.
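The arithmetic in this answer can be verified with a short Python sketch (illustrative only):

```python
# Metrics for the confusion matrix above: TP=10, FP=55, FN=10, TN=25.
tp, fp, fn, tn = 10, 55, 10, 25

accuracy = (tp + tn) / (tp + tn + fp + fn)          # 35/100 = 0.35
precision = tp / (tp + fp)                          # 10/65 ≈ 0.1538
recall = tp / (tp + fn)                             # 10/20 = 0.5
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.235

# Note: the worked answer gets 0.23 because it rounds precision
# to 0.15 before computing F1; using the exact fraction gives ≈ 0.235.
print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(f1, 2))
# prints: 0.35 0.15 0.5 0.24
```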
Q 40. An AI model made the following sales prediction for a new mobile phone which they have recently
launched: [CBSE SAMPLE PAPER, 2023]
                      Reality
                      Yes    No
Prediction    Yes     50     40
              No      12     10
(i) Identify the total number of wrong predictions made by the model.
(ii) Calculate precision, recall and F1 Score.
Ans. (i) The total number of wrong predictions made by the model is the sum of false positives and false negatives.
FP + FN = 40 + 12 = 52
(ii) Precision = TP/(TP+FP) = 50/(50+40) = 50/90 ≈ 0.55
Recall = TP/(TP+FN) = 50/(50+12) = 50/62 ≈ 0.81
F1 Score = 2 × Precision × Recall/(Precision + Recall)
= 2 × 0.55 × 0.81/(0.55 + 0.81)
= 0.891/1.36
≈ 0.65
Q41. Deduce the formula of the F1 Score. What is the need for its formulation?
Ans.
a. The F1 Score, also called the F score or F measure, is a measure of a test’s accuracy. It is calculated from the
precision and recall of the test, where the precision is the number of correctly identified positive results divided
by the number of all positive results, including those not identified correctly, and the recall is the number of
correctly identified positive results divided by the number of all samples that should have been identified as
positive.
b. The F1 score is defined as the weighted harmonic mean of the test’s precision and recall.
c. This score is calculated according to the formula:
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)