Class X - Artificial Intelligence - Evaluation - Question Bank

“My day begins with gratitude.”


CLASS: X
SECTION: ABEF
SUBJECT: ARTIFICIAL INTELLIGENCE
SESSION: 2023-24
UNIT 7: EVALUATION
QUESTION BANK
Q1. Rajat has made a model which predicts the performance of Indian Cricket players in upcoming matches. He
collected the data of players’ performance with respect to stadium, bowlers, opponent team and health. His model
works with good accuracy and precision values. Which of the statements given below is incorrect?
(a) Data gathered with respect to stadium, bowlers, opponent team and health is known as Testing Data.
(b) Data given to an AI model to check accuracy and precision is Testing Data.
(c) Training data and testing data are acquired in the Data Acquisition stage.
(d) Training data is always larger as compared to testing data.
[CBSE SAMPLE PAPER, 2023]
Ans. (a) Data gathered with respect to stadium, bowlers, opponent team and health is known as Testing Data.

Q2. Raunak was learning the conditions that make up the confusion matrix. He came across a scenario in which the
machine that was supposed to predict an animal was always predicting not an animal. What is this condition called?
(a) False Positive
(b) True Positive
(c) False Negative
(d) True Negative [CBSE SAMPLE PAPER, 2022]
Ans. (c) False Negative
Q3. Which of the following statements is not true about overfitting models?
(a) This model learns the pattern and noise in the data to such an extent that it harms the performance of
the model on the new dataset.
(b) Training result is very good and the test result is poor.
(c) It interprets noise as patterns in the data.
(d) The training accuracy and test accuracy both are low. [CBSE SAMPLE PAPER, 2022]

Ans. (d) The training accuracy and test accuracy both are low.
Q4. ____________is defined as the percentage of correct predictions out of all the observations.
a) Predictions
b) Accuracy
c) Reality
d) F1 Score [CBSE SAMPLE PAPER, 2021]

Ans. b) Accuracy

Q5. What will be the outcome, if the Prediction is “Yes” and it matches with the Reality? What will be
the outcome, if the Prediction is “Yes” and it does not match the Reality?
a) True Positive, True Negative
b) True Negative, False Negative
c) True Negative, False Positive
d) True Positive, False Positive [CBSE SAMPLE PAPER, 2021]

Ans: d) True Positive, False Positive

www.queensvalleyschool.in Sector-8, Phase-I, Dwarka, New Delhi-110077


Q6. Which two evaluation methods are used to calculate F1 Score? [CBSE SAMPLE PAPER, 2022]
(a) Precision and Accuracy
(b) Precision and Recall
(c) Accuracy and Recall
(d) Precision, F1 score

Ans. (b) Precision and Recall

Q7. The Recall evaluation method is
a) defined as the fraction of positive cases that are correctly identified.
b) defined as the percentage of true positive cases versus all the cases where the prediction is true.
c) defined as the percentage of correct predictions out of all the observations.
d) a comparison between the prediction and reality.

Ans: a) defined as the fraction of positive cases that are correctly identified.

Q8. Which evaluation parameter takes into consideration all the correct predictions?
[CBSE SAMPLE PAPER, 2023]
Ans. Accuracy

Q9. The output given by the AI machine is known as ________ (Prediction/ Reality). [CBSE SAMPLE PAPER, 2022]
Ans. Prediction

Q10. Define the term Evaluation.


Ans. Evaluation is the process of understanding the reliability of an AI model by feeding the test dataset
into the model and comparing its outputs with the actual answers.
Q11. Name two parameters that are considered for the evaluation of a model.
Ans. The two parameters considered for evaluation of a model are: Prediction and Reality
Q12. What is not recommended to evaluate the model?
Ans. It’s not recommended to use the data used to build the model to evaluate the model.

Q13. What do you mean by prediction?


Ans. Prediction refers to the output produced by the AI model.

Q14. What is reality?


Ans. Reality refers to the actual scenario about which the model makes its prediction.

Q15. Define overfitting.


Ans. When a model simply memorizes the whole training data set, it will always predict the correct label for
any point in the training set but performs poorly on new data. This is known as overfitting.

Q16. Enlist the data sets used in AI modelling.


Ans. There are two types of datasets used in AI.
1. Training Data Set
2. Testing Data Set

Q17. What are the cases considered for evaluation?
Ans. 1. True Positive
2. True Negative
3. False Positive
4. False Negative

Q18. Ritika is learning evaluation. She wants to recognize the concept of evaluation from the facts given
below:
• A comparison between prediction and reality
• Helps users to understand the prediction result
• It is not an evaluation metric
• A record that helps in the evaluation
Help Ritika by naming the concept.
Ans. The concept is the Confusion Matrix.
Q19. What do you understand by confusion matrix?
Ans. Confusion matrix is a tabular structure which helps in measuring the performance of an AI model
using the test data.
Q20. Mention two conditions when prediction matches reality.
Ans. The two conditions when prediction matches reality are:
1. True Positive
2. True Negative
Q21. Shreya is a student of class 10 AI. She wants to know the methods of evaluation. Support her with
your answer.
Ans. The evaluation methods are:
1) Accuracy
2) Precision
3) Recall
4) F1 Score

Q22. What is F1 Score in Evaluation?


Ans: F1 score can be defined as the measure of balance between precision and recall.

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
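As a quick illustration, the formula above can be expressed as a small Python function (a sketch of our own, not part of the syllabus; the function name is illustrative):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # avoid division by zero when both are 0
    return 2 * (precision * recall) / (precision + recall)

# When precision and recall are equal, the F1 score equals that value.
print(f1_score(0.5, 0.5))  # 0.5
```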

Q23. A model always predicts that there is no fire, while in reality there is a 3% chance of a forest fire
breaking out. What is the accuracy of the model?
Ans. The elements of the formula are as follows:
1. True Positive: 0
2. True Negative: 97
3. Total Cases: 100
Hence, Accuracy = (97 + 0)/100 = 97%
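The accuracy calculation above can be checked with a short Python sketch (illustrative only; the function name is our own):

```python
def accuracy(tp: int, tn: int, total: int) -> float:
    """Percentage of correct predictions out of all observations."""
    return (tp + tn) * 100 / total

# Forest-fire example: the model never predicts fire,
# so TP = 0 and TN = 97 out of 100 cases.
print(accuracy(tp=0, tn=97, total=100))  # 97.0
```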
Q24. What do you mean by precision?
Ans. The percentage of true positive cases versus all the cases where the prediction is True is known as
precision.
Q25. Which cases are taken into account by precision?
Ans. True Positives and False Positives cases are taken into account by precision.
Q26. Which cases are taken into account by the recall method?
Ans. True Positives and False Negatives cases are taken into account by the recall method.
Q27. Which measures are used to know the performance of the model?
Ans. There are two measures used to know the performance of the model: Recall and Precision
Q28. Rohit is working on the AI model. He wanted to know the balance between precision and recall. What
it is?
Ans. The balance between precision and recall is known F1 score.

Q29. Define the following terms:


a. Positive:
b. Negative:
c. True Positive
d. True Negative
e. False Positive (Type 1 error)
f. False Negative (Type 2 error)
Ans.
a. Positive: The prediction is positive for the scenario. For example, there will be Board exams.
b. Negative: The prediction is negative for the scenario. For example, there will be no Board exams
conducted this year.
c. True Positive: The predicted value matches the actual value i.e. the actual value was positive and the
model predicted a positive value.
d. True Negative: The predicted value matches the actual value i.e. the actual value was negative and the
model predicted a negative value.
e. False Positive (Type 1 error): The predicted value was falsely predicted i.e. the actual value was negative
but the model predicted a positive value.
f. False Negative (Type 2 error): The predicted value was falsely predicted i.e. the actual value was positive
but the model predicted a negative value.
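The four cases defined above can be illustrated with a small Python sketch (our own illustration; the function name is hypothetical):

```python
def outcome(prediction: bool, reality: bool) -> str:
    """Classify a single prediction against reality."""
    if prediction and reality:
        return "True Positive"
    if not prediction and not reality:
        return "True Negative"
    if prediction and not reality:
        return "False Positive"   # Type 1 error
    return "False Negative"       # Type 2 error

print(outcome(prediction=True, reality=False))  # False Positive
```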

Q30. The task is to correctly identify mobile phones from photos, where photos of Oppo and Vivo phones
are taken into consideration. Oppo phones are the positive cases and Vivo phones are the negative cases.
The model is given 10 images of Oppo phones and 15 images of Vivo phones. It correctly identifies 8 Oppo
phones and 12 Vivo phones. Create a confusion matrix for these cases.
Ans.

CONFUSION MATRIX            REALITY
                          YES        NO

PREDICTION     YES      8 (TP)    3 (FP)
               NO       2 (FN)   12 (TN)
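As a check, the counts in this matrix can be derived in a few lines of Python (an illustrative sketch, not part of the answer):

```python
# Q30 counts: 10 Oppo (positive) images, 15 Vivo (negative) images;
# 8 Oppo and 12 Vivo identified correctly.
positives, negatives = 10, 15
tp, tn = 8, 12
fn = positives - tp   # positives the model missed
fp = negatives - tn   # negatives wrongly called positive

matrix = {"TP": tp, "FP": fp, "FN": fn, "TN": tn}
print(matrix)  # {'TP': 8, 'FP': 3, 'FN': 2, 'TN': 12}
```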


Q31. There are some images of boys and girls. The girls are positive cases and boys are negative cases. The
model is given 20 images of girls and 30 images of boys. The machine correctly identifies 12 girls and 23
boys. Create a confusion matrix for the particular cases.

Ans.
CONFUSION MATRIX            REALITY
                          YES        NO

PREDICTION     YES     12 (TP)    7 (FP)
               NO       8 (FN)   23 (TN)

Q32. There is data given for Facebook and Instagram users. The model is given data for 200 Facebook users
and 250 Instagram users. The machine identified 120 Facebook users correctly and 245 users of Instagram
correctly. Create a confusion matrix for the same.

Ans. (Instagram users are taken as the positive cases.)

CONFUSION MATRIX            REALITY
                          YES        NO

PREDICTION     YES    245 (TP)   80 (FP)
               NO       5 (FN)  120 (TN)

Q33. What is a confusion matrix? Explain in detail.


Ans. a. A Confusion Matrix is a table that is often used to describe the performance of a classification model
(or "classifier") on a set of test data for which the true values are known.
b. A Confusion Matrix provides a more insightful picture of not only the performance of a predictive
model, but also which classes are being predicted correctly and incorrectly, and what types of errors are being
made.

In the confusion matrix:

• The target variable has two values: Positive and Negative


• The columns represent the actual values of the target variable.
• The rows represent the predicted values of the target variable.


Q34. Differentiate between Precision and Recall.

Ans.
Precision: the ratio of True Positives to all predicted positives (TP + FP), i.e. how many of the cases
predicted positive are actually positive.
Recall: the measure of the model correctly identifying True Positives out of all actual positives (TP + FN).
A higher precision generally comes at the cost of a lower recall, and a higher recall at the cost of a lower
precision.
Q35. Differentiate between True Positive and False Positive with the help of an example.
Ans.
True Positive: the model correctly predicts the positive class (Prediction: Yes, Reality: Yes).
Example: a blood test correctly diagnosing that a patient has diabetes.
False Positive: the model incorrectly predicts the positive class (Prediction: Yes, Reality: No).
Example: a blood test diagnosing that a patient has diabetes when in reality the patient is not diabetic.

Q36. List down the importance of Evaluation Process.


Ans. a. Evaluation is an important step of the AI project cycle. To avoid overfitting, a model should be
evaluated with test data – data not seen by the model.
b. Evaluation ensures that the model is operating correctly and optimally.
c. Evaluation is an initiative to understand how well the model achieves its goals.
d. Evaluation helps to determine what works well and what could be improved in a program.

Q37. Consider that there are 10 images. Out of these, 7 are apples and 3 are bananas. Kirti has run the
model on the images and it identifies 5 apples correctly and 2 bananas correctly. What is the accuracy of
the model?
Ans. Total correct predictions: 5 + 2 = 7
Total observations: 10
So, accuracy = (7/10) × 100% = 70%.
The model classifies all 10 images but gets only 7 of them right; since accuracy is the percentage of correct
predictions out of all observations, the accuracy is 70%.
Q38. The precision of the model is 1/4 and the recall is 2/4. What is the F1 score of the model?
Ans. F1 score = 2 × ((precision × recall) / (precision + recall))
= 2 × ((1/4 × 2/4) / (1/4 + 2/4))
= 2 × ((1/8) / (3/4))
= 2 × 1/6
= 1/3 ≈ 0.33

Q39. Calculate Accuracy, Precision, Recall and F1 Score for the following Confusion Matrix on SPAM
FILTERING. Also suggest which metric would not be a good evaluation parameter here and why?

Confusion Matrix on          Reality
SPAM FILTERING:           Yes      No

Prediction:    Yes         10      55
               No          10      25
Ans.
a. Accuracy = (TP+TN)/(TP+TN+FP+FN) × 100%
= (10+25)/(10+25+55+10) × 100%
= 35/100 × 100% = 35%
b. Precision = TP/(TP+FP) × 100%
= 10/(10+55) = 10/65 = 0.15 → 15%
c. Recall = TP/(TP+FN)
= 10/(10+10) = 10/20 = 0.5
d. F1 Score = 2 × ((Precision × Recall)/(Precision + Recall))
= 2 × ((0.15 × 0.5)/(0.15 + 0.5))
= 2 × (0.075/0.65) = 2 × 0.115 = 0.23
There is a tradeoff between the two metrics here. In spam filtering, a False Positive (a genuine mail
predicted as "spam") causes an important mail to be missed, while a False Negative (a spam mail predicted
as "not spam") merely lets spam through. Since False Positives are the costlier error, Recall on its own
would not be a good evaluation parameter; Precision is the more important metric to improve.
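All four metrics can be verified with a short Python sketch (illustrative; the function name is our own):

```python
def evaluate(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the four evaluation metrics from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Spam-filtering matrix from Q39: TP=10, TN=25, FP=55, FN=10
metrics = evaluate(tp=10, tn=25, fp=55, fn=10)
print(round(metrics["accuracy"], 2))   # 0.35
print(round(metrics["precision"], 2))  # 0.15
print(round(metrics["recall"], 2))     # 0.5
print(round(metrics["f1"], 2))         # 0.24 (0.23 when precision is first rounded to 0.15)
```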

Q 40. An AI model made the following sales prediction for a new mobile phone which they have recently
launched: [CBSE SAMPLE PAPER, 2023]

                            Reality
Confusion Matrix         Yes      No

Prediction     Yes        50      40
               No         12      10

(i) Identify the total number of wrong predictions made by the model.
(ii) Calculate precision, recall and F1 Score.

Ans. (i) The total number of wrong predictions made by the model is the sum of false positives and false
negatives:
FP + FN = 40 + 12 = 52
(ii) Precision = TP/(TP+FP) = 50/(50+40) = 50/90 = 0.55
Recall = TP/(TP+FN) = 50/(50+12) = 50/62 = 0.81
F1 Score = 2 × Precision × Recall/(Precision + Recall)
= 2 × 0.55 × 0.81/(0.55 + 0.81)
= 0.891/1.36
= 0.65
Q41. Deduce the formula of the F1 Score. Why is it needed?
Ans.
a. The F1 Score, also called the F score or F measure, is a measure of a test’s accuracy. It is calculated from the
precision and recall of the test, where the precision is the number of correctly identified positive results divided
by the number of all positive results, including those not identified correctly, and the recall is the number of
correctly identified positive results divided by the number of all samples that should have been identified as
positive.
b. The F1 score is defined as the weighted harmonic mean of the test’s precision and recall.
c. This score is calculated according to the formula.

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)


It is necessary because:
• F-Measure provides a single score that balances both the concerns of precision and recall in one
number.
• A good F1 score means that you have low false positives and low false negatives. An F1 score is
considered perfect when it’s 1, while the model is a total failure when it’s 0.
• F1 Score is a better metric to evaluate our model on real-life classification problems and when
imbalanced class distribution exists.
