843 - AI - MS 2 2024-25
iv. a. A - 3; B - 2; C - 1 3 1
v. d. Empathy 4 1
Explanation: Rina is showing an understanding of her colleagues' struggles and is
offering help, which reflects empathy.
iv. d. 1 & 4 2 1
v. c. iterative 1 1
Explanation: Iterative processes involve repeated cycles of testing and refining to
improve model accuracy.
vi. a. Permanent 2 1
Explanation: Deployment is generally a permanent phase, as it involves putting the
model into production, making it accessible to users, and ensuring it serves its
intended purpose.
Q. 3
i. b. Data Preparation 1 1
Explanation: Data Preparation is the stage where data is organized and cleaned
to ensure it is ready for analysis.
ii. a. Engaging 3 1
Explanation: Engaging stories capture the audience's attention by connecting
with them on a personal level.
iii. c. Design 2 1
Explanation: The Design phase is where models are created, and algorithms are
selected based on project goals.
vi. b. deployment 2 1
Q. 4
i. a. Scoping 2 1
Explanation: Scoping is the initial phase where goals, success metrics, and
project expectations are defined.
v. c. 1-3-2-4 2 1
Explanation: The process starts by organizing data, then identifying
relationships, visualizing, and finally crafting a narrative with insights.
Q. 5
i. d. numbers 3 1
Explanation: Stories can make numbers or statistical data more engaging and
relatable.
Q. 16 The Build phase involves the actual implementation of the model as defined in the 2 2
Design phase. This includes coding the algorithms, training the model on the
training dataset, and iterating on the design to improve performance, whereas the
Design phase focuses on planning and strategy.
Q. 18 RMSE (Root Mean Square Error) measures the typical magnitude of the difference 1 4
between the predicted and actual values. A lower RMSE signifies a better fit of the
model to the data, while a higher RMSE indicates larger errors. In context, RMSE can
be used to compare the performance of different models, helping to select the most
accurate one.
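A minimal sketch of the RMSE calculation in Python (illustrative only, not part of the marking scheme; the sample values are made up for demonstration):

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error: square root of the mean of squared residuals
    squared_errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical actual vs. predicted values for two models
actual = [3.0, 5.0, 7.0, 9.0]
predicted = [2.5, 5.5, 7.0, 8.0]
print(round(rmse(actual, predicted), 3))  # → 0.612
```

Comparing this value across candidate models, the one with the lowest RMSE on the same data would be preferred.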
Q. 20 In k-fold cross-validation, the dataset is divided into k equally sized subsets (or 1 4
folds). The model is trained on k-1 folds and validated on the remaining fold.
This process is repeated k times, with each fold serving as the validation set once.
The final performance metric is the average of the k individual tests, providing a
robust evaluation of the model.
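The fold-splitting step above can be sketched in plain Python (an illustrative helper, not part of the marking scheme; the function name `k_fold_splits` is hypothetical):

```python
def k_fold_splits(n_samples, k):
    # Divide indices 0..n_samples-1 into k roughly equal folds and
    # yield (train_indices, validation_indices) for each of the k rounds.
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder when n_samples % k != 0
        end = start + fold_size if i < k - 1 else n_samples
        validation = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, validation

# Each of the 10 samples serves as validation data exactly once across 5 folds
for train, validation in k_fold_splits(10, 5):
    print(train, validation)
```

In practice the model would be trained and scored inside the loop, and the k scores averaged to give the final metric.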