2. Which evaluation technique involves dividing the dataset into training and testing subsets?
a. Precision b. Gradient Boosting
c. Train-test split d. Recall
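A quick sketch of the train-test split technique named in Q2, done by hand in Python so no extra library is needed (the 100-record dataset and the 80/20 ratio are assumed for illustration):

```python
import random

# Assumed toy dataset of 100 records, labelled 0..99 for illustration.
data = list(range(100))

random.seed(42)       # fixed seed so the split is reproducible
random.shuffle(data)  # shuffle first so the split is not biased by ordering

split = int(0.8 * len(data))          # 80% of records go to training
train, test = data[:split], data[split:]

print(len(train), len(test))          # 80 20
```

The model is trained only on `train`; `test` is held back so the evaluation measures how well the model generalises to data it has never seen.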
3. In which scenario is a model said to be "underfitting"?
a. The model performs poorly on both training and test sets
b. The model performs well on both training and test sets
c. The model memorizes the training data but fails to generalize
d. The model performs well on the training set but poorly on the test set
4. What is True Positive (TP) in the confusion matrix?
a. When the model predicts a negative value correctly
b. When the model predicts a negative value incorrectly
c. When the model predicts a positive value incorrectly
d. When the model predicts a positive value correctly
5. In a confusion matrix, the rows represent the ………………………. values of the target variable.
a. Predicted b. Actual
c. Desired d. Assigned
6. Which metric is most suitable when you want to minimise false positives?
a. Accuracy b. Precision
c. Recall d. F1 Score
7. Which of the following statements is true about F1 Score?
a. F1 score is the average of precision and recall.
b. F1 score only considers false negatives.
c. F1 score is the sum of precision and recall.
d. F1 score is always equal to the accuracy of the model.
8. What does the recall metric measure in a classification problem?
a. The proportion of true positive instances out of all predicted positive instances
b. The proportion of actual positive instances that were correctly identified
c. The overall accuracy of the model
d. The proportion of false negatives out of all predicted negative instances
9. Which of the following is true about a confusion matrix?
a. The confusion matrix shows only the correct predictions of a model.
b. The diagonal elements of a confusion matrix represent the false positives and false negatives.
c. The confusion matrix can be used to calculate accuracy, precision, recall, and F1 score.
d. The confusion matrix is used only for regression problems.
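The metrics asked about in Q4-Q9 can all be computed from the four confusion-matrix counts. A short worked sketch in Python (the counts below are assumed values for a 100-sample test set, chosen only for illustration):

```python
# Assumed confusion-matrix counts: TP, FP, FN, TN.
tp, fp, fn, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + fp + fn + tn)  # correct predictions out of all predictions
precision = tp / (tp + fp)                   # of predicted positives, how many were truly positive
recall    = tp / (tp + fn)                   # of actual positives, how many were found

# F1 is the harmonic mean of precision and recall (not their simple average).
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(f1, 2))
```

Note that precision penalises false positives (Q6) while recall penalises false negatives (Q8), which is why the two can differ on the same model.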
10. Which of these is a classification use case? [CBSE Handbook]
a. House Price prediction b. Credit card fraud
c. Salary prediction d. None of these
11. A teacher's marks prediction system predicts the marks of a student as 75, but the actual marks obtained by the student
are 80. What is the absolute error in the prediction? [CBSE Handbook]
a. 5 b. 10
c. 15 d. 20
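The calculation in Q11 can be checked in one line: absolute error is the magnitude of the difference between the actual and predicted values.

```python
# Worked solution to Q11 using the values given in the question.
predicted = 75
actual = 80

absolute_error = abs(actual - predicted)  # |80 - 75|
print(absolute_error)  # 5
```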

