8. A True Positive occurs when the model predicts a ……….……................ outcome, and the actual outcome is also
……….…….................
9. The formula for classification accuracy is: Classification Accuracy = ……….……................ / ……….…….................
10. In the case of a model predicting credit card fraud, the model may use the confusion matrix to measure Precision,
Recall, Accuracy, and ……….…….................
C. State whether these statements are true or false.
1. Accuracy refers to the percentage of incorrect predictions made by the model. ……….……................
2. Error is the difference between the predicted value and the actual outcome. ……….……................
3. In Train-test split, the training subset is used to make the model learn patterns from the data,
comprising 50% to 60% of the dataset. ……….……................
4. In underfitting, the model is too complex and performs poorly on both training and test data. ……….……................
5. Model evaluation is a process that critically examines a model to assess its performance. ……….……................
D. Match the following:
1. Classification a. Error Matrix
2. Confusion Matrix b. Type 2 Error
3. False Positive c. Classification Model
4. False Negative d. Supervised Learning
5. F1 Score e. Type 1 Error
SECTION B (Subjective Type Questions)
A. Short answer type questions.
1. Why is it important to maintain a balance between bias and variance in a machine learning model?
Ans. It is important to maintain a balance between bias and variance in a machine learning model to ensure that the model
performs consistently on both the training data and the test data.
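A minimal Python sketch (an illustrative addition, not from the textbook) that compares training and test accuracy to judge this balance: a large gap between the two scores points to high variance (overfitting), while low scores on both point to high bias (underfitting).

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # toy data; max_depth limits model complexity (and hence variance)
    X, y = make_classification(n_samples=200, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    model = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
    print(model.score(X_train, y_train))  # training accuracy
    print(model.score(X_test, y_test))    # test accuracy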
2. Where should we use recall?
Ans. Recall is generally used for an unbalanced dataset, where False Negatives become important and the model needs to
reduce the FNs as much as possible.
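A minimal sketch (with made-up counts, for illustration only) of how recall is computed from confusion-matrix values:

    # Recall = TP / (TP + FN)
    TP = 80   # true positives (illustrative count)
    FN = 20   # false negatives (illustrative count)
    recall = TP / (TP + FN)
    print(recall)  # 0.8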
3. What is the primary benefit of using the Train-Test Split technique in model evaluation?
Ans. The primary benefit of using the Train-Test Split technique in model evaluation is that it gives an unbiased estimate of
model performance on new data.
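A minimal Python sketch (an illustrative addition using scikit-learn's train_test_split) showing an 80/20 split of a toy dataset:

    from sklearn.model_selection import train_test_split

    X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]  # features
    y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]                       # labels
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)   # 80% train, 20% test
    print(len(X_train), len(X_test))  # 8 2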
4. What is the significance of the False Negative (FN) in a confusion matrix?
Ans. A False Negative (FN) indicates that the model incorrectly predicted a negative outcome, even though the actual
outcome was positive. It can be critical in scenarios like medical diagnosis.
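A minimal sketch (with made-up predictions) that reads the False Negative count out of a confusion matrix using scikit-learn:

    from sklearn.metrics import confusion_matrix

    y_actual    = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = positive, 0 = negative
    y_predicted = [1, 0, 0, 1, 0, 0, 0, 1]   # model's predictions (illustrative)
    tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()
    print("False Negatives:", fn)  # actual positives predicted as negative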
5. How does classification accuracy differ when the dataset is unbalanced?
Ans. When the dataset is unbalanced, classification accuracy can be misleading, as the model may predict the majority class
correctly but fail on the minority class.
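A minimal sketch (with made-up data) showing how accuracy can look high on an unbalanced dataset even when the model never detects the minority class:

    from sklearn.metrics import accuracy_score, recall_score

    y_actual    = [0] * 95 + [1] * 5   # 95 negatives, only 5 positives
    y_predicted = [0] * 100            # model always predicts the majority class
    print(accuracy_score(y_actual, y_predicted))  # 0.95, yet...
    print(recall_score(y_actual, y_predicted))    # ...recall is 0.0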