
7.  What does "Error" refer to in model evaluation?
    a.  The difference between the model’s prediction and the actual outcome
    b.  The total number of correct predictions
    c.  The total number of false positives in the model
    d.  The percentage of false negatives in the model

8.  When is accuracy an appropriate metric to use?
    a.  When the dataset is highly imbalanced, with a significant difference between the positive and negative classes.
    b.  When the dataset is balanced and both positive and negative classes are nearly equal.
    c.  When precision is the most important factor.
    d.  When recall is the most important factor.

9.  What does the classification accuracy of a model indicate?
    a.  The ability of the model to classify negative cases
    b.  The number of false positives in the dataset
    c.  The proportion of incorrect predictions
    d.  The percentage of correct predictions out of total predictions
10. Which metric is used to reduce the number of false positives and false negatives?
    a.  Accuracy          b.  F1-Score
    c.  Precision         d.  Recall

11. A student solved 90 out of 100 questions correctly in a multiple-choice exam. What is the error rate of the
    student's answers?                                   [CBSE Handbook]
    a.  10%          b.  9%
    c.  8%           d.  11%
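
    The error rate here is simply the share of incorrect predictions, i.e. 1 − accuracy. A minimal Python sketch to verify the arithmetic (the counts are taken from question 11):

        # Error rate = incorrect predictions / total = 1 - accuracy.
        # Counts from question 11: 90 correct answers out of 100.
        correct = 90
        total = 100

        accuracy = correct / total            # 0.90
        error_rate = 1 - accuracy             # 0.10

        print(f"Accuracy:   {accuracy:.0%}")    # 90%
        print(f"Error rate: {error_rate:.0%}")  # 10%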

12. A model correctly predicts 120 positive sentiments out of 200 positive instances, but it also incorrectly
    predicts 40 negative sentiments as positive. Calculate the F1-score of the model.
    a.  0.8           b.  0.67
    c.  0.72          d.  0.82
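
    For question 12, the counts follow directly from the statement: TP = 120 (positives predicted correctly), FN = 200 − 120 = 80 (positives the model missed), and FP = 40 (negatives wrongly flagged as positive). A minimal Python sketch of the standard precision/recall/F1 formulas, using these values to check the result:

        # Counts from question 12 (TN is not needed for precision, recall, or F1).
        TP = 120        # positive instances predicted correctly
        FN = 200 - TP   # positive instances the model missed
        FP = 40         # negative instances wrongly predicted as positive

        precision = TP / (TP + FP)                          # 120 / 160 = 0.75
        recall = TP / (TP + FN)                             # 120 / 200 = 0.60
        f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

        print(round(f1, 2))  # 0.67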

B.  Fill in the blanks.
    1.  The evaluation technique that involves dividing the dataset into training and testing subsets is called ........................ .
    2.  In an ........................ scenario, the model performs poorly on both training and test datasets because it is too simple to capture the underlying patterns.
    3.  The F1-Score is calculated as the harmonic mean of ........................ and ........................ .
    4.  A good F1-Score means that you have low false positives and low ........................ negatives.
    5.  Accuracy is the evaluation metric used to measure the ........................ of predictions made by the model.
    6.  Overfitting occurs when the model is too ........................, performing well on training data but poorly on test data.
    7.  The confusion matrix consists of 4 different combinations: True Positive (TP), True Negative (TN), False Positive (FP), and ........................ .
