
12.  How is the relationship between model performance and accuracy described?        [CBSE Handbook]
                       a.  Inversely proportional                        b.  Not related
                       c.  Directly proportional                         d.  Randomly fluctuating

                 B.  Fill in the blanks.
                    1.  ………………………. occurs when a model performs well on the training data but poorly on test data because it memorizes
                       the training data instead of generalizing.
                    2.  ………………………. is prioritized over precision when false negatives are more costly than false positives.
                    3.  A Confusion Matrix is a ………………………. structure that helps in measuring the performance of an AI model using the
                       test data.
                    4.  The target variable in a confusion matrix has two values: Positive and ………………………..
                    5.  The rows (x-axis) in the confusion matrix represent the ………………………. values of the target variable.
                    6.  The F1 score is a number between 0 and 1 and is the harmonic mean of ………………………. and recall.
                    7.  The ideal scenario is called ………………………., where the model strikes the right balance between complexity and
                       simplicity, performing well on both training and test data.
                    8.  The Train-test split technique is used to evaluate the performance of the model by dividing the dataset into two
                       subsets: a ………………………. subset and a ………………………. subset.
                    9.  Low error signifies precise and ………………………. predictions.
                    10.  The model must achieve a balance of ………………………. and ………………………. to perform optimally in real-world scenarios.
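                 Hint for blanks 3–10: the minimal Python sketch below, using hypothetical confusion-matrix counts (the values of
                 TP, FP, FN, and TN are illustrative only, not from any real model), shows how Accuracy, Precision, Recall, and the
                 F1 score are computed from the confusion matrix.

                     # Hypothetical confusion-matrix counts (illustrative only)
                     TP, FP, FN, TN = 40, 10, 5, 45

                     accuracy  = (TP + TN) / (TP + TN + FP + FN)          # share of all predictions that are correct
                     precision = TP / (TP + FP)                           # of the predicted positives, how many were right
                     recall    = TP / (TP + FN)                           # of the actual positives, how many were found
                     f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of Precision and Recall

                     print(f"Accuracy: {accuracy:.2f}  Precision: {precision:.2f}  "
                           f"Recall: {recall:.2f}  F1: {f1:.2f}")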

                 C.  Match the following:
                    1.  Training Set                     a.  Used to evaluate model performance
                    2.  Testing Set                      b.  Used to train the model
                    3.  Overfitting                      c.  Model generalizes poorly to new data
                    4.  Underfitting                     d.  Model fails to capture underlying patterns
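                 Hint for matches 1–4: a short sketch, assuming scikit-learn is available, showing how the training subset fits the
                 model, how the testing subset evaluates it, and how a large gap between the two scores signals overfitting.

                     from sklearn.datasets import load_iris
                     from sklearn.model_selection import train_test_split
                     from sklearn.tree import DecisionTreeClassifier

                     X, y = load_iris(return_X_y=True)

                     # Divide the dataset into a training subset and a testing subset
                     X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

                     model = DecisionTreeClassifier().fit(X_train, y_train)   # the training set trains the model

                     # The testing set evaluates performance on unseen data; training accuracy
                     # far above testing accuracy suggests the model is overfitting
                     print("Training accuracy:", model.score(X_train, y_train))
                     print("Testing accuracy: ", model.score(X_test, y_test))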

                 D.  State whether these statements are true or false.
                    1.  Accuracy is always the best metric to evaluate a classification model.                    ……….……
                    2.  A high recall means the model correctly identifies most of the actual positive cases.     ……….……

                    3.  The training set is used to evaluate the model’s performance on unseen data.              ……….……
                    4.  The F1 Score is the harmonic mean of Precision and Recall.                                ……….……
                    5.  Overfitting occurs when a model performs well on the training data but poorly on new data.   ……….……

                                                  SECTION B (Subjective Type Questions)

                 A.  Short answer type questions.
                    1.  What does classification refer to in machine learning?
                    2.  Can we use Accuracy all the time?
                    3.  What is Precision in model evaluation, and why is it important in spam detection?
                    4.  How does the F1 Score help in evaluating a classification model?
                    5.  Why is it important to split the dataset into training and testing sets?
                 B.  Long answer type questions.

                    1.  Explain Accuracy, Precision, Recall, and F1 Score with examples. When should each metric be used?
                    2.  Which metric is more important—Recall or Precision? Explain in detail.
                    3.  Describe the concept of overfitting and underfitting. How can they impact model evaluation, and how can we prevent them?
                    4.  Why is model evaluation important in machine learning? Discuss different techniques used to evaluate classification
                       models.
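                 Hint for long answer question 4: one common technique is to summarize the evaluation metrics with scikit-learn.
                 The sketch below (the labels are made up for illustration) prints a confusion matrix and a per-class report of
                 Precision, Recall, and F1.

                     from sklearn.metrics import confusion_matrix, classification_report

                     # Hypothetical actual and predicted labels (illustrative only)
                     y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
                     y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

                     print(confusion_matrix(y_true, y_pred))       # rows: actual values, columns: predicted values
                     print(classification_report(y_true, y_pred))  # Precision, Recall, and F1 for each class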

