
              8.  The Train-test split technique is used to evaluate the performance of the model by dividing the dataset into two
                    subsets: a ………………………. subset and a ………………………. subset.
              9.  Low error signifies precise and ………………………. predictions.
             10.  The model must achieve a balance of ………………………. and ………………………. to perform optimally in real-world scenarios.
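
                  The train-test split described in Q8 can be sketched in plain Python. This is a minimal illustration only; in practice a library helper such as scikit-learn's `train_test_split` is normally used. The function name, ratio, and seed below are illustrative choices, not part of the exercise:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the dataset, then carve off the last test_ratio fraction
    as the testing subset (used to evaluate the model) and keep the
    rest as the training subset (used to train the model)."""
    rng = random.Random(seed)
    shuffled = data[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    split = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:split], shuffled[split:]

dataset = list(range(10))
train, test = train_test_split(dataset)
print(len(train), len(test))      # 8 2  (an 80/20 split of 10 records)
```

                  Shuffling before splitting matters: without it, any ordering in the dataset (for example, all positive cases listed first) would leak into one subset and bias the evaluation.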
              C.  Match the following:

                  1.  Training Set                     a.  Used to evaluate model performance
                  2.  Testing Set                      b.  Used to train the model
                  3.  Overfitting                      c.  Model generalizes poorly to new data
                  4.  Underfitting                     d.  Model fails to capture underlying patterns

              D.  State whether these statements are true or false.
                  1.  Accuracy is always the best metric to evaluate a classification model.                   ……….……
                  2.  A high recall means the model correctly identifies most of the actual positive cases.    ……….……
                  3.  The training set is used to evaluate the model’s performance on unseen data.             ……….……

                  4.  The F1 Score is the harmonic mean of Precision and Recall.                               ……….……
                  5.  Overfitting occurs when a model performs well on the training data but poorly on new data.   ……….……

                                               SECTION B (Subjective Type Questions)

              A.  Short answer type questions.
                  1.  What does classification refer to in machine learning?
                  2.  Can we use Accuracy all the time?
                  3.  What is Precision in model evaluation, and why is it important in spam detection?
                  4.  How does the F1 Score help in evaluating a classification model?
                  5.  Why is it important to split the dataset into training and testing sets?

              B.  Long answer type questions.
                  1.  Explain Accuracy, Precision, Recall, and F1 Score with examples. When should each metric be used?
                  2.  Which metric is more important: Recall or Precision? Explain in detail.
                  3.  Describe the concept of overfitting and underfitting. How can they impact model evaluation, and how can we prevent them?
                  4.  Why is model evaluation important in machine learning? Discuss different techniques used to evaluate classification models.
              C.  Competency-based/Application-based questions.                                 21st Century Skills  #Critical Thinking
                  1.  A credit scoring model is used to predict whether an applicant is likely to default on a loan (1) or not (0). Out of 1000
                    loan applicants:                                                                    [CBSE Handbook]
                     True Positives (TP): 90 applicants were correctly predicted to default on the loan.
                     False Positives (FP): 40 applicants were incorrectly predicted to default on the loan.
                     True Negatives (TN): 820 applicants were correctly predicted not to default on the loan.
                     False Negatives (FN): 50 applicants were incorrectly predicted not to default on the loan.

                     Calculate metrics such as accuracy, precision, recall, and F1-score.
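
                     The four metrics follow directly from the standard formulas applied to the counts given above. A short Python check (offered here as a worked verification, not part of the original exercise):

```python
# Confusion-matrix counts from the credit-scoring question
TP, FP, TN, FN = 90, 40, 820, 50

# Accuracy: fraction of all 1000 predictions that were correct
accuracy = (TP + TN) / (TP + TN + FP + FN)          # 910/1000 = 0.91

# Precision: of applicants predicted to default, how many actually did
precision = TP / (TP + FP)                          # 90/130 ≈ 0.692

# Recall: of applicants who actually defaulted, how many were caught
recall = TP / (TP + FN)                             # 90/140 ≈ 0.643

# F1 Score: harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.667

print(f"Accuracy:  {accuracy:.3f}")
print(f"Precision: {precision:.3f}")
print(f"Recall:    {recall:.3f}")
print(f"F1 Score:  {f1:.3f}")
```

                     Note how accuracy (0.91) looks far better than recall (≈ 0.64): because most applicants do not default, a model can score high accuracy while still missing many actual defaulters, which is why the other metrics matter here.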

                    248     Touchpad Artificial Intelligence (Ver. 3.0)-X