
Exercise




                                                       Solved Questions


                                               SECTION A (Objective Type Questions)

              A.  Tick (✓) the correct option.
                  1.  Why is it essential to evaluate a machine learning model using evaluation techniques such as train-test split?
                    a.  To increase  the complexity of the model
                    b.  To reduce the size of the training dataset
                    c.  To eliminate the need for testing data
                    d.  To assess how well the model performs on unseen data
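Question 1 turns on scoring a model with data it has never seen. A minimal Python sketch of a train-test split follows; the function name, 80/20 ratio, and seed are illustrative, not taken from this chapter:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle a dataset, then hold out a portion as unseen test data."""
    rng = random.Random(seed)          # fixed seed so the split is repeatable
    shuffled = data[:]                 # copy so the original list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

samples = list(range(10))
train, test = train_test_split(samples)
print(len(train), len(test))           # 8 training samples, 2 held out for testing
```

The held-out portion is never shown to the model during training, which is what makes its score an estimate of performance on unseen data.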

                  2.  Which of the following describes an overfitting scenario in model evaluation?
                    a.  The model performs poorly on both training and test data.
                    b.  The model performs well on both training and test data.
                    c.  The model performs well on training data but poorly on test data.
                    d.  The model performs poorly only on test data but memorizes random noise from the training data.

                  3.  What does a "perfect fit" represent in model evaluation?
                    a.  The ideal balance between complexity and generalisation
                    b.  High bias and low variance
                    c.  Low bias and high variance
                    d.  Overfitting the training data
                  4.  Which metric measures how many positive predictions made by the model are actually correct?
                    a.  Recall                                         b.  Precision
                    c.  Accuracy                                       d.  F1-Score
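The metrics in question 4 can be written as short formulas. The sketch below, with made-up counts of true positives (TP), false positives (FP), and false negatives (FN), shows that precision is the fraction of positive predictions that are actually correct:

```python
def precision(tp, fp):
    # Of all the positive predictions the model made, how many were correct?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all the actual positives, how many did the model find?
    return tp / (tp + fn)

def f1_score(p, r):
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r)

tp, fp, fn = 40, 10, 20                # illustrative counts, not from the book
print(precision(tp, fp))               # 40 / (40 + 10) = 0.8
print(recall(tp, fn))                  # 40 / (40 + 20) ≈ 0.667
```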

                  5.  Which term refers to the actual value being positive, but the model predicting it as negative?
                    a.  True Positive                                  b.  False Positive
                    c.  False Negative                                 d.  True Negative

                  6.  What is a False Positive (FP)?
                    a.  When the model incorrectly predicts a positive value when the actual value is negative
                    b.  When the model correctly predicts a positive value
                    c.  When the model incorrectly predicts a negative value when the actual value is positive
                    d.  When the model correctly predicts a negative value

                  7.  What does "Error" refer to in model evaluation?
                    a.  The difference between the model’s prediction and the actual outcome
                    b.  The total number of correct predictions
                    c.  The total number of false positives in the model
                    d.  The percentage of false negatives in the model
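Questions 5 to 7 all rest on comparing predictions with actual outcomes. A small Python sketch, using made-up labels, counts the four confusion-matrix cells and treats the mismatches (FP + FN) as the model's errors:

```python
actual    = [1, 1, 1, 0, 0, 0, 1, 0]   # illustrative ground-truth labels
predicted = [1, 0, 1, 0, 1, 0, 1, 0]   # illustrative model predictions

pairs = list(zip(actual, predicted))
tp = sum(a == 1 and p == 1 for a, p in pairs)  # True Positive: correct positive
fp = sum(a == 0 and p == 1 for a, p in pairs)  # False Positive: predicted positive, actually negative
fn = sum(a == 1 and p == 0 for a, p in pairs)  # False Negative: predicted negative, actually positive
tn = sum(a == 0 and p == 0 for a, p in pairs)  # True Negative: correct negative

error_rate = (fp + fn) / len(actual)   # fraction of predictions that differ from the actual outcome
print(tp, fp, fn, tn, error_rate)      # 3 1 1 3 0.25
```

Here one actual positive was predicted as negative (the false negative of question 5) and one actual negative was predicted as positive (the false positive of question 6); together they make up the error in question 7.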





                    156     Artificial Intelligence Play (Ver 1.0)-X