
B.  Long answer type questions.

                    1.  Explain the concept of a confusion matrix and its components. How is it used to evaluate a classification model?
                   Ans.  A confusion matrix is a performance evaluation tool used in machine learning to summarise the performance of a
                        classification model. It is a tabular representation that compares the actual labels (true outcomes) with the predicted
                        labels (model predictions). The table is made up of the 4 different combinations of predicted and actual values, arranged
                        in the form of a 2×2 matrix.
                        To understand the confusion matrix, consider the following terms (a short Python sketch after this list shows how
                        these four counts are obtained from actual and predicted labels):
                        •  Positive: The prediction is positive for the scenario. For example, the model predicts that there will be snowfall.
                        •  Negative: The prediction is negative for the scenario. For example, the model predicts that there will be no snowfall.
                        •   True Positive: The predicted value matches the actual value, i.e., the actual value was positive and the model
                          predicted a positive value.
                        •   True Negative: The predicted value matches the actual value, i.e., the actual value was negative and the model
                          predicted a negative value.
                        •   False Positive (Type 1 error): The predicted value was falsely predicted, i.e., the actual value was negative but the
                          model predicted a positive value.
                        •   False Negative (Type 2 error): The predicted value was falsely predicted, i.e., the actual value was positive but the
                          model predicted a negative value.
                        A classification model is evaluated by filling these four counts into the matrix: a good model has high True Positive
                        and True Negative counts and low False Positive and False Negative counts, and metrics such as Accuracy, Precision,
                        Recall and F1 Score are calculated from these counts.
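                        The following is a minimal Python sketch of how the four cells are counted from a list of actual and predicted
                        labels. The snowfall labels (1 = snowfall, 0 = no snowfall) are made-up values used only for illustration.

                        actual    = [1, 0, 1, 1, 0, 0, 1, 0]    # true outcomes (made-up data)
                        predicted = [1, 0, 0, 1, 1, 0, 1, 0]    # model predictions (made-up data)

                        tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)   # True Positive
                        tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)   # True Negative
                        fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)   # False Positive (Type 1 error)
                        fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)   # False Negative (Type 2 error)

                        print("                 Predicted: Yes   Predicted: No")
                        print(f"Actual: Yes          {tp}                {fn}")
                        print(f"Actual: No           {fp}                {tn}")

                        Running the sketch on these example lists prints TP = 3, TN = 3, FP = 1 and FN = 1.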

                     2.  List the different evaluation models.
                   Ans.  Evaluation techniques involve assessing a machine learning model’s performance on both the training data and the
                        test data. These evaluation models are described below (a short code sketch after this list shows how comparing
                        training and testing performance reveals each case):
                        •   Overfitting Model: The model (red curve) fits the training data perfectly, including noise, but performs poorly
                          on the testing data, leading to poor generalisation. In overfitting, the model is too complex and performs well on
                          training data but poorly on test data. It has low bias and high variance. The model memorises the training data but
                          struggles to generalise to new, unseen data.
                        •   Underfitting Model: The model (purple line) is too simplistic, failing to capture the pattern in both the training and
                          testing data. It has high bias and low variance. The model fails to capture the underlying patterns in the data.
                        •   Perfect Fit Model: The model (green curve) balances complexity and generalisation, fitting the training data well
                          and performing well on the testing data. It performs well on both training and test data and generalises effectively
                          to new data.
                        •   Model Selection: Splitting the data helps compare models and choose the best one based on performance on the
                          testing set.
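                        As an illustration, the sketch below fits polynomials of three different degrees to the same small synthetic data set
                        and compares the training and testing errors. The sine-shaped data, the 30/10 split and the chosen degrees (1, 5
                        and 12) are assumptions made only for this example, not values from the chapter; the sketch assumes the NumPy
                        library is available.

                        import numpy as np

                        rng = np.random.default_rng(0)
                        x = np.linspace(-3, 3, 40)
                        y = np.sin(x) + rng.normal(0, 0.2, size=x.size)      # underlying pattern plus noise

                        # Randomly keep 30 points for training and 10 for testing.
                        idx = rng.permutation(x.size)
                        x_train, y_train = x[idx[:30]], y[idx[:30]]
                        x_test,  y_test  = x[idx[30:]], y[idx[30:]]

                        for degree, label in [(1, "Underfitting (straight line)"),
                                              (5, "Good fit"),
                                              (12, "Overfitting (memorises noise)")]:
                            coeffs = np.polyfit(x_train, y_train, degree)            # fit on training data only
                            train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
                            test_mse  = np.mean((np.polyval(coeffs, x_test)  - y_test)  ** 2)
                            print(f"{label:32s} train error = {train_mse:.3f}, test error = {test_mse:.3f}")

                        An underfitted model shows a high error on both sets, an overfitted model shows a very low training error but a
                        much higher testing error, and a well-fitted model keeps the two errors low and close together.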
                    3.  What is the purpose of using Precision and Recall together when evaluating a classification model, and how does the
                       F1 Score help in balancing them?

                   Ans.  Precision and Recall are used together to evaluate how well a model handles both false positives and false negatives.
                       Precision focuses on how many of the predicted positive cases were actually positive, while Recall measures how many
                       of the actual positive cases were correctly identified by the model.
                        The F1 Score is the harmonic mean of Precision and Recall, calculated as F1 = 2 × (Precision × Recall) / (Precision + Recall).
                        It helps balance these two metrics by giving a single value that combines both, making it particularly useful when the
                        costs of false positives and false negatives are important and need to be minimised equally. F1 Score is often preferred
                        when there is a need to strike a balance between Precision and Recall rather than focusing on one over the other.
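                        Below is a minimal Python sketch of these formulas. The confusion-matrix counts (40, 10 and 20) are made-up
                        numbers chosen only to demonstrate the calculation.

                        tp, fp, fn = 40, 10, 20                 # made-up counts for illustration

                        precision = tp / (tp + fp)              # of the predicted positives, how many were actually positive
                        recall    = tp / (tp + fn)              # of the actual positives, how many were correctly identified
                        f1_score  = 2 * precision * recall / (precision + recall)   # harmonic mean of Precision and Recall

                        print(f"Precision = {precision:.2f}")   # 0.80
                        print(f"Recall    = {recall:.2f}")      # 0.67
                        print(f"F1 Score  = {f1_score:.2f}")    # 0.73

                        Because the harmonic mean is pulled towards the smaller of the two values, the F1 Score stays high only when
                        Precision and Recall are both high.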
                    4.  What do you understand by accuracy and error in evaluation metrics?
                   Ans.  Accuracy is the evaluation metric that measures the proportion of the model’s total predictions that are correct;
                        it tells how close the predictions are to the true values. The accuracy and the performance of the model are directly
                        proportional: the better the model performs, the higher the accuracy of its predictions.
                        Error is the opposite of accuracy. It measures the proportion of predictions that are incorrect, so Error = 1 − Accuracy
                        (or 100% − Accuracy when expressed as a percentage). A lower error therefore indicates a better-performing model.
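                        A minimal Python sketch of both metrics from the four confusion-matrix counts follows. The counts (45, 40, 5 and 10)
                        are made-up numbers used only to show the calculation.

                        tp, tn, fp, fn = 45, 40, 5, 10          # made-up counts for illustration

                        total    = tp + tn + fp + fn
                        accuracy = (tp + tn) / total            # fraction of predictions that are correct
                        error    = 1 - accuracy                 # equivalently (fp + fn) / total

                        print(f"Accuracy = {accuracy:.2%}")     # 85.00%
                        print(f"Error    = {error:.2%}")        # 15.00%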
