Step 3: Now calculate the accuracy from this matrix.
                            Predicted Values
                            Yes            No
Actual Values    Yes        TP = 900       FN = 0
                 No         FP = 100       TN = 0
Classification accuracy = (Correct predictions / Total predictions) × 100
                        = ((TP + TN) / (TP + TN + FP + FN)) × 100
                        = ((900 + 0) / (900 + 0 + 100 + 0)) × 100
                        = (900 / 1000) × 100
                        = 90%
So, the faulty model is showing an accuracy of 90%. Does this make sense? Clearly not: with unbalanced data,
accuracy can be misleading, so we should use other metrics such as Precision, Recall or F1 score.
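The same calculation can be checked with a few lines of Python. This is a minimal sketch, assuming the TP, FN, FP and TN counts from the worked example above:

# Confusion-matrix counts from the snowfall example above
TP, FN = 900, 0    # Actual = Yes: correct and missed predictions
FP, TN = 100, 0    # Actual = No: false alarms and correct rejections

# Classification accuracy = (TP + TN) / (TP + TN + FP + FN) × 100
accuracy = (TP + TN) / (TP + TN + FP + FN) * 100
print(f"Classification accuracy = {accuracy:.0f}%")   # prints 90%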
Precision
Precision is the ratio of True Positive cases to all predicted Positive cases.
Precision = No. of correct positive predictions / Total no. of positive predictions
          = TP / (TP + FP)
Total positive predictions = True Positive (TP) + False Positive (FP)
In the above snowfall prediction example:
If the model always predicts Positive, it will forecast snowfall every day irrespective of the reality. Its
predictions would then consist only of Positive cases, which are True Positives (Prediction = Yes and
Actual = Yes) and False Positives (Prediction = Yes and Actual = No). Here the residents would always be anxious
about whether there will be snowfall or not, and would keep verifying whether each prediction turns out TRUE or FALSE.
Importantly, if False Positives are significantly higher than True Positives, then Precision will be low. With so
many false alarms, the residents might become laid back and stop checking the predictions often, assuming that
the snowfall will not happen.
Thus, the Precision of the model is an important aspect of evaluation. A higher Precision means that there are
fewer False Positive cases relative to True Positive cases.
So, if the model is 100% precise, it means that whenever the model predicts a snowfall (True Positive), the snowfall
would definitely happen. There can still be rare situations where the model fails to predict a snowfall that actually
occurs, which is a case of False Negative. The Precision value is not affected by this, as False Negatives are not
considered in its calculation. This raises a question:
Is Precision alone a good parameter for evaluating the performance of the model?
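To see this concretely, here is a minimal Python sketch, again assuming the counts of the faulty always-"Yes" model above; note that FN does not appear in the formula at all:

# Counts from the faulty model that predicts "Yes" for all 1000 days
TP, FP = 900, 100   # every day is predicted "Yes"
FN = 0              # the model never predicts "No"

# Precision = TP / (TP + FP) × 100
precision = TP / (TP + FP) * 100
print(f"Precision = {precision:.0f}%")   # prints 90%

# FN is absent from the formula, so missed snowfalls (False Negatives)
# would leave Precision unchanged; this is why Precision alone is not
# enough to judge the model.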

