Step 2 Construct the confusion matrix.
So, the faulty model will predict Yes for all the 1000 input records.
Consider Yes as the positive class and No as the negative class. Construct the confusion matrix from the
Actual vs Predicted table.
Actual vs Predicted table:

    Predicted Value        Actual Value
    Yes = 1000             Yes = 900
    No  = 0                No  = 100

Confusion matrix (cells to be filled in Step 3):

                             Predicted Values
                             Yes         No
    Actual Values    Yes     TP =        FN =
                     No      FP =        TN =
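The four cells can also be counted programmatically. Below is a minimal Python sketch; the two lists are a hypothetical encoding of the 1000 records (the ordering is assumed purely for illustration).

    # Minimal sketch: count the four confusion-matrix cells for the
    # faulty snowfall model. 900 actual Yes and 100 actual No records;
    # the faulty model predicts Yes for every input.
    actual    = ["Yes"] * 900 + ["No"] * 100
    predicted = ["Yes"] * 1000

    TP = sum(a == "Yes" and p == "Yes" for a, p in zip(actual, predicted))
    FN = sum(a == "Yes" and p == "No"  for a, p in zip(actual, predicted))
    FP = sum(a == "No"  and p == "Yes" for a, p in zip(actual, predicted))
    TN = sum(a == "No"  and p == "No"  for a, p in zip(actual, predicted))

    print(TP, FN, FP, TN)   # prints: 900 0 100 0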
Step 3 Now calculate the accuracy from this matrix.
                             Predicted Values
                             Yes         No
    Actual Values    Yes     TP = 900    FN = 0
                     No      FP = 100    TN = 0

Classification accuracy = (Correct predictions / Total predictions) × 100
                        = ((TP + TN) / (TP + TN + FP + FN)) × 100
                        = ((900 + 0) / (900 + 0 + 100 + 0)) × 100
                        = 90%
So, the faulty model shows an accuracy of 90%. Does this make sense? It does not: the accuracy looks high only because 90% of the inputs belong to the Yes class, while the model gets every No case wrong. Therefore, in cases of unbalanced data, we should use other metrics such as Precision, Recall or F1 score.
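The accuracy calculation can be verified with a few lines of Python (a minimal sketch using the cell values from the matrix above):

    # Minimal sketch: classification accuracy from the confusion matrix
    # of the faulty model (TP, FN, FP, TN values from Step 3).
    TP, FN, FP, TN = 900, 0, 100, 0
    accuracy = (TP + TN) / (TP + TN + FP + FN) * 100
    print(accuracy)   # prints: 90.0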
Precision
Precision is the ratio of True Positive cases to All predicted positive cases.
No. of correct positive predictions TP
Precision = ⇒ =
Total no. of positive predictions TP+FP
Total positive predictions = True Positive (TP) + False Positive (FP)
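As a quick illustration, here is a minimal Python sketch that applies this formula to the faulty model's confusion matrix from Step 3:

    # Minimal sketch: Precision = TP / (TP + FP) for the faulty model.
    TP, FP = 900, 100
    precision = TP / (TP + FP)
    # Even the faulty all-Yes model scores 0.9 here, simply because
    # only 10% of its positive predictions turn out to be wrong.
    print(precision)   # prints: 0.9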
In the above snowfall prediction example:
If the model always predicts Positive, it will forecast a snowfall irrespective of the reality. Its predictions would then include both True Positives (Prediction = Yes and Actual = Yes) and False Positives (Prediction = Yes and Actual = No). Here, the residents would always be anxious to find out whether there will be a snowfall and would keep verifying whether the prediction turned out to be TRUE or FALSE.
Importantly, if the False Positives are significantly higher than the True Positives, the Precision will be low. With too many false alarms, the residents might become laid back and stop checking the predictions, assuming that the snowfall will not happen.
Thus, the Precision of the model is an important aspect of its evaluation. A high Precision means that the False Positive cases are few compared to the True Positive cases.
So, if the model is 100% precise, it means that whenever the model predicts a snowfall (True Positive), the snowfall would definitely happen. There can still be rare situations where the model fails to predict a snowfall that actually occurs, which is a case of False Negative. Such cases do not affect the Precision value, as False Negatives are not part of the Precision calculation. This raises a question: Is Precision alone a good parameter for evaluating the performance of the model?

