In the above scenario of Board Exam prediction:
Suppose the model always predicts Positive, that is, it says there will ALWAYS BE A BOARD EXAM irrespective of the reality. Its predictions would then cover all the Positive conditions: True Positive (Prediction = Yes and Reality = Yes) and False Positive (Prediction = Yes and Reality = No). Here students would always be on their toes, staying vigilant to find out whether the Board Exams will actually happen, and would keep verifying whether the prediction is TRUE or FALSE.
Importantly, if the Precision is low, that is, if there are many False Positive predictions, the students might become laid back and stop checking so often, assuming that the Board Exams will not happen. That is why the Precision of the model is an important aspect of evaluation. A higher Precision means that the False Positive cases are few compared to the True Positive cases.
So, if the model is 100% precise, then whenever the model says the exams are happening (True Positive), the exams would definitely happen. There can still be rare situations where the model fails to predict the exams even though the exams do take place (False Negative). In such cases the Precision value is not affected, because False Negatives are not considered in the Precision calculation. This raises a question: Is Precision a good parameter for measuring the performance of the model?
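To make the idea concrete, here is a minimal Python sketch that computes Precision from counts of True Positives and False Positives; the numbers below are entirely hypothetical, chosen only for illustration:

# Precision = TP / (TP + FP)
# Hypothetical counts for the Board Exam predictor (illustrative only)
true_positives = 40   # model said "exam" and an exam happened
false_positives = 10  # model said "exam" but no exam happened

precision = true_positives / (true_positives + false_positives)
print(f"Precision = {precision:.2f}")  # 0.80 -> 80% of "exam" predictions were right

Notice that False Negatives appear nowhere in this calculation, which is exactly why Precision alone cannot tell us whether the model misses real events.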
Task
Now, find out: is good Precision equivalent to good model performance?
_______________________________________________________________________________________________________________________________
_______________________________________________________________________________________________________________________________
_______________________________________________________________________________________________________________________________
_______________________________________________________________________________________________________________________________
_______________________________________________________________________________________________________________________________
Recall/Sensitivity
Recall is defined as the fraction of positive cases that are correctly identified. It focuses on the cases that are actually positive in reality, i.e., it measures how well our model identifies the True Positives. Its formula is:
Recall = True Positive / (True Positive + False Negative)

Recall = TP / (TP + FN)
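As with Precision, a short Python sketch shows how Recall follows from this formula; the counts are again hypothetical, for illustration only:

# Recall = TP / (TP + FN)
# Hypothetical counts for the Board Exam predictor (illustrative only)
true_positives = 40   # exam predicted and the exam happened
false_negatives = 20  # exam NOT predicted, but the exam happened anyway

recall = true_positives / (true_positives + false_negatives)
print(f"Recall = {recall:.2f}")  # 0.67 -> the model caught 67% of the real exams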
Which metric is more important: Recall or Precision?
• Cases where the cost of a False Negative > the cost of a False Positive:
✶ Can be riskier and more unfavourable: Nobody expected the Board Exams because the model made no such prediction, yet the Board Exams were still conducted.
✶ Can be costly and harmful: When no rain is predicted, the farmers may delay the harvest, thinking that their wheat crop, which is ready, can be harvested in a day or two. If the model that was supposed to predict the rain fails to do so, the whole crop is spoiled. A comparison of the two metrics on such a case is sketched just after this list.
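The rain example can be expressed in a small Python sketch, using entirely hypothetical counts, where the model makes very few False Positive errors (high Precision) but misses many real rain events (low Recall), which is exactly the dangerous case described above:

# Hypothetical confusion-matrix counts for the rain predictor (illustrative only)
tp = 5    # rain predicted and it rained
fp = 1    # rain predicted but it stayed dry
fn = 15   # no rain predicted, but it rained (crop at risk!)

precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(f"Precision = {precision:.2f}")  # 0.83 -> the predictions look trustworthy
print(f"Recall    = {recall:.2f}")     # 0.25 -> but most real rain events are missed

A model like this would seem reliable to anyone judging it by Precision alone, yet it fails in precisely the situations where a False Negative is most costly.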

