Hey, I was just playing around with this and had a couple of questions:
Why do we extend the recall and precision to 0.0 and 1.0? Is the assumption that you will always have values very close to these numbers, and that's the issue here?
Could we not just use the minimum and maximum recall for np.linspace(0, 1, 101)?
Having had a few conversations and looked around, we've decided to keep the current method. Padding precision and recall with 0 and 1 seems to be the standard approach when implementing the COCO 101-point mAP algorithm, and we'd rather not deviate from that.
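For reference, the padding-plus-101-point scheme can be sketched roughly like this (the function name, the precision-envelope step, and the use of np.interp are illustrative assumptions, not the exact supervision implementation):

```python
import numpy as np

def average_precision_101(recall, precision):
    """Sketch of a COCO-style 101-point interpolated AP.

    `recall` and `precision` are parallel 1-D arrays sorted by
    ascending recall (hypothetical inputs, not the supervision API).
    """
    # Pad the curve endpoints: recall runs 0 -> 1, precision 1 -> 0.
    # This is the convention being kept here.
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([1.0], precision, [0.0]))

    # Make precision monotonically non-increasing (precision envelope).
    precision = np.maximum.accumulate(precision[::-1])[::-1]

    # Sample the padded curve at 101 evenly spaced recall levels.
    recall_levels = np.linspace(0, 1, 101)
    interpolated = np.interp(recall_levels, recall, precision)
    return interpolated.mean()
```

Because the endpoints are always 0 and 1 after padding, every recall level in np.linspace(0, 1, 101) falls inside the curve's domain, which is why the fixed grid works without clamping to the observed min/max recall.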
Search before asking
Bug
I have a precision and recall curve as below.
So, I can plot the PR curve as below.
However, the
compute_average_precision
method always pads 0.0
and 1.0
onto the precision and recall vectors.supervision/supervision/metrics/detection.py
Lines 727 to 749 in 4729e20
I think these extensions are dangerous and make the PR curve inaccurate. Below is the curve of
interpolated_precision
against interpolated_recall_levels
. The bent part at the bottom right of the plot looks unnatural and affects the accuracy of the average precision calculation.
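To make the bent segment concrete, here is a small sketch (the arrays are made-up values, and the linear interpolation onto np.linspace(0, 1, 101) is an assumption about how the padded curve gets sampled, not the exact supervision internals):

```python
import numpy as np

# Hypothetical PR curve: the detector's last operating point is
# recall 0.6 at precision 0.9 -- it never reaches recall 1.0.
recall = np.array([0.2, 0.4, 0.6])
precision = np.array([0.95, 0.92, 0.9])

# The padding convention appends the point (recall=1.0, precision=0.0),
# so interpolation draws a straight line from (0.6, 0.9) down to (1.0, 0.0).
padded_recall = np.concatenate(([0.0], recall, [1.0]))
padded_precision = np.concatenate(([1.0], precision, [0.0]))

levels = np.linspace(0, 1, 101)
interpolated = np.interp(levels, padded_recall, padded_precision)

# Every level past recall 0.6 now contributes a value on that artificial
# descending segment, even though the detector produced no such points.
```

This is the "bent part of the right bottom" in the plot: the tail from recall 0.6 to 1.0 is pure padding, and averaging over it pulls the AP down (or up, depending on the curve) relative to an average over only the observed recall range.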
Environment
No response
Minimal Reproducible Example
Additional
No response
Are you willing to submit a PR?