Is there a "soft" version of the ye-olde precision and recall metrics? Precision (and recall) are defined given binary decisions, i.e.
precision = sum(marked_as_positive * is_positive) / sum(marked_as_positive)
Where marked_as_positive equals 0 or 1. Has anyone encountered a version that uses probabilities instead of binary decisions, i.e.
sum(P(is_positive) * is_positive) / sum(P(is_positive))
Where P(is_positive) is between 0 and 1 and represents the probability that a given sample is positive, as assigned by some classifier?
I'm aware of log loss, AUC, and similar "soft" metrics, but for some reason I've never encountered the one above - which makes me suspect that there's something very wrong with using it.
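For concreteness, here is a minimal sketch of the metric I mean (the function names and the symmetric "soft recall" variant are my own; `p` holds the classifier's probabilities and `y` the binary ground-truth labels):

```python
def soft_precision(p, y):
    # Probability-weighted analogue of precision:
    # expected true-positive mass over total predicted-positive mass.
    return sum(pi * yi for pi, yi in zip(p, y)) / sum(p)

def soft_recall(p, y):
    # Symmetric analogue for recall:
    # expected true-positive mass over the number of actual positives.
    return sum(pi * yi for pi, yi in zip(p, y)) / sum(y)

p = [0.9, 0.2, 0.7, 0.1]  # P(is_positive) from some classifier
y = [1, 0, 1, 0]          # true labels

print(soft_precision(p, y))  # (0.9 + 0.7) / 1.9 ≈ 0.842
print(soft_recall(p, y))     # (0.9 + 0.7) / 2.0 = 0.8
```

Note that with hard 0/1 "probabilities" both functions reduce exactly to ordinary precision and recall.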