While building a classifier for a somewhat imbalanced data set, I ran into the question of how to measure its performance. One of the first things I thought of was to take the average of sensitivity and specificity, i.e. of the true positive and true negative rates:
$$\frac{\text{true positive rate} + \text{true negative rate}}{2}$$
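In code, the quantity I have in mind is just this (a quick NumPy sketch; the function name and the 0/1 label coding are my own):

```python
import numpy as np

def mean_tpr_tnr(y_true, y_pred):
    """Average of the true positive rate and the true negative rate,
    for binary labels coded as 0/1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tpr = tp / (tp + fn)   # sensitivity / recall
    tnr = tn / (tn + fp)   # specificity
    return (tpr + tnr) / 2
```

For example, `mean_tpr_tnr([1, 1, 0, 0, 0, 0], [1, 0, 0, 0, 1, 0])` gives $(0.5 + 0.75)/2 = 0.625$.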
It seems quite an intuitive measure to me. Moreover, one can take a weighted average depending on how important the prediction rate for each class is, and it generalizes in a straightforward way to multiple classes (sketched in code below). One can also incorporate it into a 2-class cost matrix by setting the off-diagonal entries to:
$$\frac{N-N_1}{N}, \qquad \frac{N-N_2}{N}$$
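To make the weighted / multi-class version and the cost matrix concrete, here is another sketch (again NumPy, with my own function names; I'm assuming $N$ is the total sample count, $N_1$, $N_2$ are the class sizes, and the weights sum to 1):

```python
import numpy as np

def mean_per_class_rate(y_true, y_pred, weights=None):
    """Weighted mean of the per-class correct-prediction rates.
    With two classes and equal weights this is (TPR + TNR) / 2."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    rates = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
    if weights is None:
        weights = np.full(len(classes), 1.0 / len(classes))  # unweighted average
    return float(np.dot(weights, rates))

def cost_matrix(y_true):
    """2-class cost matrix with zero diagonal and off-diagonal entries
    (N - N_1)/N and (N - N_2)/N, so errors on the rarer class cost more.
    Rows are assumed to index the true class, columns the predicted class."""
    y_true = np.asarray(y_true)
    n1, n2 = np.sum(y_true == 0), np.sum(y_true == 1)
    n = n1 + n2
    return np.array([[0.0, (n - n1) / n],
                     [(n - n2) / n, 0.0]])
```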
However, I could not find any reference to such a measure online; there appears to be no name for it! There are all sorts of combinations of other measures, but not this one. I was wondering if someone could point me to any discussion of it.
Edit: I had originally misunderstood the definition of specificity as $\frac{TN}{TN+FN}$, when it is actually defined as $\frac{TN}{TN+FP}$. I have corrected the title. As far as I can find, there is no name for "true-negative rate".
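For concreteness, here is the difference between the two ratios on a toy confusion matrix (the numbers are invented purely for illustration):

```python
# Toy confusion matrix (invented numbers):
#                 predicted +   predicted -
# actual +            TP = 8        FN = 2
# actual -            FP = 5        TN = 85
TP, FN, FP, TN = 8, 2, 5, 85

specificity = TN / (TN + FP)   # 85 / 90 ≈ 0.944, the true negative rate
mistaken    = TN / (TN + FN)   # 85 / 87 ≈ 0.977, the ratio I had originally used
```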