Questions tagged [accuracy]

Accuracy of an estimator is the degree of closeness of the estimates to the true value. For a classifier, accuracy is the proportion of correct classifications. (This second usage is not good practice. See the tag wiki for a link to further information.)

Accuracy of an estimator is the degree of closeness of the estimates to the estimand. Accuracy of a forecast rule is the degree of closeness of the forecasts to the corresponding realizations. Accuracy can be contrasted with precision; accuracy is about bias while precision is about variability.

Given a set of estimates or forecasts, the estimator or forecast rule that generated them can be said to be accurate if the average of the set is close to the estimand or the realization, respectively. Meanwhile, the estimator or forecast rule can be said to be precise if the values are close to each other (little scatter). The two concepts are independent of each other, so a particular estimator or forecast rule may be accurate, precise, both, or neither. Although the two words, precision and accuracy, can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.

For example, lack of accuracy (large bias) may result from a systematic error. Eliminating the systematic error improves accuracy but does not change precision. Meanwhile, lack of precision (large variability) may result from a small sample on which the estimation or forecasting is based. Increasing the sample size alone may improve precision but not accuracy.
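A small simulation makes the contrast concrete. This is a minimal sketch, with the estimand, the size of the systematic error, and the noise levels all chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0                      # the estimand

# Biased but precise: a systematic error shifts every estimate by +0.5.
biased = true_value + 0.5 + rng.normal(0.0, 0.05, size=1000)

# Unbiased but imprecise: centred on the estimand, widely scattered.
imprecise = true_value + rng.normal(0.0, 2.0, size=1000)

for name, est in [("biased, precise", biased), ("unbiased, imprecise", imprecise)]:
    print(f"{name}: bias = {est.mean() - true_value:+.3f}, sd = {est.std(ddof=1):.3f}")

# Averaging 100 draws per estimate (a larger sample) shrinks the scatter of
# the imprecise estimator, but it would leave the bias of the biased one intact.
means = true_value + rng.normal(0.0, 2.0, size=(1000, 100)).mean(axis=1)
print(f"mean of 100 draws: sd = {means.std(ddof=1):.3f}")
```

Removing the +0.5 offset (the systematic error) fixes the bias without touching the scatter; increasing the sample size shrinks the scatter without touching the bias.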

Statistical literature may prefer the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision.

(Loosely based on Wikipedia's article "Accuracy and precision".)

Accuracy is not a good performance measure for classifiers.
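One way to see why: two classifiers can produce identical hard classifications, and hence identical accuracy, while differing sharply in the quality of their probability estimates; a proper scoring rule such as log loss exposes the difference. A minimal sketch, with the class balance and probabilities invented for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score, log_loss

y = np.array([0] * 95 + [1] * 5)       # imbalanced: 95% negative class

# Both models output the probability of the positive class; thresholding at
# 0.5 gives the same hard predictions (all 0), hence the same accuracy.
p_calibrated = np.full(100, 0.05)      # honest base-rate probabilities
p_overconfident = np.full(100, 1e-6)   # absurdly certain probabilities

for name, p in [("calibrated", p_calibrated), ("overconfident", p_overconfident)]:
    acc = accuracy_score(y, (p >= 0.5).astype(int))
    print(f"{name}: accuracy = {acc:.2f}, log loss = {log_loss(y, p):.3f}")
```

Accuracy cannot tell the two models apart; the log loss can.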

830 questions
6 votes · 3 answers

Accuracy of model lower than no-information rate?

I have a dataset with a predicted variable that has two classes: true and false. 99.99% of the values are in the false class. In this case, the no-information rate is 99.99%. So, any model that I build needs to have an accuracy higher than the no-information…
add787 • 223
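For context, the no-information rate is just the relative frequency of the majority class, and the usual check is a one-sided binomial test of whether the observed accuracy exceeds it (this is what R's caret package reports as "P-Value [Acc > NIR]"). A rough sketch with invented counts:

```python
from scipy.stats import binomtest

n = 100_000          # invented evaluation-set size
nir = 0.9999         # no-information rate: frequency of the majority class
correct = 99_992     # invented number of correct predictions

# One-sided test of whether accuracy exceeds the no-information rate.
result = binomtest(correct, n, p=nir, alternative="greater")
print(f"accuracy = {correct / n:.5f}, NIR = {nir}, p-value = {result.pvalue:.3f}")
```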
3 votes · 2 answers

Measures of accuracy for discrete variables

I am testing the accuracy of discrete variable prediction (>= 2 possible outcomes). I've seen things like using a confusion matrix or ROC curve for binary outcomes, but not much for > 2 outcome variables. What are good measures of accuracy for…
sma • 233
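Most binary tools do generalize: the confusion matrix is defined for any number of classes, and chance-corrected or averaged summaries are available. A short sketch with invented three-class labels, using scikit-learn:

```python
from sklearn.metrics import confusion_matrix, cohen_kappa_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]   # invented 3-class labels
y_pred = [0, 2, 2, 2, 1, 0, 1, 1, 0, 2]

print(confusion_matrix(y_true, y_pred))                 # K x K table of counts
print("kappa   :", cohen_kappa_score(y_true, y_pred))   # chance-corrected agreement
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```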
1 vote · 1 answer

Accuracy measure for model which sometimes produces no prediction

I have a model which produces no output for some inputs. What's a reasonable way to measure the performance of the model against a data set, taking the "missing output" into consideration? And is there a sane way to compare this performance to a…
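One common decomposition is to report coverage and accuracy on the answered cases separately, optionally alongside a pessimistic overall accuracy that counts every abstention as an error. A sketch with hypothetical predictions, using None to mark "no output":

```python
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, None, 1, None, 0]   # None = model produced no prediction

answered = [(t, p) for t, p in zip(y_true, y_pred) if p is not None]
coverage = len(answered) / len(y_true)
acc_answered = sum(t == p for t, p in answered) / len(answered)
acc_pessimistic = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(f"coverage = {coverage:.2f}")                      # fraction with an output
print(f"accuracy | answered = {acc_answered:.2f}")
print(f"accuracy, abstention = error: {acc_pessimistic:.2f}")
```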
0 votes · 1 answer

How do I compute the accuracy in groups?

The paper "Assessment of Deep Generative Models for High-Resolution Synthetic Retinal Image Generation of Age-Related Macular Degeneration" performed an experiment, and the result was that retinal specialists could not distinguish real from synthetic images,…
WXJ96163 • 297
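Per-group accuracy is just the mean of a correctness indicator within each group. A pandas sketch with hypothetical column names (rater, truth, response):

```python
import pandas as pd

# Hypothetical per-trial data: which specialist judged, the truth, the response.
df = pd.DataFrame({
    "rater":    ["A", "A", "A", "B", "B", "B"],
    "truth":    ["real", "synthetic", "real", "synthetic", "real", "synthetic"],
    "response": ["real", "real", "real", "synthetic", "synthetic", "synthetic"],
})

df["correct"] = df["truth"] == df["response"]
print(df.groupby("rater")["correct"].mean())   # accuracy within each group
```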
0 votes · 0 answers

Compare two performance scores, each with a different baseline

Once more I rushed into running a promising experiment without giving enough thought to how I would analyze the data. Now that I am finished, I have two conditions for which I'd like to compare accuracy scores, and as it turns out, each of the…
userE • 208
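One way to put scores with different chance baselines on a common footing, before any formal test, is to chance-correct each in the style of Cohen's kappa, so that 0 means "at baseline" and 1 means "perfect". A sketch with invented numbers:

```python
def chance_corrected(acc: float, baseline: float) -> float:
    """Kappa-style correction: 0 = at the chance baseline, 1 = perfect."""
    return (acc - baseline) / (1.0 - baseline)

# Invented accuracies with condition-specific chance baselines: the raw
# accuracies differ, but the chance-corrected scores are both 0.60.
print(chance_corrected(acc=0.80, baseline=0.50))   # condition 1
print(chance_corrected(acc=0.70, baseline=0.25))   # condition 2
```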