
Is there a word that means the 'inverse of variance'? That is, if $X$ has high variance, then $X$ has low $\dots$? I'm not interested in a near antonym (like 'agreement' or 'similarity'), but in a word that specifically means $1/\sigma^2$.

Hugh
  • Agreement and similarity are in any case pretty much preempted, at least in formal definitions, for pairwise and other comparisons. However, that doesn't rule out informal talk, e.g. "you can see from the low variance that different measurements tend to agree". – Nick Cox Nov 25 '15 at 11:41
  • I added the [bayesian] tag since, as you can see from my answer and the comments, the answer is closely related to Bayesian statistics, and it will be easier to find tagged like this. – Tim Dec 02 '15 at 08:33

1 Answer


$1/\sigma^2$ is called the precision. You will often see it in manuals for Bayesian software such as BUGS and JAGS, where the normal distribution is parameterized by its precision rather than its variance. The parameterization is popular because the gamma distribution is a conjugate prior for the precision of a normal distribution, as noted by Kruschke (2014) and @Scortchi.
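A minimal JAGS sketch of this parameterization (the data vector y, the sample size N, and the particular vague priors are illustrative assumptions, not anything the answer prescribes):

    model {
      # Likelihood: in JAGS, dnorm(mean, tau) takes a precision, not a variance
      for (i in 1:N) {
        y[i] ~ dnorm(mu, tau)
      }
      # Vague priors; a gamma prior on the precision is the conjugate choice
      mu  ~ dnorm(0, 0.001)     # the 0.001 here is also a precision
      tau ~ dgamma(0.01, 0.01)
      # Recover the variance and standard deviation for reporting
      sigma2 <- 1 / tau
      sigma  <- sqrt(sigma2)
    }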


Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. Academic Press, p. 454.

Tim
  • (+1) I don't recall seeing it outside of this context, where it's convenient to be able to say things like "add the precisions of the prior & the data to get the precision of the posterior" (see the worked equations after these comments), & to use the familiar gamma distribution as a conjugate prior for precision. – Scortchi - Reinstate Monica Nov 25 '15 at 11:45
  • Also common in multivariate settings, where the precision becomes the inverse of the covariance matrix. (And again, it is useful as a parameter for a normal distribution when you need a conjugate prior; see the conditional-distribution formula after these comments.) – Peter Bloem Nov 25 '15 at 21:02
  • Yes, the precision matrix is very helpful when dealing with multivariate Gaussians (as @Peter said), e.g. the formulas for conditional distributions are simpler in terms of precision matrices. Bishop spends many pages describing how this works in Chapter 2 of his Pattern Recognition and Machine Learning, and it then reappears many times throughout the book. – amoeba Nov 25 '15 at 21:17
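To make the point in @Scortchi's comment concrete: for $n$ observations $y_i \sim \mathcal{N}(\mu, 1/\tau)$ with known precision $\tau$ and a conjugate prior $\mu \sim \mathcal{N}(\mu_0, 1/\tau_0)$, the standard conjugacy result is

$$\tau_{\text{post}} = \tau_0 + n\tau, \qquad \mu_{\text{post}} = \frac{\tau_0\,\mu_0 + n\tau\,\bar{y}}{\tau_0 + n\tau},$$

i.e. the posterior precision is simply the prior precision plus the total precision contributed by the data, and the posterior mean is a precision-weighted average. Neither statement is this tidy when written in terms of variances.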
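Similarly, for the multivariate point raised by @Peter and @amoeba: if $x = (x_a, x_b)$ is jointly Gaussian with mean $(\mu_a, \mu_b)$ and precision matrix $\Lambda = \Sigma^{-1}$ partitioned into blocks $\Lambda_{aa}, \Lambda_{ab}, \Lambda_{ba}, \Lambda_{bb}$, then the conditional distribution is

$$x_a \mid x_b \sim \mathcal{N}\!\left(\mu_a - \Lambda_{aa}^{-1}\Lambda_{ab}(x_b - \mu_b),\; \Lambda_{aa}^{-1}\right),$$

whereas the same mean and covariance expressed through $\Sigma$ require Schur complements. This is the derivation Bishop works through in Pattern Recognition and Machine Learning.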