
Whenever a result is positive in a study, the authors always give a p-value, which I understand as follows:

A really is higher than B, and I have a 0.00015 probability of being wrong in saying this.

Genuinely, I'd like to give an anti-p-value when I report a negative result, which could be understood as follows:

A is not found to be higher than B, and we had only a 0.00015 probability of missing a difference greater than 1 between the two.

As far as I understand, you can calculate this probability just like the power of the test, from the variance and the size of the sample, which you can always compute (see the sketch below). If the delta (1 in my example) is relevant, the information given here could be genuinely important.
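To make this concrete, here is a rough sketch of the computation I have in mind, assuming normal data, a known common standard deviation, and a two-sample z-test; all the numbers are made up:

```python
import numpy as np
from scipy import stats

# Illustrative inputs (made up): per-group sample size, sample SD,
# the smallest difference we care about (delta), and the test level.
n, sd, delta, alpha = 50, 2.0, 1.0, 0.05

# Standard error of the difference in means for two groups of size n,
# assuming a common known SD (z-test approximation).
se = sd * np.sqrt(2 / n)

# Critical value for a two-sided test at level alpha.
z_crit = stats.norm.ppf(1 - alpha / 2)

# Power: probability of rejecting H0 if the true difference is delta.
power = (stats.norm.sf(z_crit - delta / se)
         + stats.norm.cdf(-z_crit - delta / se))

# The "anti-p-value" I have in mind is the Type II error rate:
# the probability of *missing* a true difference of delta.
beta = 1 - power
print(f"power = {power:.4f}, probability of missing delta = {beta:.4f}")
```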

Why is this never done, though?

PS: Of course, as cleverly explained here, this is not to be used as an argument that there could be a difference even if none is seen here. But so far I can't see why he is so angry about power.

Dan Chaltiel
    "...and I have 0.00015 probability of being wrong saying this" is incorrect, as is the parallel construction about "anti-p-values," because the calculation of both probabilities assumes knowledge of the true state of the world, which you don't have. I suspect that by finding out why these statements are wrong (which you can do by searching this site for material on hypothesis tests and p-values), you will obtain answers to your question. – whuber Nov 10 '17 at 17:54

1 Answer


It's a common misconception that $p$ is the probability that your research hypothesis is wrong. Rather, it's the probability of getting results at least as extreme as the sample assuming the null hypothesis is true.
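As a concrete illustration (a minimal sketch, not part of the original answer), here is how that definition plays out for a two-sample $t$-test in Python; the data are simulated, so the numbers are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two simulated samples; under H0 they share the same mean.
a = rng.normal(loc=0.0, scale=1.0, size=100)
b = rng.normal(loc=0.0, scale=1.0, size=100)

# p is the probability, computed assuming H0 (equal means), of a
# t-statistic at least as extreme as the one observed here.
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# It is NOT the probability that the research hypothesis is wrong:
# when H0 is true, p is uniformly distributed, so p < alpha occurs
# at rate alpha by construction.
```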

Relatedly, you can't compute the power of a test applied to real data, because power depends on the population effect, and if you knew the population effect, you wouldn't need to collect and analyze data to begin with. The most you can do is answer questions such as: if the population effect were a difference of 5 units (and the model were correct), what would be the probability of rejecting the null hypothesis with $α = .05$ and a sample size of 100? This kind of computation is what we call a power analysis.
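For instance, here is a sketch of that hypothetical power analysis using statsmodels; the difference of 5 units and the population SD of 10 are assumptions I've made up to match the paragraph above:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical population values: a true difference of 5 units and a
# population SD of 10 give a standardized effect size (Cohen's d) of 0.5.
effect_size = 5 / 10

# Prospective power: P(reject H0 | d = 0.5, alpha = .05, n = 100 per group).
power = TTestIndPower().power(effect_size=effect_size, nobs1=100, alpha=0.05)
print(f"power = {power:.3f}")  # roughly 0.94 under these assumptions

# The same machinery can instead solve for the per-group sample size
# needed to reach, say, 80% power.
n_needed = TTestIndPower().solve_power(effect_size=effect_size,
                                       power=0.80, alpha=0.05)
print(f"n per group for 80% power = {n_needed:.1f}")
```

Note that everything here is conditional on the assumed population effect: change the hypothesized difference or SD and the power changes with it, which is exactly why this is a planning tool rather than something you can compute from the data after the fact.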

Kodiologist
  • Yeah, but isn't this probability just as valuable as the p-value? – Dan Chaltiel Nov 10 '17 at 22:08
  • @DanChaltiel In practice, I don't think $p$-values are very valuable, because the null hypothesis can be safely assumed to be false a priori, and power analyses are also of dubious value because you don't know the population effect. So I suppose they're equally valuable, but only because neither is worth much. – Kodiologist Nov 10 '17 at 22:32
  • This is confusing, because as a scientist I've been taught to use the p-value like this, and I have the feeling that the scientific community is in the same position. Why is the p-value used so much in scientific papers if it's so worthless? – Dan Chaltiel Nov 13 '17 at 08:58
  • @DanChaltiel See for example this answer. – Kodiologist Nov 13 '17 at 15:23