Whenever a study reports a positive result, the authors give the p-value, which I understand as follows:
A really is higher than B, and I have a 0.00015 probability of being wrong in saying this.
Genuinely, I'd like to give an anti-p-value when I report a negative result, which could be understood as follows:
A is not found to be higher than B, and we had only a 0.00015 probability of missing a difference larger than 1 between the two.
As far as I understand, you can calculate this probability just like the power of the test, from the sample variances and sample sizes, which you can always compute; and if the delta (1 in my example) is a relevant difference, this information can genuinely matter. A rough sketch of such a calculation is below.
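To make it concrete, here is a minimal sketch of what I mean, using a two-sided z-test normal approximation rather than exact t-test power; the function name `miss_probability` and the numbers in the example are just my own illustration, not from any particular study.

```python
# Minimal sketch: probability of missing a true difference of size `delta`
# (i.e. the Type II error rate, 1 - power), using a normal approximation
# for a two-sided test on the difference of two group means.
import numpy as np
from scipy.stats import norm

def miss_probability(delta, sd_a, sd_b, n_a, n_b, alpha=0.05):
    """Probability of NOT rejecting H0 when the true difference is `delta`."""
    se = np.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)   # standard error of the mean difference
    z_crit = norm.ppf(1 - alpha / 2)              # two-sided critical value
    power = norm.cdf(delta / se - z_crit) + norm.cdf(-delta / se - z_crit)
    return 1 - power

# e.g. "we had only this probability of missing a difference larger than 1"
print(miss_probability(delta=1.0, sd_a=2.0, sd_b=2.0, n_a=200, n_b=200))
```

So the number only needs the group variances, the sample sizes, and the chosen delta, all of which are available once the study is done.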
Why is this never done, though?
PS: Of course, as cleverly explained here, this is not to be used as an argument that there could still be a difference even though none was seen here. But so far I can't see why he is so angry about power.