3

A study I’ve read reports all of the results relevant to me as having p-values >0.1. Does this mean the study used p < 0.1 as its significance threshold? Or is it still possible that the study chose p < 0.05 as the significance threshold, but simply wrote the p-values as >0.1 because they were higher not only than 0.05 but also than 0.1?

  • The authors chose the p-value threshold, and it should be written down somewhere in the study; find that number. The authors are expected to report p-values for all the tested hypotheses, regardless of significance. – user2974951 Mar 19 '24 at 09:16
  • The problem is, they didn’t write down the threshold. Do you know of anything that could indicate they still used a 0.05 p-value threshold? Is the use of a 0.1 p-value threshold uncommon, especially in the 1950s? Have you seen studies before showing something similar? Anything? – Emil Kristensen Mar 19 '24 at 09:19
  • If that is true, then you should throw that study in the trash, because this is critical information that anyone who knows anything about hypothesis testing would report. Or you could just assume they used $\alpha=0.05$. – user2974951 Mar 19 '24 at 09:20
  • "A study I’ve read shows that all of the results relevant to me were at >0.1 p value." This is a bit vague. Did they report p-values like that, or is ">0.1 p value" your own wording? And what do you mean by "the results relevant to me"? In what way are those results relevant to you? Could you provide a more exact quotation of the text used in the study? – Sextus Empiricus Mar 19 '24 at 13:47

1 Answer

4

Many studies, in biology at least, do not state upfront, before data collection, a level of significance for their analyses. Instead, you collect some data, usually under time and/or budget constraints, and test some null hypothesis for differences between groups (or test correlations, coefficients, whatever). The result is typically a p-value as a continuous measure of evidence against the null hypothesis, and it doesn't make sense to state a threshold for significance. I believe this falls under Fisher's use of p-values, as opposed to the Neyman-Pearson framework (see also When to use Fisher versus Neyman-Pearson framework?).
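To make the distinction concrete, here is a minimal Python sketch; the data and the choice of a two-sample t-test are made up for illustration, not taken from the study in question. Fisher-style use reports the exact p-value as a measure of evidence, while Neyman-Pearson-style use compares it to a pre-specified $\alpha$ and emits only a reject/retain decision:

```python
from scipy import stats

# Hypothetical measurements for two groups (illustration only).
group_a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
group_b = [5.0, 5.2, 4.7, 5.1, 5.3, 4.8]

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Fisher-style reporting: the exact p-value is the result,
# a continuous measure of evidence against the null.
print(f"p = {p_value:.3f}")

# Neyman-Pearson-style decision: compare to an alpha that was
# fixed before seeing the data, and report only the decision.
alpha = 0.05
print("reject H0" if p_value < alpha else "fail to reject H0")
```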

This may also be the case for the study you mention, so there is no point in asking about the level of significance they chose. As to why they report >0.1 instead of the actual value, we can't tell for sure, but in practice it shouldn't make much difference, since p > 0.1 most likely indicates that the observed data are not at all unusual in a world where the null hypothesis is correct. In particular, p > 0.1 is non-significant under both $\alpha = 0.05$ and $\alpha = 0.1$, so the conclusion would be the same either way.
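As a rough illustration of why p > 0.1 is unremarkable under the null (a simulation sketch with assumed settings, not data from the study): when the null hypothesis is true, p-values are approximately uniform on [0, 1], so a p-value above 0.1 is expected about 90% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims = 10_000

# Draw both groups from the same normal distribution, i.e. H0 is true,
# and record the p-value of a two-sample t-test each time.
pvals = np.array([
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
    for _ in range(n_sims)
])

# Under H0, roughly 90% of p-values should exceed 0.1.
print(f"fraction with p > 0.1: {(pvals > 0.1).mean():.3f}")
```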

dariober
  • 4,250