(stats newbie here)
The highlighted answer to *Can you compare p-values of Kolmogorov–Smirnov tests of normality of two variables to say which is more normal?* refers to a paper* that says that when the null hypothesis is true, the p-value is a uniformly distributed random variable, so a p-value of 0.9 is just as likely as one of 0.1.
For my data set (wind speeds, 72 data points), I wanted to find the distribution that best fits my data, so I ran the Kolmogorov–Smirnov test for a number of distributions. Here I will just mention the Normal and the Weibull. I find that the null hypothesis is not rejected for either, with p = 0.79 for the Normal and p = 0.69 for the Weibull.
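For reference, here is a minimal sketch of that fitting step in Python with scipy. The wind-speed array is a placeholder (I generate synthetic Weibull-like data in place of my real 72 measurements), and the parameter choices are assumptions purely for illustration:

```python
import numpy as np
from scipy import stats

# Placeholder for the real 72 wind-speed measurements: synthetic
# Weibull-distributed data with assumed shape/scale parameters.
rng = np.random.default_rng(0)
speeds = stats.weibull_min.rvs(2.0, scale=6.0, size=72, random_state=rng)

# Fit each candidate distribution to the data...
norm_params = stats.norm.fit(speeds)                # (loc, scale)
weib_params = stats.weibull_min.fit(speeds, floc=0) # (shape, loc, scale)

# ...then run the KS test against each fitted CDF.
ks_norm = stats.kstest(speeds, 'norm', args=norm_params)
ks_weib = stats.kstest(speeds, 'weibull_min', args=weib_params)

print(f"Normal:  D={ks_norm.statistic:.3f}, p={ks_norm.pvalue:.2f}")
print(f"Weibull: D={ks_weib.statistic:.3f}, p={ks_weib.pvalue:.2f}")
```

(One caveat: because the parameters are estimated from the same data being tested, the KS p-values here are not exact; this sketch just reproduces the procedure described above.)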
To test out the suggestion from the answer referred to above, I then repeatedly drew random subsamples of 50 points from the data set and ran the Kolmogorov–Smirnov test on each, 1000 times in total. I find that the Normal p-value exceeds the Weibull p-value about 55% of the time.
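The subsampling experiment can be sketched along these lines; again the data array is a synthetic stand-in for my real measurements, and the subsample size and run count follow the numbers above:

```python
import numpy as np
from scipy import stats

# Placeholder for the real 72 wind-speed measurements.
rng = np.random.default_rng(1)
speeds = stats.weibull_min.rvs(2.0, scale=6.0, size=72, random_state=rng)

n_runs, n_sub = 1000, 50
normal_wins = 0
for _ in range(n_runs):
    # Draw a random subsample of 50 points without replacement.
    sub = rng.choice(speeds, size=n_sub, replace=False)
    # Refit each distribution to the subsample and KS-test it.
    p_norm = stats.kstest(sub, 'norm', args=stats.norm.fit(sub)).pvalue
    p_weib = stats.kstest(sub, 'weibull_min',
                          args=stats.weibull_min.fit(sub, floc=0)).pvalue
    normal_wins += p_norm > p_weib

print(f"Normal p > Weibull p in {100 * normal_wins / n_runs:.1f}% of runs")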
So this seems to tell me that, yes, we cannot take the "magnitude" of a p-value seriously in this case. All we can say is that neither the Normal nor the Weibull is rejected, and we must therefore choose a better-fitting distribution some other way.
But could a case be made for a situation in which, under repeated subsampled runs, the p-value for one distribution was almost always (say 99% of the time) larger than the other's? E.g. if the Normal p-value were greater than the Weibull p-value in 99% of runs.
Thank you in advance. I have been looking at a lot of similar questions today but I really wanted to try to argue this out and understand this properly...
*Murdoch, D. J., Tsai, Y.-L., and Adcock, J. (2008). P-Values are Random Variables. The American Statistician, 62(3), 242–245.