I have a set of gridded annual rainfall data for Thailand, covering roughly 30×18 grid squares. I want to test whether the gamma distribution is a suitable fit for these data, so I am running the Lilliefors test on the rainfall series from each grid square. The implementation in Matlab seems to work fine; the problem is understanding my results. Here is an example from one grid square:
Test statistic = 0.0782
Critical value (1% significance level) = 0.0958
Critical value (5% significance level) = 0.0811
Critical value (10% significance level) = 0.0738
Since the null hypothesis is rejected when the test statistic exceeds the critical value, I think I reject the null hypothesis at the 10% level (0.0782 > 0.0738), but not at the 5% level (0.0782 < 0.0811) or the 1% level (0.0782 < 0.0958).
The thing is, I don't really understand how I can "accept" (fail to reject) something at the 1% and 5% levels but not at the 10% level. I am not a statistician, so I do struggle to get my head around this. In my own work I run tests for statistical significance, where something can be significant at the 10% level but not at the 5% or 1% levels, and I think that is why I can't understand it apparently being the other way around here.
What does my result say about the goodness of fit of the gamma distribution to my data?
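For reference, here is roughly what I am doing for each grid square. This is a simplified sketch rather than my exact code: `rainfall` stands in for the vector of annual totals in one grid square, and `nsim = 2000` is an arbitrary choice. Because Matlab's `lillietest` does not support the gamma distribution, the sketch approximates the Lilliefors critical values by Monte Carlo, re-fitting the gamma parameters on each simulated sample:

    % Lilliefors-type test of a gamma fit for one grid square.
    % 'rainfall' is a placeholder for that square's annual totals.
    x = rainfall(:);
    n = numel(x);

    % Fit gamma parameters to the observed data (shape, scale)
    phat = gamfit(x);

    % KS distance between the empirical CDF and the fitted gamma CDF
    xs = sort(x);
    ecdf_hi = (1:n)' / n;        % empirical CDF just after each point
    ecdf_lo = (0:n-1)' / n;      % empirical CDF just before each point
    F = gamcdf(xs, phat(1), phat(2));
    D = max(max(abs(ecdf_hi - F)), max(abs(ecdf_lo - F)));

    % Monte Carlo null distribution: simulate from the fitted gamma and
    % re-estimate the parameters each time (the Lilliefors correction)
    nsim = 2000;
    Dsim = zeros(nsim, 1);
    for k = 1:nsim
        xsim = sort(gamrnd(phat(1), phat(2), n, 1));
        psim = gamfit(xsim);
        Fsim = gamcdf(xsim, psim(1), psim(2));
        Dsim(k) = max(max(abs(ecdf_hi - Fsim)), max(abs(ecdf_lo - Fsim)));
    end

    % Critical values at the 1%, 5% and 10% significance levels
    crit = quantile(Dsim, [0.99 0.95 0.90]);
    fprintf('D = %.4f, crit (1%%, 5%%, 10%%) = %.4f %.4f %.4f\n', D, crit);

The null hypothesis (that the gamma distribution fits) is then rejected at a given level whenever `D` exceeds the corresponding critical value, which is how I obtained the numbers above.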