I was reading a paper that says it uses an "ANOVA permutation test" to check whether the in-degree distribution differs among groups of nodes in a network. I Googled the term and found a resource (here) explaining how the procedure works.
In the resource's example, when the authors see a significant p-value from independence_test, they follow up with pairwisePermutationTest to make the pairwise multiple comparisons. Their results:
independence_test(Response ~ Factor, data = Data)
Asymptotic General Independence Test
maxT = 3.2251, p-value = 0.005183
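For intuition about what an omnibus permutation test does (the resource uses R's coin package; the sketch below is plain Python, uses a simpler between-group-variance statistic rather than coin's maxT procedure, and the toy data are made up):

```python
import random

def perm_anova_p(groups, n_perm=2000, seed=0):
    """One-way permutation ANOVA sketch: the p-value is the fraction of
    label permutations whose between-group variability is at least as
    large as the observed one."""
    rng = random.Random(seed)
    data = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]

    def between_ss(values):
        # between-group sum of squares of group means (F-numerator proxy)
        grand = sum(values) / len(values)
        out, i = 0.0, 0
        for n in sizes:
            chunk = values[i:i + n]
            mean = sum(chunk) / n
            out += n * (mean - grand) ** 2
            i += n
        return out

    observed = between_ss(data)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(data)  # permuting values == permuting group labels
        if between_ss(data) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy data: group C is clearly shifted upward, so p should be small
A = [1.0, 1.2, 0.9, 1.1]
B = [1.1, 1.3, 1.0, 1.2]
C = [2.0, 2.2, 1.9, 2.1]
p = perm_anova_p([A, B, C])
```

A small p here only says the groups are not all alike; it does not say which pairs differ, which is why the resource follows up with pairwise tests.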
This p-value is statistically significant, indicating (I think?) that at least one group's median differs from the others.
They follow this up with the pairwise test to determine which group medians differ, obtaining:
Comparison Stat p.value p.adjust
1 D - A = 0 -0.2409 0.8096 0.80960
2 D - B = 0 -2.074 0.03812 0.06106
3 D - C = 0 -2.776 0.005505 0.01876
4 A - B = 0 1.952 0.05088 0.06106
5 A - C = 0 2.734 0.006253 0.01876
6 B - C = 0 1.952 0.05088 0.06106
As far as I can tell, this implies that after adjustment the median of group D is significantly smaller than that of group C, and the median of group A is significantly larger than that of group C (p.adjust = 0.01876 for both); no other pairwise difference is significant at the 0.05 level.
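Incidentally, the p.adjust column in that table is consistent with Benjamini–Hochberg ("fdr") adjustment of the raw p-values: applying it to the six p.value entries reproduces the adjusted column exactly. A sketch in Python (the helper name is mine; R users would just call p.adjust(p, method = "fdr")):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg (fdr) adjustment, mirroring R's
    p.adjust(p, method = 'fdr')."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for pos in range(m - 1, -1, -1):
        i = order[pos]
        running_min = min(running_min, pvals[i] * m / (pos + 1))
        adjusted[i] = running_min
    return adjusted

# raw p.values from the pairwise table, in row order
raw = [0.8096, 0.03812, 0.005505, 0.05088, 0.006253, 0.05088]
adj = bh_adjust(raw)
# rounds to the table's p.adjust column:
# [0.8096, 0.06106, 0.01876, 0.06106, 0.01876, 0.06106]
```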
Question:
With my data, the independence test gives a statistically significant p-value, yet the follow-up pairwise permutation test shows no statistically significant differences after adjustment. Why could this be?
