First of all, I need to use a permutation test to assess the significance of a statistic computed on some data. To do the permutation test, I shuffle the data and compare the original statistic with the statistics from the shuffled data.
The problem arises because, in some scenarios, the way I do the comparison has a large effect on the result. For example, I can count the number of permutations in which the original statistic <= the shuffled statistic, or the number in which the original statistic < the shuffled statistic, and the resulting p-values can be very different.
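To make the two counting rules concrete, here is a minimal sketch in Python. The statistic and the numbers are made up (a discrete toy distribution, so ties with the observed value are common); it is only meant to show how the two rules can diverge:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: an "observed" statistic and statistics from shuffled data.
# In my real problem the statistic is a sum of node attributes in a region;
# here it is just a placeholder with many ties around the observed value.
observed = 12.0
n_perm = 10000
shuffled_stats = rng.poisson(lam=12, size=n_perm).astype(float)

# Counting rule 1: original <= shuffled
p_leq = np.mean(observed <= shuffled_stats)

# Counting rule 2: original < shuffled (strict inequality)
p_lt = np.mean(observed < shuffled_stats)

print(p_leq, p_lt)
```

When many shuffled statistics tie exactly with the observed one, the two proportions differ noticeably, which is the behaviour I am seeing.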
Could anyone tell me a decent and robust way to perform the permutation test, in particular how to compare the original statistic with the shuffled ones?
Or under what circumstances does this problem arise?
Update
Thanks to @Glen_b and @jbowman for their comments. Maybe I should explain a bit more here.
First, it should be a one-tailed test. The null hypothesis is that there is no difference between the shuffled data and the original data.
More specifically, suppose we have a network/graph in which each node has an attribute, such as age. The network has 5000+ nodes. Based on their connectivity, we want to define/calculate the degree of enrichment locally.
To do that, I compare the sum of the attributes in a specific region with the sum of the attributes after shuffling the data.
Taken together, I use the permutation test to find out whether a region is enriched.
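To make the setup concrete, here is a minimal sketch (Python/NumPy) of the kind of test I am running. The names `attributes` and `region_nodes` and the data are hypothetical, not my actual code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: one attribute value per node, and a "region" given by node indices.
n_nodes = 5000
attributes = rng.exponential(scale=30.0, size=n_nodes)  # e.g. ages of the nodes
region_nodes = np.arange(50)                            # indices of the local region of interest

observed_sum = attributes[region_nodes].sum()

# Permutation: shuffle the attribute values over the nodes, keep the region fixed,
# and recompute the regional sum each time.
n_perm = 10000
perm_sums = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(attributes)
    perm_sums[i] = shuffled[region_nodes].sum()

# One-tailed test for enrichment: how often is the shuffled sum at least as large
# as the observed sum? (This is exactly where the <= vs < question shows up.)
p_value = np.mean(perm_sums >= observed_sum)
print(observed_sum, p_value)
```

The question is essentially whether the `>=` in the last step should be a strict `>`, and why the choice matters so much for my data.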