We want to know whether two samples $\{x\}$ and $\{y\}$ were drawn from the same distribution. The null hypothesis $H_0$ is that they were. As statisticians we test the hypothesis by calculating a p-value: the probability, if the null hypothesis is correct, of observing data at least as extreme as what we actually observed.
As statisticians with computers we can avoid thinking too hard about the shape of the underlying distribution and just run a permutation test on whatever statistic we care about. Well, we said earlier that we care about whether the samples differ. So I guess we have to come up with some MLE $\hat{\theta}$ for each sample, and then look at the difference $\hat{\theta}_x - \hat{\theta}_y$, right?
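To make the "just run a permutation test" step concrete, here is a minimal Monte Carlo sketch using the difference of sample means as the statistic. The function name and the two-sided convention are my choices, not anything fixed above; any statistic could be dropped in.

```python
import random

def perm_test_diff(x, y, n_perm=10_000, seed=0):
    """Monte Carlo permutation test on the difference of sample means.

    Under H0 the pooled observations are exchangeable, so we shuffle the
    labels and count how often the permuted |difference| is at least as
    extreme as the observed one (a two-sided test).
    """
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    n_x = len(x)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:n_x], pooled[n_x:]
        diff = abs(sum(px) / len(px) - sum(py) / len(py))
        if diff >= observed:
            hits += 1
    # +1 in numerator and denominator keeps the estimated p-value
    # strictly positive even when no permutation is as extreme
    return (hits + 1) / (n_perm + 1)
```

Note that this estimates the permutation p-value by sampling relabelings rather than enumerating them; the exhaustive version comes up below.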
But we've been reading all about Likelihood Ratio Tests, which have some nice features, and a difference is just a ratio in log space, so: are there advantages to running the permutation test on the ratio $\frac{\hat{\theta}_x}{\hat{\theta}_y}$ instead of the difference? IIUC, working with ratios at least gestures at Wilks' theorem, which says the likelihood-ratio statistic $-2\log\Lambda$ is asymptotically $\chi^2$-distributed. But we already decided our computer can do a permutation test – potentially even exhaustively for small sample sizes – and isn't that preferable in practice to a closed-form solution that is only asymptotically correct?
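The "exhaustively for small sample sizes" aside can be made concrete too: for $n+m$ pooled observations there are only $\binom{n+m}{n}$ relabelings, so we can enumerate them all and get an exact permutation p-value. A sketch, with a log-ratio-of-means statistic as one example (which assumes the data are positive so the logs exist; both function names are mine):

```python
from itertools import combinations
from math import log

def exact_perm_test(x, y, stat):
    """Exhaustive permutation test: enumerate every relabeling of the
    pooled data into groups of size len(x) and len(y), and return the
    exact fraction whose |statistic| is at least as extreme as observed."""
    pooled = list(x) + list(y)
    n_x = len(x)
    observed = abs(stat(x, y))
    idx = range(len(pooled))
    hits = 0
    total = 0
    for keep in combinations(idx, n_x):
        keep_set = set(keep)
        px = [pooled[i] for i in keep]
        py = [pooled[i] for i in idx if i not in keep_set]
        if abs(stat(px, py)) >= observed:
            hits += 1
        total += 1  # ends up equal to comb(len(pooled), n_x)
    return hits / total

def log_ratio_of_means(x, y):
    # a difference in log space IS the log of the ratio of the two
    # estimates (here, sample means); requires positive data
    return log(sum(x) / len(x)) - log(sum(y) / len(y))
```

Because the test is on the *absolute* statistic and log turns the ratio into a difference, permuting on the log-ratio and permuting on the ratio give the same exact p-value here; the monotone transform doesn't change which relabelings count as "at least as extreme".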
Or maybe we pick up some power if we mix the LRT with the permutation concept and use the actual likelihood ratio as our permutation statistic: $\Lambda = \frac{L(\hat{\theta}_{xy};\; x \cup y)}{L(\hat{\theta}_x;\; x)\, L(\hat{\theta}_y;\; y)}$, where $\hat{\theta}_{xy}$ is the MLE fit to the pooled data?
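One way to sketch that hybrid: pick a parametric family, compute $-2\log\Lambda$ comparing "one shared parameter" against "two separate parameters", and then get its null distribution by permutation instead of leaning on Wilks' $\chi^2$ approximation. Below I assume an exponential model on positive data purely for illustration (its maximized log-likelihood has a closed form); the function names are hypothetical.

```python
from math import log
import random

def exp_loglik_at_mle(data):
    """Maximized exponential log-likelihood: with MLE rate n/sum(data),
    log L = n*log(lam) - lam*sum(data) reduces to n*log(n/sum) - n."""
    n = len(data)
    return n * log(n / sum(data)) - n

def lrt_stat(x, y):
    """-2 log Lambda: one shared rate (pooled fit) vs two separate rates.
    Always >= 0, since the separate fit can't be worse than the pooled one."""
    pooled = list(x) + list(y)
    return -2.0 * (exp_loglik_at_mle(pooled)
                   - exp_loglik_at_mle(x) - exp_loglik_at_mle(y))

def perm_pvalue(x, y, n_perm=5_000, seed=1):
    """Permutation null distribution for the LRT statistic itself."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    n_x = len(x)
    observed = lrt_stat(x, y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if lrt_stat(pooled[:n_x], pooled[n_x:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

The appeal is that the permutation distribution of `lrt_stat` is exact under exchangeability at any sample size, whereas comparing the observed statistic to a $\chi^2_1$ quantile is only justified asymptotically.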