I've been trying to get my head around the likelihood ratio test, and I think I now roughly understand how it works. I've read a few posts showing how the t-test formula can be derived via a likelihood ratio test, but I was wondering: is it possible to exactly reproduce the p-value of a t- or z-test using a likelihood ratio test? My attempt is below, but it gives different (though similar) numbers for each test. Is there any way to correct this so the numbers tie out, or do the approximations made in the likelihood ratio test mean that, distributionally, they will always come out slightly different? I've also tried running the t- and z-tests with the maximum-likelihood standard deviations, but couldn't get them to tie out.
# ML standard deviation (divides by n, not n - 1)
sd_ML = function(x) sqrt(mean(x^2) - mean(x)^2)
# ML standard deviation under the null H0: mu = 0
sd_null = function(x) sqrt(mean((x - 0)^2))

n = 200
x = rnorm(n)
xbar = mean(x)
s = sd_ML(x)
s0 = sd_null(x)

# log-likelihood ratio: null minus (unrestricted) alternative
loglik_null = sum(dnorm(x, 0, s0, log = TRUE))
loglik_alt = sum(dnorm(x, xbar, s, log = TRUE))
LR = loglik_null - loglik_alt

# -2 * log(LR) is approximately chi-squared with 1 df under H0
p_LR = pchisq(-2 * LR, df = 1, lower.tail = FALSE)
p_t = t.test(x)$p.value
p_z = 2 * pnorm(-abs(xbar) / (sd(x) / sqrt(n)))
print(c(LR = p_LR, t = p_t, z = p_z))
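For what it's worth, if I've done the algebra right, the statistic above should satisfy -2*log(LR) = n*log(1 + t^2/(n-1)) exactly, since the ML variance under the null is mean(x^2) = mean((x-xbar)^2) + xbar^2. That would make the LRT statistic a monotone function of |t|, so the discrepancy would come only from the chi-squared approximation, and the exact t p-value could be recovered by inverting the identity. A quick sanity check (the `set.seed` call is just mine, for reproducibility):

```r
set.seed(1)
n = 200
x = rnorm(n)
t_stat = unname(t.test(x)$statistic)

# ML variances under the null (mu = 0) and the alternative
v0 = mean(x^2)
v1 = mean(x^2) - mean(x)^2

lrt_stat = n * (log(v0) - log(v1))          # -2 * log(likelihood ratio)
via_t = n * log(1 + t_stat^2 / (n - 1))     # claimed closed form in terms of t
print(c(lrt = lrt_stat, via_t = via_t))     # these two should match exactly

# invert the identity to recover |t| from the LRT statistic,
# then use the exact t distribution instead of the chi-squared approximation
t_back = sqrt((n - 1) * (exp(lrt_stat / n) - 1))
p_exact = 2 * pt(-abs(t_back), df = n - 1)
print(c(p_exact = p_exact, p_t = t.test(x)$p.value))
```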