I am a statistician running into an odd problem, and it feels like I am missing a key point here. I have a true model (generated by me, so I know the truth) and a PREDICTED model that estimates certain parameters of the true model, and I want to test the validity/accuracy of the PREDICTED model.
I run the same exercise with two sample sizes: n = 200 and n = 50.
If I use a likelihood ratio test to judge my predicted model's goodness of fit at n = 200, I get a much lower likelihood than at n = 50. So, in this experimental setup, the n = 200 model (the SAME model as the n = 50 one, differing only in sample size) appears LESS ACCURATE than the n = 50 model, simply because the likelihoods are affected by the sample sizes.
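
To make it concrete, here is a minimal sketch of the kind of comparison I am running. The normal model and the specific parameter values are just stand-ins for illustration (my actual models are more involved), but the pattern is the same: the total log-likelihood sums over observations, so it is much more negative at n = 200 than at n = 50 even though the predicted model is no worse.

```python
# Toy illustration only: a normal "true" model and a slightly-off "predicted" model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_mu, true_sigma = 1.0, 2.0   # true model parameters (known to me by construction)
pred_mu, pred_sigma = 1.1, 2.1   # parameters coming from the PREDICTED model

for n in (200, 50):
    # data generated from the true model
    x = rng.normal(true_mu, true_sigma, size=n)

    # total log-likelihood of the data under each model (sums over all n points)
    ll_pred = stats.norm.logpdf(x, pred_mu, pred_sigma).sum()
    ll_true = stats.norm.logpdf(x, true_mu, true_sigma).sum()

    print(f"n={n}: loglik(pred)={ll_pred:.1f}, loglik(true)={ll_true:.1f}, "
          f"LR stat=2*(ll_true-ll_pred)={2*(ll_true - ll_pred):.2f}")
```

The raw log-likelihoods at n = 200 are far lower than at n = 50 simply because there are more terms in the sum, which is exactly the effect that is confusing me.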
What am I missing? Feeling quite silly -
Thanks in advance.