I am simulating a real-world process with variation in it. I measure X, the number of calls served in the time period [0,T], and I make "n" pseudo-independent observations of X. In the real world, X was observed to be X=100 over the same time period, and I only have this one observation. How can I statistically compare the simulated results with the single real-world value? I want to validate the simulation model, meaning I want to determine whether the model results differ significantly from the real-world value. Should I take the mean of the "n" observations and run a test on that?

James
1 Answer

I'm not sure you can establish statistical significance with a single real-world data point.

Ordinarily, if you have two groups of samples and want to know whether they come from the same distribution (so that, in this case, you could use the simulations to say something intelligent about the real world), you carry out a hypothesis test such as a chi-squared test or Student's t-test.
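As a minimal sketch of the t-test route (assuming Python with NumPy/SciPy; the simulated counts below are synthetic Poisson draws standing in for the asker's n replications, not real output), a one-sample t-test compares the mean of the simulated observations against the observed value X = 100:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical stand-in for n = 30 simulated counts of calls served in [0, T]
sim = rng.poisson(lam=105, size=30)

# One-sample t-test: does the mean of the simulated X differ from the observed 100?
t_stat, p_value = stats.ttest_1samp(sim, popmean=100)
print(f"simulated mean = {sim.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Note that this treats 100 as a fixed constant and only accounts for the sampling variability of the simulation runs, which is why the comments below raise the prediction-interval framing instead.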

  • I followed the link to the hypothesis tests provided in the previous answer, and the one-sided hypothesis test with the t-distribution will work. – James Oct 02 '18 at 11:21
  • It is indeed possible to conduct a (useful!) significance test with a single measurement. For instance, a single observation of a Poisson variable can test most hypotheses about the Poisson parameter. In the present case the comparison is actually between $N$ values and another "future" value, giving $N+1$ values to work with, not just one. That's a huge difference. A prediction interval is a standard way to make such a comparison. Although there are ways to frame the problem that don't seem to construct a prediction interval, they are necessarily equivalent to it. – whuber Oct 02 '18 at 13:38
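The prediction-interval comparison mentioned in the last comment can be sketched as follows (again assuming Python with NumPy/SciPy and synthetic Poisson draws as a stand-in for the simulation output): build a 95% prediction interval for a single future observation from the n simulated values, then check whether the real-world observation of 100 falls inside it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sim = rng.poisson(lam=105, size=30)  # hypothetical simulated counts
n = sim.size
mean = sim.mean()
sd = sim.std(ddof=1)

# 95% prediction interval for one future observation:
# mean +/- t_{0.975, n-1} * sd * sqrt(1 + 1/n)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * sd * np.sqrt(1 + 1 / n)
lo, hi = mean - half_width, mean + half_width

print(f"95% prediction interval: [{lo:.1f}, {hi:.1f}]")
print(f"observed value 100 inside interval: {lo <= 100 <= hi}")
```

If the observed value lies outside the interval, that is evidence (at roughly the 5% level) that the model does not reproduce the real-world process; the sqrt(1 + 1/n) factor is what distinguishes a prediction interval from an ordinary confidence interval for the mean.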