
In their original publication, Box and Cox state

...we can obtain an approximate $100(1 - \alpha)$ per cent confidence region [around $\hat{\lambda}$] from $$ L_\text{max} (\hat{\lambda}) - L_\text{max}(\lambda) < \frac{1}{2}\chi^2_{\nu_\lambda}(\alpha)\ , $$ where $\nu_\lambda$ is the number of independent components in $\lambda$.

They unfortunately do not explain their notation: does $\chi^2(\cdot)$ refer to the CDF, the PDF, or something else? Further, what exactly is $\nu_\lambda$? Concretely, if I am applying a Box-Cox transform to a time series of length $N$, is $\nu_\lambda = N-1$, for instance?

For reference, the scipy Python package lists the following formula for the confidence interval of their Box-Cox implementation: $$ L_\text{max} (\hat{\lambda}) - L_\text{max}(\lambda) < \frac{1}{2}\chi^2(1 - \alpha, 1)\ ; $$ digging into the source code, it appears that they compute the PPF at $1 - \alpha$ and always assume $1$ degree of freedom for the $\chi^2$; is this correct?
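For what it's worth, scipy's behaviour can be checked numerically. The sketch below (the lognormal sample and seed are just illustrative) fits $\hat{\lambda}$ with `scipy.stats.boxcox`, requests a 95% interval via `alpha=0.05`, and verifies that at the interval endpoints the log-likelihood has dropped by $\frac{1}{2}\chi^2_{\text{ppf}}(1-\alpha,\, 1)$ from its maximum:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(size=200)  # illustrative positive-valued sample

# Fit lambda by maximum likelihood; alpha=0.05 also returns a 95% CI.
y, lam_hat, (lo, hi) = stats.boxcox(x, alpha=0.05)

# The drop in log-likelihood at each CI endpoint should match
# 0.5 * chi2.ppf(1 - alpha, df=1), i.e. scipy's stated formula.
llf_max = stats.boxcox_llf(lam_hat, x)
thresh = 0.5 * stats.chi2.ppf(0.95, df=1)
drop_lo = llf_max - stats.boxcox_llf(lo, x)
drop_hi = llf_max - stats.boxcox_llf(hi, x)
print(drop_lo, drop_hi, thresh)  # both drops should be close to thresh
```

This is consistent with the interval being a profile-likelihood region with one degree of freedom, since a single scalar $\lambda$ is being estimated.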

Any insight is appreciated.

Anthony
    This is standard shorthand. The context implies they refer to an appropriate quantile of the chi-squared distribution. Reviewing the theory of hypothesis testing, the meaning of p values, the Neyman-Pearson lemma, and the Likelihood Ratio test can be helpful. They will show you that $\nu_\lambda$ is not the size of your dataset, but rather it's the number of separate parameters used in estimating $\lambda.$ – whuber Apr 14 '23 at 21:38
  • Thanks for this. I would accept this as an answer if you care to post it as such. – Anthony Apr 15 '23 at 15:16

0 Answers