In their original publication, Box and Cox (1964) state:
> ...we can obtain an approximate $100(1 - \alpha)$ per cent confidence region [around $\hat{\lambda}$] from $$ L_\text{max}(\hat{\lambda}) - L_\text{max}(\lambda) < \frac{1}{2}\chi^2_{\nu_\lambda}(\alpha)\ , $$ where $\nu_\lambda$ is the number of independent components in $\lambda$.
Unfortunately, they do not explain their notation: does $\chi^2_{\nu_\lambda}(\alpha)$ refer to the CDF, the PDF, or something else? Further, what exactly is $\nu_\lambda$? Concretely, if I apply a Box-Cox transform to a time series of length $N$, is $\nu_\lambda = N - 1$, for instance?
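For concreteness, here is my working understanding of $L_\text{max}(\lambda)$: the profile log-likelihood obtained by maximizing over the location and scale parameters for a fixed $\lambda$, with constants dropped. This is a minimal sketch; the helper name `boxcox_profile_llf` and the synthetic data are my own, and I believe (but would like confirmed) that it matches `scipy.stats.boxcox_llf`:

```python
import numpy as np
from scipy import stats

def boxcox_profile_llf(lmbda, x):
    """Profile log-likelihood L_max(lambda) for the Box-Cox model:
    mu and sigma^2 are maximized out analytically, constants dropped."""
    n = len(x)
    y = stats.boxcox(x, lmbda=lmbda)  # transform with lambda held fixed
    sigma2_hat = np.var(y)            # MLE of the residual variance (ddof=0)
    return -n / 2 * np.log(sigma2_hat) + (lmbda - 1) * np.sum(np.log(x))

rng = np.random.default_rng(0)
x = rng.lognormal(size=200)  # synthetic positive data

# Should agree with scipy's own implementation of the same quantity:
print(boxcox_profile_llf(0.5, x), stats.boxcox_llf(0.5, x))
```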
For reference, the scipy Python package documents the following formula for the confidence interval in its Box-Cox implementation:
$$
L_\text{max} (\hat{\lambda}) - L_\text{max}(\lambda) < \frac{1}{2}\chi^2(1 - \alpha, 1)\ ;
$$
digging into the source code, it appears that they compute the PPF (i.e., the quantile function) at $1 - \alpha$ and always assume $1$ degree of freedom for the $\chi^2$; is this correct?
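To check this reading empirically, here is a small sketch on synthetic data that asks scipy for the interval and then tests whether the endpoints satisfy the documented inequality with the $\chi^2$ PPF at $1 - \alpha$ and $1$ degree of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(size=200)  # synthetic positive data

alpha = 0.05
# With alpha given, scipy also returns the confidence interval for lambda.
_, lmbda_hat, (lo, hi) = stats.boxcox(x, alpha=alpha)

# At each endpoint, the drop in the profile log-likelihood should equal
# half the chi-squared critical value (PPF at 1 - alpha, with 1 df).
threshold = 0.5 * stats.chi2.ppf(1 - alpha, df=1)
for end in (lo, hi):
    drop = stats.boxcox_llf(lmbda_hat, x) - stats.boxcox_llf(end, x)
    print(f"lambda = {end:.4f}: drop = {drop:.6f}, threshold = {threshold:.6f}")
```

If the drops match the threshold up to root-finding tolerance, that would seem to confirm the PPF-with-1-df interpretation, but I would like to understand how this squares with the $\nu_\lambda$ in the original paper.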
Any insight is appreciated.