
I am working on confidence intervals for transformed parameters in a dose-response log-logistic model.

For simplicity, let's assume a two-parameter regression model with normal errors, where $\theta=(a,b)$ are the parameters:

$$E[y|x] = \frac{1}{1 + \exp(a(\log x -\log b))}$$
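Note that at $x = b$ the exponent vanishes, so

$$E[y \mid x = b] = \frac{1}{1 + \exp\bigl(a(\log b - \log b)\bigr)} = \frac{1}{2},$$

i.e., $b$ is the dose at which the expected response equals $1/2$.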

Now, I am interested in finding the confidence interval for $ED_c = g(a,b \mid c) = \exp\!\left(\frac{c}{a} - \log b\right)$, where $c$ is a user-chosen constant.

I know that you can apply monotone transformations to confidence intervals for one-dimensional parameters, e.g., take the logarithm of the parameter and of both endpoints of the confidence interval.

However, I am not sure whether the same can be applied to the transformation I am interested in, since it is an $\mathbb{R}^2 \rightarrow \mathbb{R}$ function and the notion of monotonicity in multiple dimensions kinda vanishes.

Now, I can construct a joint confidence region (an approximately elliptical region) for the parameters $a, b$ by inverting the likelihood ratio test (LRT) quite easily; let's call this confidence region $C$. My idea is to evaluate the function $g$ on the set $C$ and, by finding the minimum and maximum of $g$ over $C$, I hope to obtain a confidence interval for $ED_c$.
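To make the construction concrete, here is a rough sketch of what I have in mind in Python; the data, starting values, grid ranges, and the value of $c$ below are only placeholders, not part of my actual problem:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

# --- placeholder data: replace with the real dose-response observations ---
rng = np.random.default_rng(0)
x = np.repeat([0.1, 0.3, 1.0, 3.0, 10.0], 5)
y = 1 / (1 + np.exp(2.0 * (np.log(x) - np.log(1.5)))) + rng.normal(0, 0.05, x.size)

def mean_fn(x, a, b):
    # E[y | x] for the two-parameter log-logistic model
    return 1.0 / (1.0 + np.exp(a * (np.log(x) - np.log(b))))

def g(a, b, c):
    # the transformation of interest: ED_c = exp(c/a - log b)
    return np.exp(c / a - np.log(b))

# least-squares fit (= MLE under i.i.d. normal errors)
theta_hat, _ = curve_fit(mean_fn, x, y, p0=[1.0, 1.0])
n = x.size
rss_hat = np.sum((y - mean_fn(x, *theta_hat)) ** 2)

# LRT-based joint confidence region on a grid of (a, b); with sigma^2
# profiled out, the LR statistic is n * log(RSS(a, b) / RSS_hat)
a_grid = np.linspace(0.5 * theta_hat[0], 1.5 * theta_hat[0], 300)
b_grid = np.linspace(0.5 * theta_hat[1], 1.5 * theta_hat[1], 300)
A, B = np.meshgrid(a_grid, b_grid)
rss = np.array([[np.sum((y - mean_fn(x, a, b)) ** 2) for a in a_grid] for b in b_grid])
in_C = n * np.log(rss / rss_hat) <= chi2.ppf(0.95, df=2)

# interval for ED_c: minimum and maximum of g over the region C
c = np.log(9.0)  # user-chosen constant (placeholder value)
g_vals = g(A, B, c)
print(g_vals[in_C].min(), g_vals[in_C].max())
```

Obviously the grid has to be wide and fine enough to contain all of $C$, otherwise the minimum and maximum are taken over a truncated region.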

I think it boils down to just the implication $\theta \in C \implies g(\theta) \in g(C)$, but I might be missing something.

My intuition is that this should work well; however, I am struggling to find a theoretical justification.

Disclaimer: I know about bootstrapping, Wald-type intervals, reparameterization, etc.; I am specifically interested in this particular construction.

Golias
• $ED_c$ is a scalar parameter that can be seen as a reparametrization of the original problem, so you can build a profile likelihood confidence interval or apply the delta method. – utobi Feb 28 '23 at 13:43
• Re "the notion of monotonicity in multiple dimensions kinda vanishes:" yes, that's correct, but it's misleading. For real-valued functions of one variable, monotonicity implies they are one-to-one. That is the concept you want to generalize to transformations of more than one variable. Indeed, if you apply a set-valued procedure $t$ to random data $X$ where $\theta$ is any property of the distribution of $X$ for which $\Pr(\theta\in t(X))\ge1-\alpha,$ and $h$ is a one-to-one function, then the event $\theta\in t(X)$ is identical to the event $h(\theta)\in h(t(X)).$ – whuber Feb 28 '23 at 14:07
• @whuber yes, thanks for the input, you are right. Does it also hold for functions that are not one-to-one? I believe the key is that my transformation is continuous and surjective, and I am only interested in one implication, not an equivalence, i.e., does $\theta \in t(X) \implies h(\theta) \in h(t(X))$ hold for $h$ continuous and surjective? – Golias Feb 28 '23 at 14:27
• The implication is trivial. If we simplify the notation, let $S$ be a set (playing the role of $t(X)$) and $h:S\to T$ any function. Then $\theta\in S$ implies $h(\theta)\in T$ by the very definition of a function. This triviality should give you pause, because it suggests you ought to be interested in more than this property. – whuber Feb 28 '23 at 14:49
• Oh, you're right, it is actually trivial. Thank you! – Golias Feb 28 '23 at 14:55

1 Answer


Since you know that your errors are normal, another approach would be to use the delta method to obtain the limiting distribution of $g(\hat{a}, \hat{b})$ and construct a confidence interval for $g(a, b)$ from it.

I'm going to assume that "normal errors" means that $$ \sqrt{n}((\hat{a}, \hat{b}) - (a, b)) \stackrel{\mathrm{d}}{\rightarrow} \mathcal{N}(0, \Sigma) $$ for some covariance matrix $\Sigma$ that you can estimate. This is the standard asymptotic behaviour of the least-squares/maximum-likelihood estimator in models of this kind under the usual regularity conditions.

Now, the delta method implies that $$ \sqrt{n}(g(\hat{a}, \hat{b}) - g(a, b)) \stackrel{\mathrm{d}}{\rightarrow} \mathcal{N}\bigl(0, \nabla g(a, b)^{\mathrm{T}} \Sigma \nabla g(a, b)\bigr), $$ where, for this particular $g$, $$ \nabla g(a, b) = \begin{pmatrix} -c/a^2 \\ -1/b \end{pmatrix} g(a, b). $$ This yields the standard error $$ \mathrm{se}(g(\hat{a}, \hat{b})) = g(\hat{a}, \hat{b}) \sqrt{(c/\hat{a}^2,\ 1/\hat{b})\, (\hat{\Sigma}/n) \begin{pmatrix} c/\hat{a}^2 \\ 1/\hat{b} \end{pmatrix}}, $$ and hence an approximate 95% confidence interval $$ \mathrm{CI}_{0.95} = g(\hat{a}, \hat{b}) \biggl[1 \pm 1.96 \sqrt{(c/\hat{a}^2,\ 1/\hat{b})\, (\hat{\Sigma}/n) \begin{pmatrix} c/\hat{a}^2 \\ 1/\hat{b} \end{pmatrix}} \biggr]. $$
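For completeness, a minimal numerical sketch of this computation; here `theta_hat` holds the point estimates and `Sigma_hat` is the estimated covariance matrix of $(\hat{a}, \hat{b})$ itself (i.e. $\hat{\Sigma}/n$ in the notation above; the `pcov` matrix returned by `scipy.optimize.curve_fit` is on this scale), and the numbers at the bottom are purely illustrative:

```python
import numpy as np
from scipy.stats import norm

def ed_ci_delta(theta_hat, Sigma_hat, c, level=0.95):
    """Delta-method confidence interval for g(a, b) = exp(c/a - log b).

    theta_hat : (a_hat, b_hat) point estimates
    Sigma_hat : 2x2 estimated covariance matrix of (a_hat, b_hat)
    """
    a, b = theta_hat
    g_hat = np.exp(c / a - np.log(b))
    grad = np.array([-c / a**2, -1.0 / b]) * g_hat   # gradient of g at the estimate
    se = np.sqrt(grad @ Sigma_hat @ grad)            # delta-method standard error
    z = norm.ppf(0.5 + level / 2)                    # 1.96 for level = 0.95
    return g_hat - z * se, g_hat + z * se

# purely illustrative numbers
theta_hat = np.array([2.1, 1.4])
Sigma_hat = np.array([[0.040, 0.010],
                      [0.010, 0.020]])
print(ed_ci_delta(theta_hat, Sigma_hat, c=np.log(9.0)))
```

Since $ED_c$ is positive, applying the delta method to $\log g(\hat{a}, \hat{b})$ instead and exponentiating the endpoints may give an interval with better small-sample behaviour.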