Given an existing regression curve, how do I properly account for the known variance of the dependent variable when back-calculating the (nominally) independent variable? If I have an observation $Y_{new}=1.09$ with variance $\sigma_Y^2=0.012$, how should I incorporate that information into my final answer for $X_{new}$?
I can build a (very simple) regression model with two vectors. Using R notation:
x <- c(8, 10, 50, 200, 350, 500, 1000, 2000)
y <- c(0.012, 0.016, 0.078, 0.333, 0.583, 0.799, 1.643, 3.002)
simple.lm <- lm(y ~ x, data = data.frame(x, y))
Any new value of X can then be back-calculated from a value of Y by inverting the linear equation. Easy enough.
A <- coef(simple.lm)[1]   # intercept
B <- coef(simple.lm)[2]   # slope
predict_X <- (y - A) / B  # back-calculated X for any vector of Y values
Y_new <- 1.09
X_new <- (Y_new - A) / B
But that doesn't deliver the prediction interval for $X_{new}$ from that regression curve. It also assumes I either don't know, or don't care to include, the variability attached to my observation of Y. When I do have $\sigma_Y^2$, I would like to carry it forward into the reported variance of X, from which I can calculate the prediction interval for X. This is basically the inverse of my prior question, but I wanted to lay it out, and answer it, because this is the form of the problem more likely to be searched for.
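To make the question concrete, the kind of thing I have in mind is a first-order (delta-method) approximation in which the known $\sigma_Y^2$ takes the place of the usual replicate term $s^2/m$ in the classical calibration interval. This is only my own sketch of one candidate approach, continuing the code above, and I'm not sure it is the proper way to do it:

var_Y <- 0.012                          # known variance of the new Y observation
s2    <- summary(simple.lm)$sigma^2     # residual variance of the fit
n     <- length(x)
Sxx   <- sum((x - mean(x))^2)

# first-order guess: Y-direction variance at X_new (measurement variance of
# Y_new plus the fitted line's own uncertainty there), scaled back to X by 1/B^2
var_X <- (var_Y + s2 * (1 / n + (X_new - mean(x))^2 / Sxx)) / B^2

# approximate 95% prediction interval for X_new
X_new + c(-1, 1) * qt(0.975, df = n - 2) * sqrt(var_X)

If this Wald-style approximation is the wrong way to go, or if a Fieller-type inversion of the prediction interval in Y is the better way to fold in $\sigma_Y^2$, that is exactly what I'd like the answer to spell out.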