Please read the problem to the end. It may appear at first that this problem has been answered in earlier posts, but it has not; I have read all the related posts.
Problem: Suppose I have two data sets (for two treatments), G and A. I run two logistic regressions for G and A: \begin{eqnarray*} \log \left[ \frac{\Pr (R)}{1-\Pr (R)}\right] _{G} &=&\beta _{0G}+\beta _{1G}X+\beta _{2G}Y \\ \log \left[ \frac{\Pr (R)}{1-\Pr (R)}\right] _{A} &=&\beta _{0A}+\beta _{1A}X+\beta _{2A}Y. \end{eqnarray*}
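For concreteness, this is how I produce the two fits (a minimal sketch in Python/statsmodels; the data frames `dat_G` and `dat_A` and the column names `R`, `X`, `Y` are placeholders for my actual data):

```python
import statsmodels.api as sm

def fit_logit(df):
    """Fit logit[Pr(R)] = b0 + b1*X + b2*Y and return the results object."""
    design = sm.add_constant(df[["X", "Y"]])  # columns: const, X, Y
    return sm.Logit(df["R"], design).fit(disp=0)

fit_G = fit_logit(dat_G)  # dat_G, dat_A: one data frame per treatment
fit_A = fit_logit(dat_A)
# fit_G.params holds (b0_G, b1_G, b2_G); fit_G.cov_params() their estimated covariance matrix
```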
Based on the estimates of logistic regressions, I have two lines: \begin{eqnarray*} x_{G}^{\ast } &=&-\frac{\hat{\beta}_{0G}}{\hat{\beta}_{1G}}-\frac{\hat{\beta}_{2G}}{\hat{\beta}_{1G}}Y \\ x_{A}^{\ast } &=&-\frac{\hat{\beta}_{0A}}{\hat{\beta}_{1A}}-\frac{\hat{\beta}_{2A}}{\hat{\beta}_{1A}}Y. \end{eqnarray*}
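(These are, presumably, the $\Pr (R)=\tfrac{1}{2}$ boundaries: setting the fitted log-odds to zero and solving for $X$ gives \begin{eqnarray*} \hat{\beta}_{0G}+\hat{\beta}_{1G}x_{G}^{\ast }+\hat{\beta}_{2G}Y &=&0 \\ \Rightarrow \quad x_{G}^{\ast } &=&-\frac{\hat{\beta}_{0G}}{\hat{\beta}_{1G}}-\frac{\hat{\beta}_{2G}}{\hat{\beta}_{1G}}Y, \end{eqnarray*} and analogously for A.)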
QUESTION: How do I test that $|\frac{\hat{\beta}_{2G}}{\hat{\beta}_{1G}}|>|\frac{\hat{\beta}_{2A}}{\hat{\beta}_{1A}}|$, i.e., that the slope of $x_{G}^{\ast }$ is steeper (larger in absolute value) than the slope of $x_{A}^{\ast }$?
Progress so far (Jan 26, 2016): I came across a document, "Ratios: A short guide to confidence limits and proper use" by Franz (2007), which describes methods such as Fieller, Taylor (or delta), bootstrap, and regression. However, all these methods are set up for $\rho =\frac{E[Z]}{E[W]}$, where $Z$ and $W$ are random variables, and the test statistic is built from a sample of $N$ paired measurements $(z_{i},w_{i})$, $i=1,2,\ldots ,N$. Applied to my problem, $Z=\hat{\beta}_{2}$ and $W=\hat{\beta}_{1}$, where $\hat{\beta}_{1}\sim N(\beta _{1},\mathrm{s.e.}(\hat{\beta}_{1})^{2})$ and $\hat{\beta}_{2}\sim N(\beta _{2},\mathrm{s.e.}(\hat{\beta}_{2})^{2})$ (asymptotically; I have a large number of data points). However, I do not have paired measurements such as $\left( \hat{\beta}_{11},\hat{\beta}_{21}\right) ,\ldots ,\left( \hat{\beta}_{1N},\hat{\beta}_{2N}\right)$. I am stuck here and will appreciate any help.
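To make concrete where I am stuck, here is the kind of calculation I have been sketching (a delta-method sketch in Python/statsmodels, assuming I may replace the paired measurements with the coefficient covariance matrix that each fit reports, and that the two treatment samples are independent; `fit_G` and `fit_A` are the hypothetical fits from above):

```python
import numpy as np
from scipy import stats

def slope_ratio_and_var(fit):
    """Delta-method estimate of r = b2/b1 and its approximate variance
    from one logistic fit; gradient of r w.r.t. (b1, b2) is (-b2/b1**2, 1/b1)."""
    b = np.asarray(fit.params)        # (b0, b1, b2), in the order const, X, Y
    V = np.asarray(fit.cov_params())  # 3x3 estimated covariance of the coefficients
    b1, b2 = b[1], b[2]
    r = b2 / b1
    g = np.array([-b2 / b1**2, 1.0 / b1])  # gradient w.r.t. (b1, b2)
    var_r = g @ V[1:, 1:] @ g              # delta-method variance of r
    return r, var_r

r_G, v_G = slope_ratio_and_var(fit_G)
r_A, v_A = slope_ratio_and_var(fit_A)

# Crude comparison of |r_G| and |r_A|: treats |r| as locally linear (fine away
# from r = 0) and adds the variances because the two fits use separate samples.
z = (abs(r_G) - abs(r_A)) / np.sqrt(v_G + v_A)
p_one_sided = 1 - stats.norm.cdf(z)  # H1: |r_G| > |r_A|
print(r_G, r_A, z, p_one_sided)
```

What I am unsure about is whether feeding each fit's covariance matrix into the delta method like this is legitimate, or whether bootstrapping the raw data (refitting both regressions on each resample and collecting the two ratios) is the safer route.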