We usually use the exact same symbol to denote the obtained estimate of a parameter (a number) and the estimator we used, which is a random variable (a function). To distinguish them, I will use the following notation:
True values of unknown parameters: $\alpha,\beta$
Obtained estimates from a specific sample: $\hat \alpha, \hat \beta$
Estimators used: $a, b$.
We are interested in the variance (and then the standard error) of a function of the estimators, $h[a, b]$. We do indeed say "standard error of the estimate," but strictly speaking this is wrong: estimates are fixed numbers; they do not have a variance or a standard deviation.
We can approximate $h[a, b]$ by a first-order Taylor expansion around the obtained estimates:
$$h[a, b] \approx h[\hat \alpha, \hat \beta]\; + \;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot (a - \hat \alpha)\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot (b - \hat \beta)$$
Rearranging,
$$h[a, b] \approx \Big[ h[\hat \alpha, \hat \beta]\; - \;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot \hat \alpha\;-\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot \hat \beta\Big]$$
$$+\;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot a\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot b$$
Why the rearrangement? Because the terms in the big brackets are all fixed numbers. Fixed numbers do not have a variance, and, when they enter additively, they do not affect the variance of the random terms they are added to. So
$${\rm Var} \left(h[a, b]\right) \approx {\rm Var} \left(\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot a\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot b\right)$$
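This step can be sanity-checked numerically. The sketch below uses hypothetical simulated draws standing in for the sampling distributions of $a$ and $b$ (the names and values are illustrative only): an additive constant leaves the sample variance unchanged, and the variance of the remaining linear combination matches the usual formula.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical, correlated draws standing in for the estimators a and b
a_draws = rng.normal(0.5, 0.1, size=10_000)
b_draws = 0.3 * a_draws + rng.normal(0.0, 0.2, size=10_000)

c, w1, w2 = 7.0, 1.3, -0.8   # arbitrary constant and weights

# the additive constant c has no effect on the variance ...
lhs = np.var(c + w1 * a_draws + w2 * b_draws)

# ... and the linear-combination formula reproduces it exactly
# (ddof=0 so np.cov uses the same normalization as np.var)
cov = np.cov(a_draws, b_draws, ddof=0)
rhs = w1**2 * cov[0, 0] + w2**2 * cov[1, 1] + 2 * w1 * w2 * cov[0, 1]

print(np.isclose(lhs, rhs))  # True
```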
In our case
$$h[a,b] = \frac {b}{1-a} \implies \frac {\partial h[a, b]}{\partial a} = \frac {b}{(1-a)^2} \implies \frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}} = \frac {\hat \beta}{(1-\hat \alpha)^2}$$
and
$$\frac {\partial h[a, b]}{\partial b} = \frac {1}{(1-a)} \implies \frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}} = \frac {1}{(1-\hat \alpha)}$$
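The two partial derivatives can be verified with a quick central finite-difference check at some illustrative (hypothetical) estimate values:

```python
# finite-difference check of the partials of h(a, b) = b / (1 - a)
def h(a, b):
    return b / (1 - a)

a_hat, b_hat = 0.5, 0.3   # hypothetical estimates, for illustration only
eps = 1e-6

num_da = (h(a_hat + eps, b_hat) - h(a_hat - eps, b_hat)) / (2 * eps)
num_db = (h(a_hat, b_hat + eps) - h(a_hat, b_hat - eps)) / (2 * eps)

ana_da = b_hat / (1 - a_hat) ** 2   # beta-hat / (1 - alpha-hat)^2
ana_db = 1 / (1 - a_hat)            # 1 / (1 - alpha-hat)

print(abs(num_da - ana_da) < 1e-6, abs(num_db - ana_db) < 1e-6)  # True True
```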
Substituting, and using the standard formula for the variance of the sum of two random variables,
$${\rm Var} \left(\frac {b}{1-a}\right) \approx \left(\frac {\hat \beta}{(1-\hat \alpha)^2}\right)^2\cdot {\rm Var}(a)\;+\;\left(\frac {1}{(1-\hat \alpha)}\right)^2\cdot {\rm Var}(b) \\+\; 2\frac {\hat \beta}{(1-\hat \alpha)^2}\frac {1}{(1-\hat \alpha)}{\rm Cov}(a,b)$$
or a bit more compactly
$${\rm Var} \left(\frac {b}{1-a}\right) \approx \frac {\hat \beta^2{\rm Var}(a)}{(1-\hat \alpha)^4}\;+\;\frac {{\rm Var}(b)}{(1-\hat \alpha)^2} \;+\; \frac {2\hat \beta{\rm Cov}(a,b)}{(1-\hat \alpha)^3}$$
The variances and the covariance on the right-hand side are unknown, but you have estimates of them: the squared "standard errors," and the covariance from the estimated covariance matrix obtained from the model. You plug these estimates into the last expression, together with the coefficient estimates themselves, and then take the square root of the whole thing to arrive at an estimate of the magnitude you are interested in. Note that there is more than one source of approximation error here.
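The plug-in recipe fits in a few lines. This is a minimal sketch; the function name and the numbers fed to it are made up for illustration, and in practice you would read $\hat\alpha$, $\hat\beta$, the squared standard errors, and the covariance off your model's output.

```python
import math

def delta_se(alpha_hat, beta_hat, var_a, var_b, cov_ab):
    """Delta-method standard error of b/(1-a): plug the estimates and the
    entries of the estimated covariance matrix into the variance formula,
    then take the square root."""
    g_a = beta_hat / (1 - alpha_hat) ** 2   # dh/da evaluated at the estimates
    g_b = 1 / (1 - alpha_hat)               # dh/db evaluated at the estimates
    var = g_a**2 * var_a + g_b**2 * var_b + 2 * g_a * g_b * cov_ab
    return math.sqrt(var)

# hypothetical numbers: alpha-hat = 0.5, beta-hat = 0.3,
# SE(a) = 0.08, SE(b) = 0.05, Cov(a, b) = -0.001
print(round(delta_se(0.5, 0.3, 0.08**2, 0.05**2, -0.001), 4))  # 0.1201
```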