
I am taking a course in logistic regression, and currently my class is about to finish our discussion about simple logistic regression. My professor said that the following statement is correct:

For one unit increase in $x$, the odds(event) is increased (decreased) by the factor $\exp(\beta_1)/(1-\exp(\beta_1))$ when $\beta_1$ is positive (negative).

I understand all of this except for the $(1-\exp(\beta_1))$ part. Why wouldn't the $\rm odds(event)$ just decrease by $\exp(\beta_1)$?

Where $\rm odds(event) = P(event)/(1-P(event))$ and $x$ is a continuous predictor.

3 Answers


For inference in logistic regression, it is easier to think in terms of log odds instead of odds. A simple logistic regression model is a generalized linear model of the form $$ \newcommand{\logit}{\rm logit} \newcommand{\odds}{\rm odds} \newcommand{\expit}{\rm expit} \logit(\pi_i) = \beta_0 + \beta_1X_{i1}, $$ where $\logit(\pi_i) = \log(\frac{\pi_i}{1-\pi_i}) = \log(\odds(\pi_i))$. Notice that the right-hand side has exactly the same form as in linear regression; the difference is that the response $Y_i$ is Bernoulli-distributed and the model is linear on the logit (log-odds) scale. This gives the more intuitive interpretation that for every one-unit increase in $X_{i1}$, the expected log odds in favor of $Y_i = 1$ increase by $\beta_1$.
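To make this concrete, here is a minimal sketch in Python (hypothetical simulated data with arbitrarily chosen coefficients; assumes numpy and statsmodels are installed) showing that the fitted $\hat\beta_1$ lives on the log-odds scale:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
beta0, beta1 = -0.5, 0.8                    # "true" coefficients, chosen arbitrarily
p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))  # expit of the linear predictor
y = rng.binomial(1, p)                      # Bernoulli response

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
b0_hat, b1_hat = fit.params
print(b0_hat, b1_hat)                       # estimates close to -0.5 and 0.8

# The linear predictor is the log odds, so a one-unit increase in x adds
# b1_hat to the expected log odds, no matter where you start:
logit = lambda x0: b0_hat + b1_hat * x0
print(logit(1.3) - logit(0.3))              # equals b1_hat
```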

If you want to interpret the model in terms of odds, you just have to exponentiate the logit, which is what you initially assumed. This gives you $\odds(\pi_i) = \exp(\beta_0 + \beta_1X_{i1})$, and not $\odds(\pi_i) = \frac{\exp(\beta_0 + \beta_1X_{i1})}{1 + \exp(\beta_0 + \beta_1X_{i1})}$ (that second expression is the probability, not the odds). The interpretation in this case is that for every one-unit increase in $X_{i1}$, the expected odds in favor of $Y_i = 1$ are multiplied by $\exp(\beta_1)$. I think either you or your professor mixed up odds and probability somewhere in the explanation.
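As a quick numeric check (with made-up coefficient values, used only for illustration), the ratio of the odds at $x+1$ to the odds at $x$ is always $\exp(\beta_1)$, regardless of where $x$ starts:

```python
import numpy as np

beta0, beta1 = -0.5, 0.8                # assumed values, for illustration only

def odds(x):
    # odds(pi) = exp(beta0 + beta1 * x) under the model above
    return np.exp(beta0 + beta1 * x)

for x0 in (-1.0, 0.0, 2.5):
    print(odds(x0 + 1) / odds(x0), np.exp(beta1))   # both ~2.2255 every time
```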

If you want to draw inference on probabilities, you need to invert the $\logit$ transformation by taking the $\expit$ of both sides, where $\expit(x) = \frac{\exp(x)}{1 + \exp(x)}$. In that case, we have $\pi_i = \frac{\exp(\beta_0 + \beta_1X_{i1})}{1 + \exp(\beta_0 + \beta_1X_{i1})}$ as our expected probability that $Y_i=1$. Because this relationship is no longer linear in the coefficients, it is difficult to draw inference directly on the probability scale; instead, we draw inference on the log-odds scale and then take the expit of the expected log odds to report results as probabilities.
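A short sketch of that back-transformation (again with made-up coefficient values):

```python
import numpy as np

def expit(z):
    # inverse of the logit: expit(z) = exp(z) / (1 + exp(z)) = 1 / (1 + exp(-z))
    return np.exp(z) / (1 + np.exp(z))

beta0, beta1 = -0.5, 0.8          # assumed values, for illustration only
x = 1.0
log_odds = beta0 + beta1 * x      # inference is done on this (linear) scale
pi = expit(log_odds)              # expected P(Y = 1 | x)
print(log_odds, pi)               # 0.3 and roughly 0.574
```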


If your professor said that, they were wrong. The estimated betas ($\hat\beta_1$) in logistic regression are on the scale of the linear predictor. That is, they are changes in log odds. Exponentiating them (i.e., $\exp(\hat\beta_1)$) converts them from additive changes in log odds to multiplicative changes in odds. In other words, you are right: the odds are multiplied by $\exp(\hat\beta_1)$, which decreases (increases) them when $\hat\beta_1$ is negative (positive). This follows from the definition of logarithms and exponentiation. Dividing the odds by $1+\rm odds$ (not $1-\rm odds$) gives you a predicted probability.
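A tiny numeric illustration of both points (the odds and the coefficient below are made-up values):

```python
import numpy as np

p = 0.75
odds = p / (1 - p)                    # 3.0
print(odds / (1 + odds))              # 0.75: odds / (1 + odds) recovers the probability

b1_hat = -0.4                         # assumed estimate, for illustration only
new_odds = odds * np.exp(b1_hat)      # odds after a one-unit increase in x
print(new_odds / (1 + new_odds))      # the corresponding new predicted probability
```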

In addition, your professor's expressions seem to be missing the intercept, $\hat\beta_0$, unless it was suppressed (a bad idea). That is, the predicted probability of 'success' ($Y=1$) at a particular point, $X=x_i$, is $\exp(\hat\beta_0 + \hat\beta_1x_i)/(1+\exp(\hat\beta_0 + \hat\beta_1x_i))$. The statement seems to be confused on multiple levels; that may be a bad sign.


The manner in which I asked this question may have been misleading:

Let $X$ be a dichotomous predictor with coefficient $\beta_1$.

Note that ${\rm odds}(X=1) = \exp(\beta_1)\cdot{\rm odds}(X=0)$.

If $\beta_1 < 0$, then the odds decrease by the factor $1 - \exp(\beta_1)$ (i.e., that proportion of the odds is lost).

Example: let $\beta_1 = -1$; then ${\rm odds}(X=1) = \exp(-1)\cdot{\rm odds}(X=0) \approx 0.368\cdot{\rm odds}(X=0)$.

So, in fact, we have $1 - e^{-1} = 1 - 0.368 = 0.632$, i.e. a 63.2% decrease in the odds.

This is what I meant. Sorry for any confusion.
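For completeness, the same arithmetic as a short Python check:

```python
import numpy as np

beta1 = -1.0
odds_ratio = np.exp(beta1)          # ~0.368, so Odds(X=1) = 0.368 * Odds(X=0)
print(1 - odds_ratio)               # ~0.632, i.e. a 63.2% decrease in the odds
```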