I think I figured out the answer myself after doing a bit of reading, so I thought I would post it here. It looks like I got a little confused.
As per my post,
$$O = \frac{P(X)}{1-P(X)}.$$
I forgot to take into account that $P(X)$ is itself the probability given by the logistic function:
$$P_\beta(X) = \frac{e^{\beta^TX}}{1 + e^{\beta^TX} }.$$
Substituting this into the equation for $O$, we get
$$O = \frac{\frac{e^{\beta^TX}}{1 + e^{\beta^TX} }}{1-\frac{e^{\beta^TX}}{1 + e^{\beta^TX} }} = e^{\beta^TX}.$$
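This identity is easy to check numerically. A minimal sketch in Python, with a made-up coefficient vector $\beta$ and feature vector $X$ (the leading $1$ in `X` accounts for the intercept $\beta_0$):

```python
import numpy as np

# Hypothetical values, chosen purely for illustration.
beta = np.array([-1.5, 0.8, 0.3])  # beta_0 (intercept), beta_1, beta_2
X = np.array([1.0, 2.0, -1.0])     # leading 1 pairs with the intercept

z = beta @ X                       # beta^T X
p = np.exp(z) / (1 + np.exp(z))    # logistic function P_beta(X)

odds = p / (1 - p)                 # O = P(X) / (1 - P(X))
print(odds, np.exp(z))             # the two agree: O = e^{beta^T X}
```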
So $e^{\beta^TX}$ is nothing but the odds of the input feature vector $X$ belonging to the positive class. With one further algebraic manipulation we can obtain a linear form; the reason for doing this is to be able to interpret the coefficient vector $\beta$ in a precise manner. That manipulation is simply taking the natural log of the latest form of $O$, i.e. of $e^{\beta^TX}$:
$$\ln(O) = \ln \left(e^{\beta^TX}\right) =\beta^TX $$
The expanded form of $\beta^TX$ is:
$$\ln(O) = \beta_0+\beta_1x_1+\beta_2x_2+\cdots+\beta_nx_n$$
So the real use of this, as I have understood it, is to be able to interpret the coefficients easily while keeping a linear form, just as in multiple linear regression. Looking at the expanded form of $\ln(O)$, we can say that a unit increase in $x_i$, holding the other features fixed, increases the log odds by $\beta_i$, or equivalently multiplies the odds by $e^{\beta_i}$.
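The interpretation above can also be verified numerically. A small sketch (again with made-up values for $\beta$ and $X$): bumping $x_1$ by one unit changes $\ln(O)$ by $\beta_1$, and multiplies the odds by $e^{\beta_1}$.

```python
import numpy as np

beta = np.array([-1.5, 0.8, 0.3])  # hypothetical coefficients (intercept first)
X = np.array([1.0, 2.0, -1.0])     # feature vector; leading 1 for the intercept

def log_odds(beta, X):
    # ln(O) = beta^T X
    return beta @ X

X_bumped = X.copy()
X_bumped[1] += 1.0  # unit increase in x_1 (position 1 pairs with beta_1)

# ln(O) increases by beta_1 = 0.8 (up to floating point) ...
print(log_odds(beta, X_bumped) - log_odds(beta, X))
# ... equivalently, the odds are multiplied by e^{beta_1}.
print(np.exp(log_odds(beta, X_bumped)) / np.exp(log_odds(beta, X)))
```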