Consider the following interpretation of the logistic regression model.
A person repeatedly chooses between two options that differ in the levels of the same characteristics (e.g. car characteristics). In some cases the person will choose option 1, in others option 2.
The standard formula of the logistic regression,
$$P(Y=1|X=x_{i}) = \frac{e^{\beta x_{i}}}{1+e^{\beta x_{i}}},$$
could be interpreted such that the person evaluates options according to a utility function $U(x_{i})$, assigning utility weights $\beta$ to the characteristics of the options and choosing the option that maximizes utility. In a logistic regression we would thus estimate the implicit values the person attaches to the characteristics $x_{i}$.
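To make this reading concrete, here is a minimal simulation sketch (all data and the weight vector `beta_true` are hypothetical): each option's utility is $\beta x$ plus Gumbel noise, and since the difference of two independent Gumbel errors is logistic, the resulting choices follow exactly the logit probabilities above, so fitting a logistic regression recovers the implicit utility weights.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
beta_true = np.array([1.5, -0.8])      # hypothetical utility weights
x = rng.normal(size=(1000, 2))         # difference in option characteristics

# Utility difference U(option 1) - U(option 2); the Gumbel errors
# make the choice probabilities logistic in x @ beta_true
u_diff = x @ beta_true + rng.gumbel(size=1000) - rng.gumbel(size=1000)
y = (u_diff > 0).astype(int)           # person picks the higher-utility option

# Logistic regression estimates the implicit utility weights
model = sm.Logit(y, x).fit(disp=0)
print(model.params)                    # should be close to beta_true
```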
Now suppose the setting is such that the person does not make a discrete choice. The person could, for example, say that both options are equally attractive, that option 1 is much more attractive than option 2, or that option 2 is only slightly more attractive than option 1, and so on.
We would thus fit a logit-type specification whose dependent variable is not 0 or 1 but lies in the interval $(0,1)$.
Is it mathematically possible to do that? I believe it is.* If you look at the maximum-likelihood procedure for estimating a logistic regression, nowhere is $y$ required to be dichotomous, nor is it in fact required to lie in the interval $(0,1)$. You can fit regression models by maximum likelihood for any objective function you like; the question is whether the result makes sense. It would no longer be logistic regression in the strict sense, but something related.
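A short sketch of that point, with hypothetical data: the Bernoulli log-likelihood used in logistic regression is well defined for any $y$ in $(0,1)$, so the same ML machinery runs unchanged on graded responses. Maximizing it with fractional $y$ is what is sometimes called the fractional logit quasi-ML estimator (Papke & Wooldridge, 1996).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
beta_true = np.array([1.5, -0.8])      # hypothetical utility weights
x = rng.normal(size=(1000, 2))

# Graded preference in (0, 1): y near 0.5 means the two options are
# valued about equally, y near 1 means option 1 is strongly preferred
p = 1 / (1 + np.exp(-(x @ beta_true)))
y = np.clip(p + rng.normal(scale=0.05, size=1000), 1e-3, 1 - 1e-3)

def neg_loglik(beta):
    eta = x @ beta
    # Same objective as logistic regression; nothing here requires y in {0, 1}
    return -np.sum(y * eta - np.log1p(np.exp(eta)))

result = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(result.x)                        # should be close to beta_true
```

Nothing in the optimization changes when $y$ moves off the endpoints; only the interpretation of the fitted probabilities does.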
So the question remains: does this make any sense? Is it valid to think of a binary choice problem the way I have portrayed it, and to exploit additional information on the relative weighting of preferences (if available) by assuming $y \in (0,1)$? Would that make better use of the available information, since in the case of $y \approx 0.5$ we know that the valuation of the two options is similar?