
Suppose that people, indexed by $j$, must choose between options $i$ according to

$$\max_i \; (a_i + \epsilon_{ij})$$

where $\epsilon_{ij}$ is an Extreme Value Type I random variable (the error term) and $a_i$ is a value between 0 and 1.

Now define $S_i \in \{0,1\}$ as an indicator that option $i$ is chosen by at least one person, and $\pi_i = P(S_i = 1)$ as the probability of this event.

If the number of people and the number of options are both sufficiently large, is $\pi_i = a_i$, so that this probability recovers the latent value?
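
A quick Monte Carlo sketch of this setup makes it easy to compare the simulated $\pi_i$ with $a_i$ (the sizes and the uniform draw of the $a_i$ below are illustrative assumptions, not part of the question):

```python
import numpy as np

# Illustrative sizes only (not from the question): I options, J people, R replications.
rng = np.random.default_rng(0)
I, J, R = 20, 200, 500
a = rng.uniform(0.0, 1.0, size=I)   # latent values a_i in [0, 1]

S = np.zeros((R, I))
for r in range(R):
    # Each person j draws Gumbel (Extreme Value Type I) errors eps_{ij}
    # and chooses argmax_i (a_i + eps_{ij}).
    eps = rng.gumbel(loc=0.0, scale=1.0, size=(J, I))
    chosen = np.argmax(a[None, :] + eps, axis=1)
    S[r, np.unique(chosen)] = 1.0   # S_i = 1: option i chosen by at least one person

pi_hat = S.mean(axis=0)             # Monte Carlo estimate of pi_i = P(S_i = 1)

# For reference: with i.i.d. Gumbel errors a single person picks option i with
# probability p_i = exp(a_i) / sum_k exp(a_k), so P(S_i = 1) = 1 - (1 - p_i)^J.
p = np.exp(a) / np.exp(a).sum()
print(np.c_[a[:5], pi_hat[:5], 1 - (1 - p[:5]) ** J])
```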

JorgeG
  • Binary case here if you set $x=1$. Similar derivations are possible for multinomial models. – dimitriy Feb 27 '18 at 23:32
  • Thanks! This helps make a lot of progress in the binary case. As I understand it, then, in the binary case I can recover $a_i$ up to scale, such that $c \times a_i = \operatorname{invlogit}(\pi_i)$ for some scale parameter $c$. Any idea what the scale parameter would be, or how to think about it? – JorgeG Feb 28 '18 at 18:51
  • The scale parameter is the standard deviation of the difference in the errors. Usually these models are cast in terms of utility, so this normalization does not matter too much (as long as the variance does not vary across people). – dimitriy Feb 28 '18 at 18:59
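
To spell out the binary case the comments point to (a sketch under the standard i.i.d. Gumbel assumption; the scale $\beta$ is notation introduced here, not from the thread): the difference of two i.i.d. Gumbel errors with scale $\beta$ is logistic with scale $\beta$, so

$$ P(a_1 + \epsilon_{1j} > a_2 + \epsilon_{2j}) = \frac{1}{1 + e^{-(a_1 - a_2)/\beta}}, \qquad \operatorname{sd}(\epsilon_{2j} - \epsilon_{1j}) = \frac{\pi \beta}{\sqrt{3}}, $$

so only the ratio $(a_1 - a_2)/\beta$ is identified, which is the normalization described in the last comment: the values are pinned down only relative to the spread of the error difference.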

0 Answers