In this answer, @whuber outlines the conditional probabilities for correlated binary random variables. While experimenting with inference for sequences of binary random variables, I have found that for some values of the coin's bias ($q$) and autocorrelation ($\rho$), these conditional probabilities fail to lie in $[0, 1]$.
Here is a simple example. The probability of observing a 1 on the next flip, given that the last flip was also a 1, is
$$ P(1 \vert 1) = q + \rho(1-q) $$
I've gone through the exercise of plotting this probability for various values of $q$ and $\rho$.
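For reference, here is a small sketch of the kind of check I mean; the grid of values is arbitrary and just for illustration:

```python
import numpy as np

# Evaluate P(1|1) = q + rho * (1 - q) over a grid of bias and correlation values
# (the grid spacing here is arbitrary, purely for illustration).
q = np.linspace(0.01, 0.99, 99)
rho = np.linspace(-1.0, 1.0, 201)
Q, R = np.meshgrid(q, rho)

p11 = Q + R * (1 - Q)

# Count the (q, rho) combinations where the "probability" is negative
negative = p11 < 0
print(f"{negative.mean():.1%} of the grid gives a negative P(1|1)")
```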
For some values of $\rho$ and $q$ the conditional probability is negative, which makes me suspect it is simply not well defined there. The simplest example I could think of is this: suppose the coin's bias is $q=0$ and the correlation is $\rho=-1$. The perfect negative correlation would force the next flip to be a 1, but with a bias of 0 that is impossible. This is an extreme case in which the conditional probability is undefined precisely because the conditioning event has probability $q=0$, i.e. the denominator in Bayes' rule is zero; but as the plot shows, the conditional probability comes out negative even when $q>0$.
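Plugging numbers into the formula above makes both points concrete:
$$ P(1 \vert 1)\big|_{q=0,\,\rho=-1} = 0 + (-1)(1-0) = -1, \qquad P(1 \vert 1)\big|_{q=0.1,\,\rho=-0.5} = 0.1 + (-0.5)(0.9) = -0.35. $$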
I was hoping someone could explain why these conditional probabilities turn out negative, and more generally what bounds on the bias and correlation are needed to guarantee that this does not happen.
