I have come across this conditional expansion a few times, and I can't seem to make sense of it:
$$p(z|y) = \int{p(z|f)p(f|y)df}$$
I would go about it like this:
\begin{align} \require{cancel} p(z|y) & = \frac{p(z,y)}{p(y)} \\ & = \frac{\int{p(z,f,y)df}}{p(y)} \\ & = \frac{\int{\cancel{p(y)}p(f|y)p(z|f,y)df}}{\cancel{p(y)}} \\ & = \int{p(z|f,\color{red}{y})p(f|y)df} \neq \int{p(z|f)p(f|y)df} \end{align}
How is it that during the expansion we can drop the conditional on $y$ from $p(z|f,y)$? I've seen this in a lot of papers on variational inference, and it appears on the Wikipedia page for Bayesian inference: $\hspace{1em} p(\tilde{x}|X,\alpha) = \int{p(\tilde{x}|\theta)p(\theta|X,\alpha)d\theta}.\hspace{1em}$ Why isn't the first factor in the integral $p(\tilde{x}|X,\alpha,\theta)$? I feel like I am missing something fundamental about conditioning that allows shuffling the conditionals around like this.
In this related question, the conditioned-on variable remains present in all subsequent factors; why doesn't that happen in the cases above?
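To make my confusion concrete, I tried a small numeric check (my own toy construction, not from any of the papers): I build discrete distributions where the joint factorizes as $p(y)\,p(f|y)\,p(z|f)$, i.e. $z$ depends on $y$ only through $f$, and then compare $p(z|f,y)$ against $p(z|f)$ and the two expansions of $p(z|y)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete distributions: y -> f -> z,
# i.e. the joint factorizes as p(y) p(f|y) p(z|f).
ny, nf, nz = 2, 3, 4
p_y = rng.dirichlet(np.ones(ny))                   # p(y), shape (ny,)
p_f_given_y = rng.dirichlet(np.ones(nf), size=ny)  # p(f|y), shape (ny, nf)
p_z_given_f = rng.dirichlet(np.ones(nz), size=nf)  # p(z|f), shape (nf, nz)

# Full joint p(y, f, z)
joint = p_y[:, None, None] * p_f_given_y[:, :, None] * p_z_given_f[None, :, :]

# p(z | f, y) recovered from the joint by normalizing over z
p_z_given_fy = joint / joint.sum(axis=2, keepdims=True)

# Under this factorization, p(z|f,y) equals p(z|f) for every y ...
for y in range(ny):
    assert np.allclose(p_z_given_fy[y], p_z_given_f)

# ... and therefore the two expansions of p(z|y) agree:
p_z_given_y_full = np.einsum('yfz,yf->yz', p_z_given_fy, p_f_given_y)  # int p(z|f,y) p(f|y) df
p_z_given_y_drop = np.einsum('fz,yf->yz', p_z_given_f, p_f_given_y)    # int p(z|f)   p(f|y) df
assert np.allclose(p_z_given_y_full, p_z_given_y_drop)
```

So the equality clearly holds when the joint happens to factorize this way, but I don't see where that factorization is being assumed in the general derivations above.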