In the Bayesian setting, we update the prior distribution $\pi(\theta)$ to the posterior distribution $\pi(\theta | x)$ given data $x$. So the data $x$ defines an operator $T_x$ on the set of distributions on the parameter space:
$$T_x: Dist(\Theta) \to Dist(\Theta)$$
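As a minimal sketch of what $T_x$ does, here is Bayes' rule written as an operator on distributions over a discretized parameter grid. The Bernoulli model, the grid, and the helper names (`make_update_operator`, `bernoulli_likelihood`) are illustrative assumptions, not part of the question:

```python
import numpy as np

# A sketch of T_x on a discretized parameter space: given a prior over a
# grid of theta values, return the posterior after observing data x.
def make_update_operator(likelihood, x):
    def T_x(prior):
        unnormalized = prior * likelihood(x)      # pi(theta) * p(x | theta)
        return unnormalized / unnormalized.sum()  # renormalize to a distribution
    return T_x

# Illustrative Bernoulli model: theta = success probability on a grid,
# x = (number of heads, number of tails).
theta = np.linspace(0.01, 0.99, 99)

def bernoulli_likelihood(x):
    heads, tails = x
    return theta**heads * (1 - theta)**tails

T_x = make_update_operator(bernoulli_likelihood, x=(7, 3))
prior = np.full_like(theta, 1.0 / theta.size)     # uniform prior on the grid
posterior = T_x(prior)                            # one Bayesian update
```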
In many cases, people argue about which prior distribution to take. One popular option is the Jeffreys prior (proportional to the square root of the determinant of the Fisher information) because it is invariant under reparametrization.
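For a concrete instance, take the single-observation Bernoulli model $x \sim \mathrm{Bernoulli}(\theta)$ as an example: the Fisher information is $I(\theta) = \frac{1}{\theta(1-\theta)}$, so
$$\pi_J(\theta) \propto \sqrt{I(\theta)} = \theta^{-1/2}(1-\theta)^{-1/2},$$
i.e. a $\mathrm{Beta}(1/2, 1/2)$ density.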
Since what actually matters is the posterior distribution, I wonder why we don't take a fixed point of $T_x$, i.e. a $\pi_x(\theta)$ such that $$\pi_x(\theta | x) = \pi_x(\theta),$$ as the posterior once $x$ is observed. A heuristic way to construct such a $\pi_x$ is to take $\lim_{n \to \infty} T_x^n(\pi)$ for any $\pi$, if the limit exists.
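To see what repeatedly applying $T_x$ does, here is a small numerical sketch on a discrete grid (the Bernoulli data, grid, and variable names are again illustrative assumptions). Since each application multiplies in the likelihood once more and renormalizes, we have $T_x^n(\pi)(\theta) \propto \pi(\theta)\, p(x | \theta)^n$, so in this toy discrete case the iterates pile up on the likelihood maximizer:

```python
import numpy as np

# Iterating T_x with the same data x: each step multiplies in the likelihood
# again and renormalizes, so T_x^n(pi) is proportional to pi * p(x|theta)^n.
theta = np.linspace(0.01, 0.99, 99)
heads, tails = 7, 3
likelihood = theta**heads * (1 - theta)**tails

pi = np.full_like(theta, 1.0 / theta.size)    # start from a uniform prior
for _ in range(2000):
    pi = pi * likelihood
    pi = pi / pi.sum()                        # one application of T_x

print(theta[np.argmax(pi)])   # ~0.70, the likelihood maximizer heads/(heads+tails)
print(pi.max())               # close to 1: the iterates approach a point mass
```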
Questions
- Do people take such fixed points as the posterior distribution and make inferences from them?
- Does such a limit always exist? If not, is there always some $\pi$ for which the limit exists?