Basically, the title of the question says it all.
Quoting from Bishop's *Pattern Recognition and Machine Learning*:
> In both the Bayesian and frequentist paradigms, the likelihood function $p(D \mid w)$ plays a central role. However, the manner in which it is used is fundamentally different in the two approaches. In a frequentist setting, $w$ is considered to be a fixed parameter, whose value is determined by some form of 'estimator', and error bars on this estimate are obtained by considering the distribution of possible data sets $D$. By contrast, from the Bayesian viewpoint there is only a single data set $D$ (namely the one that is actually observed), and the uncertainty in the parameters is expressed through a probability distribution over $w$.
What does $p(D \mid w)$ mean if $D$ is fixed?
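To make the question concrete, here is a toy Bernoulli example (my own construction, not from the book): the data set $D$ is held fixed, yet $p(D \mid w)$ can still be evaluated, just as a function of the parameter $w$ rather than as a distribution over data sets.

```python
import numpy as np

# Fixed, observed data set D: e.g. the outcomes of 10 coin flips (1 = heads).
D = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

def likelihood(w, data):
    """Bernoulli likelihood p(D | w), evaluated for a fixed data set
    as a function of the parameter w."""
    return np.prod(w ** data * (1 - w) ** (1 - data))

# With D held fixed, p(D | w) traces out a curve over w.
# Note it is not a probability distribution over w (it need not
# integrate to 1 in w); that is what a prior/posterior would supply.
for w in [0.3, 0.5, 0.7]:
    print(f"p(D | w={w}) = {likelihood(w, D):.6f}")
```

Is this reading right, i.e. that with $D$ fixed the likelihood is just such a curve over $w$, and the two paradigms differ only in what they treat as the random quantity?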