This is closely related to the concept of identifiability in mathematical statistics, which I think relates to your question. Identifiability deals with the possibility that a poorly-specified model may lack a one-to-one relationship between parameter sets and probability distributions over the data, which causes problems for inference.
For instance, take the "over-parameterized" ANOVA model,
$$
Y_{ij} = \mu + \alpha_i + \epsilon_{ij} ,
$$
where $1 \leq i \leq k$, $1 \leq j \leq n$, $\epsilon_{ij} \sim$ normal$(0, \sigma^2)$, and no restrictions are placed on $\{ \alpha_i \}_{i=1}^{k}$. Now suppose we were told by an oracle the exact distribution of $Y_{ij}$ within each group, so that we know both its mean and variance for every $i$. (This is in fact the maximum we could ever hope to learn from the data.) Can we recover the model parameters? We cannot, because there are infinitely many ways to specify $\mu, \alpha_1, \ldots , \alpha_k$ so that $\text{E}(Y_{ij}) = \mu + \alpha_i$ for each $i$. This would show up in the likelihood function as well, where different parameter sets would give exactly the same likelihood for all possible configurations of the data. The model is not identifiable, and we cannot obtain even consistent estimates for any of the mean parameters. For this reason one usually imposes the identifiability constraint $\sum_{i=1}^{k} \alpha_i = 0$.
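Here is a minimal numerical sketch of that point (the data, the shift constant `c`, and the helper `log_lik` are all illustrative choices of mine, not part of any standard API): shifting a constant out of every $\alpha_i$ and into $\mu$ leaves every cell mean $\mu + \alpha_i$ unchanged, so the log-likelihood is exactly the same for both parameter sets.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, sigma = 3, 10, 1.0

# Generate data from the over-parameterized ANOVA model Y_ij = mu + alpha_i + eps_ij
mu, alpha = 5.0, np.array([1.0, 0.0, -1.0])
y = mu + alpha[:, None] + rng.normal(0.0, sigma, size=(k, n))

def log_lik(mu, alpha):
    """Normal log-likelihood of y (up to an additive constant) at (mu, alpha)."""
    means = mu + alpha[:, None]          # cell means E(Y_ij) = mu + alpha_i
    return -0.5 * np.sum((y - means) ** 2) / sigma**2

# Move a constant c from the alphas into mu: the cell means, and hence the
# likelihood, are identical -- the parameters cannot be recovered from the data.
c = 2.7
l1 = log_lik(mu, alpha)
l2 = log_lik(mu + c, alpha - c)
print(np.isclose(l1, l2))  # True
```

The constraint $\sum_i \alpha_i = 0$ removes exactly this one degree of freedom: within each equivalence class $(\mu + c, \alpha - c)$, only one member satisfies it.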
So while it's important that the parameters of the model specify the distributions involved, it's also important that we be able to go in the other direction and infer parameters from distributions, or else we could never uncover the "true" model.