Here are some thoughts about your question:
The classical way to assess the quality of maximum likelihood estimators is indeed to:
- generate $n$ independent synthetic data sets of similar size from your model (parametrized with the ground-truth parameters $p_1,\dots,p_m$);
- compute the maximum likelihood estimates $(\hat{p}^{\,i}_1,\dots,\hat{p}^{\,i}_m)_{1\leq i\leq n}$ for each of these data sets;
- and finally compute the mean (to check for bias) and the standard deviation (to check for precision) of the differences between your estimates and the ground-truth values of the parameters (a minimal sketch is given right after this list).
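As a minimal sketch of this procedure in Python (assuming, purely for illustration, a one-parameter exponential model whose MLE has a closed form; the parameter value and sample sizes are arbitrary choices of mine, and you would substitute your own model and fitting routine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not from the question): one "rate" parameter, fixed sample size
true_rate = 2.0      # ground-truth parameter
sample_size = 200    # size of each synthetic data set
n_replicates = 1000  # number n of synthetic data sets

estimates = np.empty(n_replicates)
for i in range(n_replicates):
    # 1) generate a synthetic data set from the model at the ground-truth parameter
    data = rng.exponential(scale=1.0 / true_rate, size=sample_size)
    # 2) maximum likelihood estimate (closed form for the exponential rate: 1 / sample mean)
    estimates[i] = 1.0 / data.mean()

# 3) mean and standard deviation of the differences (estimate - ground truth)
errors = estimates - true_rate
print("bias (mean error):", errors.mean())
print("spread (std of errors):", errors.std(ddof=1))
```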
You can see a nice example of this method applied in Fig. 7 of the following paper, in which the authors use the Expectation-Maximization algorithm to infer the parameters of a model of a synapse:
https://www.frontiersin.org/articles/10.3389/fnsyn.2019.00022/full
This procedure is useful for studying how the precision of your estimator varies with the value of the ground-truth parameters or with the size of your samples: as you mentioned, the result will be a function of the parameter values you used to generate your surrogate data.
But if you are looking for a way to quantify *a priori* (i.e. without running $n$ simulations) the expected accuracy of your estimator for a given model and parameters $p_1,\dots,p_m$, then what you are looking for is probably the Cramér-Rao bound (see the Wikipedia article on the subject).
The Cramér-Rao bound gives a lower bound on the variance of an unbiased estimator (a modified version of the inequality also exists for biased estimators). The variance of your estimator will always be at least as large as the inverse of the Fisher information, which is itself a function of the number of data points in your data sets and of the parameters of your model. The Fisher information quantifies the expected curvature of your likelihood as a function of the parameters (see the properties of the Fisher information): it precisely measures how much a unit change in $p_i$ influences the distribution of your data $Y$.
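To make this concrete with the same toy exponential model as in the sketch above (again just an illustrative assumption, not your actual model): the Fisher information of $N$ i.i.d. exponential observations with respect to the rate $\lambda$ is $N/\lambda^2$, so the Cramér-Rao bound on the variance of an unbiased estimator is $\lambda^2/N$. A minimal numerical check against the simulation above could look like:

```python
import numpy as np

true_rate = 2.0     # same illustrative ground-truth rate as above
sample_size = 200   # same illustrative sample size as above

# Fisher information of N i.i.d. exponential observations w.r.t. the rate:
# I(lambda) = N / lambda^2, hence the Cramér-Rao bound is lambda^2 / N
fisher_info = sample_size / true_rate**2
cramer_rao_bound = 1.0 / fisher_info

print("Cramér-Rao lower bound on the variance:", cramer_rao_bound)
# Compare with errors.std(ddof=1)**2 from the Monte Carlo sketch above:
# for large sample sizes the empirical variance of the MLE should be close to
# (and not below) this bound.
```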
Hope this helps!
-- Re post-processing: I think that if one wants to compare the estimation precision of different parameters, one has no choice but to post-process the estimates somehow, for the reasons outlined above. Otherwise one cannot make statements like "the estimation precision of parameter $p_1$ was greater than the estimation precision of parameter $p_2$".
– monade Oct 11 '20 at 13:44