I've been doing some reading on the topic of credible vs confidence intervals, but unfortunately it feels like the more I read, the more confused I get. There seems to be a general consensus that confidence intervals are unintuitive. In particular, a $(1-\alpha)$ confidence interval calculated from your data cannot be interpreted as having probability $(1-\alpha)$ of containing the true value $\theta_0$ of the parameter $\theta$ you're trying to estimate. Instead, the correct interpretation of a confidence interval seems to be that, under repeated sampling of the data from the likelihood ('repeated experiments'), a fraction $(1-\alpha)$ of the confidence intervals generated (which will differ from experiment to experiment) will contain the true value $\theta_0$.
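To make sure I've got the repeated-sampling interpretation right, here's a small simulation sketch I put together (the normal-mean-with-known-variance setup and all the specific numbers are just made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

theta_0 = 1.7      # 'true' parameter, e.g. average height in metres (made up for illustration)
sigma = 0.1        # known standard deviation
n = 50             # sample size per experiment
z = 1.959964       # standard normal quantile for a 95% interval (alpha = 0.05)

covered = 0
n_experiments = 10_000
for _ in range(n_experiments):
    x = rng.normal(theta_0, sigma, size=n)            # one 'experiment'
    half_width = z * sigma / np.sqrt(n)
    lo, hi = x.mean() - half_width, x.mean() + half_width
    covered += (lo <= theta_0 <= hi)

print(covered / n_experiments)  # should come out close to 0.95
```

As I understand it, the 0.95 here is a statement about the procedure over many experiments, not about the one interval I actually computed from my one data set.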
The Bayesian credible interval is usually presented as the way to resolve this problem: a $(1-\alpha)$ credible interval can indeed be interpreted as containing the true value $\theta_0$ with probability $(1-\alpha)$, where the probability is taken under the posterior distribution given the observed data.
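For concreteness, this is how I'd compute such an interval in the same toy setup as above, using a conjugate normal prior on $\theta$ (the prior mean and standard deviation are again just assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Same toy setup: normal data with known sigma, conjugate normal prior on theta.
theta_0, sigma, n = 1.7, 0.1, 50
mu_prior, tau_prior = 1.75, 0.2          # prior mean and sd (assumed for illustration)

x = rng.normal(theta_0, sigma, size=n)   # the observed data

# Conjugate normal posterior for theta
post_prec = 1 / tau_prior**2 + n / sigma**2
post_var = 1 / post_prec
post_mean = post_var * (mu_prior / tau_prior**2 + x.sum() / sigma**2)

# 95% equal-tailed credible interval: P(theta in [lo, hi] | data) = 0.95 under the posterior
lo, hi = stats.norm.ppf([0.025, 0.975], loc=post_mean, scale=np.sqrt(post_var))
print(lo, hi)
```

Here the probability statement conditions on the one data set I observed, which is exactly the interpretation I'd naively want.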
However, among theoretical statisticians there seems to be significant effort and interest in proving mathematically whether a $(1-\alpha)$ posterior credible interval (or region) also has $(1-\alpha)$ frequentist coverage. The most famous result along these lines is perhaps the Bernstein-von Mises theorem. In this report of open problems in Bayesian statistics published in 2011 by the International Society for Bayesian Analysis (https://www.stat.berkeley.edu/~aldous/157/Papers/Bayesian_open_problems.pdf), establishing the frequentist coverage of Bayesian credible regions in nonparametric models is repeatedly mentioned.
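Concretely, I take 'frequentist coverage of a credible interval' to mean something like the following check, again in the toy conjugate-normal setup from above (I'd expect the result to come out near 0.95 here because the prior is weak relative to the data, but my understanding is that this is precisely the property that is hard to establish in general, e.g. nonparametric, settings):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

theta_0, sigma, n = 1.7, 0.1, 50
mu_prior, tau_prior = 1.75, 0.2          # same assumed prior as before

covered = 0
n_experiments = 10_000
for _ in range(n_experiments):
    x = rng.normal(theta_0, sigma, size=n)            # redraw the data, theta_0 held fixed
    post_var = 1 / (1 / tau_prior**2 + n / sigma**2)
    post_mean = post_var * (mu_prior / tau_prior**2 + x.sum() / sigma**2)
    lo, hi = stats.norm.ppf([0.025, 0.975], loc=post_mean, scale=np.sqrt(post_var))
    covered += (lo <= theta_0 <= hi)

print(covered / n_experiments)  # frequentist coverage of the 95% credible interval
```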
I also get the gist from talking to some researchers that confidence intervals are regarded as the 'correct' way to quantify uncertainty in many if not most scientific studies, so it's considered important that any credible intervals from Bayesian models used in such studies also have the corresponding frequentist coverage, but I don't fully understand why.
Any comments helping me understand the following would be much appreciated:
Why is it important for Bayesian credible intervals to also attain $(1-\alpha)$ frequentist coverage, from both a theoretical/mathematical and a practical/scientific point of view?
Is the crux that, if the parameter $\theta$ you're trying to estimate has a 'true value' in the real world (e.g. the average height of a population), then frequentist coverage is the property you actually want from an uncertainty quantification, and if so, why? If not, what deficiency do credible intervals have that confidence intervals do not, and why does this deficiency matter in statistical studies?