Say we compare a 99% CI and a 95% CI. A 99% confidence level requires more
trust than a 95% level, so how can you make the interval more
trustworthy? Make it wider, of course. The 99% CI is wider than the
95% CI.
This "more trustworthy" phrase reminds a college student of the previous
chapter in the 101 textbook: something that is good at measuring the
things it intends to measure is said to be accurate. Therefore, the
wider 99% CI should be more accurate than the 95% CI.
Using a wider CI simply means that if you repeated the experiment on the same population, with independent random sampling and the same procedure as before, the resulting interval would be more likely to contain the true value. It is neither more nor less accurate. Confidence intervals do not measure any aspect of accuracy, which is essentially the difference between your estimate and the true value. The confusion may arise because we unfortunately often don't know the true value and are doing the experiment to get an estimate of it. But no amount of widening CIs will make up for not knowing the true value. All it does is make it more likely that you will "succeed" in the sense that the value being estimated falls inside your huge limits.
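To make this concrete, here is a minimal simulation sketch (assuming a normal population and z-based intervals for simplicity; the specific numbers are illustrative, not from the original discussion). Widening the interval from 95% to 99% raises how often it contains the true mean, but the error of the point estimate itself does not change.

```python
# Sketch: 99% CIs are wider and cover the true mean more often,
# but the accuracy of the point estimate (the sample mean) is unchanged.
import numpy as np

rng = np.random.default_rng(0)
true_mean, true_sd, n, reps = 10.0, 2.0, 30, 10_000
z95, z99 = 1.96, 2.576  # standard normal critical values

cover95 = cover99 = 0
abs_errors = []
for _ in range(reps):
    sample = rng.normal(true_mean, true_sd, n)
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    abs_errors.append(abs(m - true_mean))                     # accuracy of the estimate
    cover95 += (m - z95 * se <= true_mean <= m + z95 * se)    # did the 95% CI cover?
    cover99 += (m - z99 * se <= true_mean <= m + z99 * se)    # did the 99% CI cover?

print(f"95% CI coverage: {cover95 / reps:.3f}")               # ~0.95
print(f"99% CI coverage: {cover99 / reps:.3f}")               # ~0.99
print(f"mean |error| of the estimate: {np.mean(abs_errors):.3f}")  # identical in both cases
```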
Since there is a trade-off between precision and accuracy, it follows that the narrow 95% CI must be more precise than the 99% CI.
This is an attempt at a logical follow-on, but as we saw above the premise was wrong, so the conclusion does not follow. Moreover, while a trade-off between precision and accuracy does exist in quantum mechanics, it does not hold in classical sciences where the measurement does not significantly perturb the sample being analysed. In those cases accuracy and precision are independent pieces of information and can be optimised separately. So even the follow-on logic is flawed.
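The independence of the two properties is easy to see with simulated measurements (an illustrative sketch with made-up numbers): one instrument is precise but biased, another is unbiased but noisy, and neither property constrains the other.

```python
# Sketch: precision (spread of repeated measurements) and accuracy
# (distance of their centre from the true value) vary independently.
import numpy as np

rng = np.random.default_rng(2)
true_value = 10.0

precise_but_biased = rng.normal(12.0, 0.1, 1000)   # tight spread, wrong centre
accurate_but_noisy = rng.normal(10.0, 3.0, 1000)   # wide spread, right centre

for name, x in [("precise but biased", precise_but_biased),
                ("accurate but noisy", accurate_but_noisy)]:
    print(f"{name}: bias = {x.mean() - true_value:+.2f}, spread (SD) = {x.std(ddof=1):.2f}")
```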
Furthermore, CIs do not measure precision directly; they reflect precision scaled by sample size. If you know the sample size you can work back to the precision, but you cannot if that information is not to hand.
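As a concrete sketch (assuming a simple z-based interval; the numbers are illustrative): the half-width of such a CI is z·s/√n, so the underlying precision s can only be recovered from the interval when n is known.

```python
# Sketch: a z-based CI has half-width z * s / sqrt(n), so the underlying
# precision (sample SD s) can only be backed out if n is known.
import math

def sd_from_ci(half_width: float, n: int, z: float = 1.96) -> float:
    """Back out the sample standard deviation from a CI half-width."""
    return half_width * math.sqrt(n) / z

# Example: a 95% CI of the form estimate +/- 0.9 based on n = 25
print(sd_from_ci(0.9, 25))   # ~2.30, the precision hidden inside the interval
```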
This makes sense for people who have been spared Statistics 101,
because they have a different understanding of the word "precise". For
them, narrow intervals are precise. But, for the statistics student,
precision is described as the same as repeatability. So all the above
suggests that the measurement / calculation of a 95% CI is more
repeatable than that of a 99% CI.
This seems wrong: there should be no difference between the way the
99% CIs and the 95% CIs are distributed when N is kept constant. Both
types of CI are centered around sample means, which in turn follow the
approximately normal sampling distribution predicted by the central limit theorem.
You are right: this is wrong, for the reasons described above.
Are the calculations of 95% CIs more repeatable than those of 99% CIs?
No, because 95% CIs and 99% CIs are based on the same precision and the same N, scaled by a different z factor. The mistake stems from the very source you suspect: a misunderstanding of precision.
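A small sketch with an arbitrary simulated sample makes the point: both intervals are built from the same centre and the same standard error, and their widths differ only by the fixed ratio of critical values, roughly 2.576 / 1.96 ≈ 1.31.

```python
# Sketch: 95% and 99% CIs from the same sample share the same centre and
# standard error; only the critical z value differs.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(10.0, 2.0, 30)
m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(len(sample))

for label, z in [("95%", 1.96), ("99%", 2.576)]:
    print(f"{label} CI: {m - z * se:.2f} to {m + z * se:.2f}")
```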