The definition I am aware of is interpreted as follows. Given a fixed but unknown parameter $\theta$, if I construct $100$ confidence intervals $\{I_1, \cdots, I_{100}\}$ at the $95\%$ level, each from an independent sample, then roughly $95$ of these intervals are expected to contain $\theta$.
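To make sure I am stating this first interpretation precisely, here is a small simulation sketch of how I understand it (Python; the normal population, the true mean of 21, and the sample size of 50 are just assumptions I made up for illustration):

```python
# Sketch of interpretation 1: build 100 separate 95% intervals for the mean,
# one per independent sample, and count how many contain the true mean.
# Assumed setup (illustrative only): ages ~ Normal(21, 3), n = 50 per sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sd, n, n_intervals = 21.0, 3.0, 50, 100

covered = 0
for _ in range(n_intervals):
    sample = rng.normal(true_mean, sd, size=n)
    # 95% t-interval for the mean computed from this one sample
    half_width = stats.t.ppf(0.975, df=n - 1) * sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    covered += (lo <= true_mean <= hi)

print(f"{covered} of {n_intervals} intervals contain the true mean")
```

When I run something like this, the count typically comes out near 95, which matches how I read the definition.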
However, I came across an alternative interpretation that confuses me:
Suppose we draw 100 samples from a group of students at a university, where each sample contains a certain number of records, and we compute the mean age of the students in each sample. Now, if we say that the confidence interval is [18, 24] with 95% probability, this means that the mean age in 95 out of the 100 samples will lie in the range 18-24.
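And here is how I would simulate this second interpretation as literally stated, under the same assumed population as above (only the fixed interval [18, 24] comes from the quote; the rest is made up for illustration):

```python
# Sketch of interpretation 2: one fixed interval [18, 24], and we count how many
# of the 100 sample means land inside it.
# Assumed setup (illustrative only): ages ~ Normal(21, 3), n = 50 per sample.
import numpy as np

rng = np.random.default_rng(1)
true_mean, sd, n, n_samples = 21.0, 3.0, 50, 100
fixed_lo, fixed_hi = 18.0, 24.0  # the single quoted interval [18, 24]

sample_means = rng.normal(true_mean, sd, size=(n_samples, n)).mean(axis=1)
inside = np.sum((sample_means >= fixed_lo) & (sample_means <= fixed_hi))

print(f"{inside} of {n_samples} sample means fall in [{fixed_lo}, {fixed_hi}]")
```

Whatever number this prints seems to depend on the assumed population and the fixed interval rather than on any 95% construction, which is part of what I am trying to reconcile with the first interpretation.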
I asked my friend, who is a stats PhD student, and he said the second interpretation is more widely used in practice (unfortunately, I did not have enough time to catch the details). However, I do not see why the two interpretations would say the same thing: the first is about constructing 100 intervals and observing how many of them contain the true parameter, whereas the second fixes a single interval and counts how many of the 100 sample means fall inside it. Any clarification would be appreciated.