In introductory books one often sees the following definition of the sample proportion: if $X = (x_1,...,x_n)$ is our sample of length $n$, consisting of $0$s and $1$s, then the sample proportion is $\hat{p} = \frac{\sum_{k=1}^{n}x_k}{n}$.
We define our sample $\xi=(\xi_1,...,\xi_n)$ as a sample in which each random variable has a Bernoulli distribution with unknown parameter $0 < p < 1$. So the sample proportion in this case is, by definition, just the sample mean, $\frac{\sum_{k=1}^{n}\xi_k}{n}$.
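A minimal numerical sketch of this point (the data here are made up for illustration): for a sample of $0$s and $1$s, the "fraction of ones" and the arithmetic mean are literally the same number.

```python
# Hypothetical 0/1 sample, standing in for Bernoulli draws.
sample = [1, 0, 1, 1, 0, 1, 0, 1]
n = len(sample)

sample_mean = sum(sample) / n            # (x_1 + ... + x_n) / n
sample_proportion = sample.count(1) / n  # fraction of ones

# For 0/1 data these coincide, since the sum just counts the ones.
assert sample_mean == sample_proportion
```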
$\mathbb{E}[\xi_1] = p$, $\mathbb{V}ar[\xi_1] = p-p^2$.
I want to understand the formal derivation of a confidence interval for this statistic. As we know from the central limit theorem, $\frac{\xi_1+...+\xi_n}{n}\xrightarrow{d} \mathcal{N}(p, \frac{p-p^2}{n})$, and we can obtain confidence intervals for $p$ from this.
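For concreteness, here is a sketch of the usual approximate (Wald) interval built from that normal approximation, with the unknown $p-p^2$ replaced by its plug-in estimate $\hat{p}(1-\hat{p})$; the function name and the data are illustrative, not from any particular textbook.

```python
import math

def wald_ci(sample, z=1.96):
    """Approximate CI for p via the normal approximation; z=1.96 gives ~95%."""
    n = len(sample)
    p_hat = sum(sample) / n                  # sample proportion = sample mean
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # plug-in estimate of sqrt((p - p^2)/n)
    return (p_hat - z * se, p_hat + z * se)

lo, hi = wald_ci([1, 0, 1, 1, 0, 1, 0, 1])
```

This is only one of several standard intervals for a proportion (Wilson and others exist), but it is the one that falls out directly from the CLT statement above.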
In another question of mine this was answered thoroughly, and I understood why one cannot use such notation and what people mean when they say the distribution is "approximately" normal.
So only one question remains:
Is "sample proportion" just a synonym for "sample mean" in the case when our sample comes from a Bernoulli distribution?
To be clear: do we say that $\overline{\xi}= \frac{\sum_{k=1}^{n}\xi_k}{n}$ is a sample proportion iff $\xi = (\xi_1, ...,\xi_n)$ satisfies $\xi_i \sim \mathrm{Bern}(p)$ for all $1 \leq i \leq n$?
I just don't understand why, for example, John A. Rice in his "Mathematical Statistics and Data Analysis, Third Edition", page 214, introduces both the sample mean and the sample proportion without saying that the sample proportion is just the sample mean in a particular case.