Suppose a logistic regression model has been fitted to some training data, and that the test data contain $n$ points sharing the same predictor vector, say $\vec{x}$. The distribution of the proportion $\hat{p} = k/n$ of positive responses among these $n$ points is then binomial with parameters $n$ and $p=P(Y=1\mid\vec{x})$ as predicted by the model, i.e. $$P(\hat{p}=k/n) = {n\choose k}p^k(1-p)^{n-k},$$ from which prediction intervals can be built.
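To make this concrete, here is a minimal sketch of such a binomial prediction interval; $n$, $p$, and the 95% level are illustrative values, not taken from any real fit:

```python
from scipy.stats import binom

n = 50   # hypothetical number of test points sharing the predictor vector x
p = 0.3  # model-predicted P(Y=1|x), treated as known here

# central 95% prediction interval for the count k, hence for p_hat = k/n
k_lo = binom.ppf(0.025, n, p)
k_hi = binom.ppf(0.975, n, p)
print(k_lo / n, k_hi / n)  # interval for the proportion p_hat
```

Because the binomial is discrete, the realized coverage is at least 95%, not exactly 95%.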
The probability $p$ predicted by the model, however, comes with some uncertainty. As I understand it, $p$ is approximately distributed as the logistic transform of a normal distribution on the link scale, whose standard deviation is the standard error reported by the model, e.g. in R by predict(fit, newdata, type = "link", se.fit = TRUE)$se.fit. This logit-normal distribution has no closed-form moments, but its density should in principle be computable with the transformation theorem for probability densities.
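As a sketch of that change-of-variables step, the density of $p = \mathrm{logistic}(\eta)$ with $\eta \sim N(\mu, \sigma^2)$ is the normal density of $\operatorname{logit}(p)$ times the Jacobian $1/(p(1-p))$; the values of $\mu$ and $\sigma$ below are made up for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# hypothetical link-scale prediction and its standard error (se.fit)
mu, se = -0.85, 0.4  # eta ~ N(mu, se^2), p = 1/(1 + exp(-eta))

def logit_normal_pdf(p):
    """Density of p = logistic(eta) via the transformation theorem:
    f(p) = phi(logit(p); mu, se) * |d logit(p)/dp|, with d logit/dp = 1/(p(1-p))."""
    eta = np.log(p / (1 - p))
    return norm.pdf(eta, mu, se) / (p * (1 - p))

# sanity check: the transformed density integrates to 1 over (0, 1)
total, _ = quad(logit_normal_pdf, 0, 1)
print(total)
```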
Can (or should) this uncertainty be incorporated into the probability of obtaining a specific value of $\hat{p}$, or into the construction of prediction intervals for $\hat{p}$? If yes: how?
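To illustrate what I mean by "incorporated": one could marginalize the binomial over the logit-normal, $P(\hat{p}=k/n) = \int \binom{n}{k}p^k(1-p)^{n-k}\,f(p)\,dp$, e.g. by Monte Carlo. This is only a sketch of that idea with made-up values of $\mu$ and $\sigma$, not a claim that it is the right procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50               # hypothetical number of test points sharing x
mu, se = -0.85, 0.4  # hypothetical link-scale prediction and standard error

# marginalize over parameter uncertainty:
# draw eta ~ N(mu, se^2), map to p = logistic(eta), then draw k ~ Binomial(n, p)
eta = rng.normal(mu, se, size=100_000)
p = 1.0 / (1.0 + np.exp(-eta))
k = rng.binomial(n, p)

# prediction interval for p_hat that folds in the uncertainty about p
lo, hi = np.percentile(k / n, [2.5, 97.5])
print(lo, hi)
```

Such an interval should generally be wider than the plain binomial one, since it adds the parameter uncertainty to the sampling variability.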