The Wilson score interval is for binomial proportions, which you don't have here -- you essentially have continuous proportions. While these share the property that the variance tends to be smaller when the proportion is near 0 or 1, they won't typically have a variance of the same form as the binomial.
You should expect non-constant variance across different proportions, so it's also unlikely that you can reasonably argue for a constant-variance assumption. [For variables constrained to lie in $[0,1]$ with mean $p$, the variance cannot exceed $p(1-p)$, which in turn is $\leq \min(p,1-p)$; so as the mean approaches either 0 or 1, the variance must be small -- smaller than the gap between the mean and the nearest boundary -- whereas in the middle region it may be relatively large.]
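To make the bracketed bound concrete, here's a quick numerical check using beta distributions (the parameter choices are arbitrary illustrations, not anything from the question):

```python
from scipy import stats

# for any random variable on [0,1] with mean p, Var <= p(1-p) <= min(p, 1-p);
# illustrated with a few (hypothetical) beta distributions
for a, b in [(0.5, 9.5), (2.0, 2.0), (9.5, 0.5)]:
    d = stats.beta(a, b)
    p = d.mean()
    print(f"p={p:.3f}  var={d.var():.4f}  p(1-p)={p*(1-p):.4f}  min(p,1-p)={min(p, 1-p):.4f}")
```

The bound $p(1-p)$ is attained by a Bernoulli($p$) variable, which puts all its mass on the two boundaries.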
Unless the data occur naturally in groups of similar proportion (so that you could measure variance within groups you might treat as homogeneous), you'll likely have to make some sort of model assumption, such as a beta model, or else model how the variance relates to the mean (it sounds like you might have enough data to do that).
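If the data do come in roughly homogeneous groups, the variance-mean relationship can be estimated directly from within-group variances. A simulated sketch (all numbers here are hypothetical; the true relationship is beta-type, $\mathrm{Var} = \mu(1-\mu)/(\phi+1)$ with $\phi = 24$, so the fitted constant should come out near $0.04$):

```python
import numpy as np

rng = np.random.default_rng(2)
G, m = 50, 30                         # hypothetical: 50 groups, 30 proportions each
phi = 24.0                            # true precision; Var = mu(1-mu)/(phi+1)
mu_g = rng.uniform(0.05, 0.95, G)     # group means
y = rng.beta(mu_g[:, None] * phi, (1 - mu_g[:, None]) * phi, size=(G, m))

pbar = y.mean(axis=1)                 # group mean proportion
vbar = y.var(axis=1, ddof=1)          # within-group variance
x = pbar * (1 - pbar)

# least-squares slope through the origin for Var ~ c * p(1-p)
c_hat = (x @ vbar) / (x @ x)
print(c_hat)                          # near 1/(phi+1) = 0.04
```

If the within-group variances track $p(1-p)$ like this, that's evidence in favour of the beta model mentioned below.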
If the variance of the proportions is approximately proportional* to $p(1-p)$, a beta model might be suitable.
* For a beta random variable, the constant of proportionality is $\frac{1}{\alpha+\beta+1}$.
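The footnote is easy to check numerically (the parameter values are arbitrary):

```python
from scipy import stats

# for Beta(alpha, beta) with mean p = alpha/(alpha+beta),
# Var = p(1-p) / (alpha + beta + 1)
alpha, beta = 2.0, 5.0
p = alpha / (alpha + beta)
print(stats.beta(alpha, beta).var())        # exact variance
print(p * (1 - p) / (alpha + beta + 1))     # same value
```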
Once you have an estimated variance for each proportion -- and if you either assume the proportions are independent or can assume/identify some model for the correlation structure between them -- you can compute the standard error of the average. At that point a normal approximation for the average proportion should work very nicely, and a confidence interval is easily constructed.
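As a minimal sketch under independence (the proportions and the precision $\hat\phi = 20$ below are made up; substitute your own variance estimates):

```python
import numpy as np

# hypothetical observed proportions, with a per-proportion variance estimate
# taken here from an assumed fitted beta model: Var_i = p_i(1-p_i)/(phi+1)
p = np.array([0.08, 0.15, 0.22, 0.31, 0.47, 0.60])
phi_hat = 20.0
var_i = p * (1 - p) / (phi_hat + 1)

n = len(p)
mean_p = p.mean()
se = np.sqrt(var_i.sum()) / n          # independence: Var(mean) = sum(Var_i)/n^2
lo, hi = mean_p - 1.96 * se, mean_p + 1.96 * se
print(f"mean = {mean_p:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

With correlated proportions you would replace `var_i.sum()` with the full sum over the covariance matrix.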
If you have potential covariates, you would likely be best off modelling both the mean and the variance, perhaps via beta regression.
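A bare-bones beta regression can be fit by maximum likelihood directly (dedicated implementations exist, e.g. in statsmodels, but the sketch below uses only scipy). It assumes a logit link for the mean and a constant precision $\phi$; the data and all parameter values are simulated/hypothetical:

```python
import numpy as np
from scipy import optimize, special

# simulate: mean mu = expit(b0 + b1*x), y ~ Beta(mu*phi, (1-mu)*phi)
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
b_true, phi_true = np.array([0.3, 1.2]), 30.0
mu = special.expit(X @ b_true)
y = rng.beta(mu * phi_true, (1.0 - mu) * phi_true)

def nll(params):
    """Negative beta log-likelihood; phi is log-parameterized to keep it > 0."""
    b, phi = params[:2], np.exp(params[2])
    m = special.expit(X @ b)
    a, c = m * phi, (1.0 - m) * phi
    return -np.sum((a - 1) * np.log(y) + (c - 1) * np.log(1 - y)
                   - special.betaln(a, c))

res = optimize.minimize(nll, x0=[0.0, 0.0, np.log(10.0)], method="Nelder-Mead")
b_hat, phi_hat = res.x[:2], np.exp(res.x[2])
print(b_hat, phi_hat)    # estimates of (0.3, 1.2) and 30
```

The fitted model gives you both a mean $\hat\mu_i$ and a variance $\hat\mu_i(1-\hat\mu_i)/(\hat\phi+1)$ for each observation, which feeds directly into the standard-error calculation above.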
--
An alternative (though related) possibility is to transform to some scale where the variance is more nearly constant, and then to use that as a way of getting an estimated variance for each proportion, so that an appropriate (if approximate) variance of the average of the proportions can be calculated. I'm not sure this gains anything over modelling the variance more directly, but some people may find it conceptually easier.
Note that I am not talking about computing an average on the transformed scale and trying to make an interval for the average on the original scale from that*, but simply getting an approximate variance model for the original proportions out of being able to say "with this transformation, the variance is seen to be nearly constant" and backing the proportion variances out of the estimate of the constant transformed-scale variance and the transformation (via a Taylor series approximation, say).
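The "backing out" step is just the delta method. As a sketch, suppose the arcsine-square-root transform $g(p) = \arcsin(\sqrt p)$ roughly stabilizes the variance at some constant $\sigma_t^2$; since $g'(p) = 1/(2\sqrt{p(1-p)})$, the implied original-scale variances are $\sigma_t^2/g'(p_i)^2 = 4\sigma_t^2\,p_i(1-p_i)$. The proportions and $\sigma_t^2$ below are hypothetical:

```python
import numpy as np

# delta method: Var(p_i) ~ Var(g(p_i)) / g'(p_i)^2
# for g(p) = arcsin(sqrt(p)), this gives Var(p_i) ~ 4 * sigma2_t * p_i * (1 - p_i)
p = np.array([0.05, 0.12, 0.40, 0.55, 0.80])   # hypothetical observed proportions
sigma2_t = 0.002                               # suppose: variance estimated on the transformed scale
var_p = 4.0 * sigma2_t * p * (1.0 - p)
print(var_p)
```

These per-proportion variances then feed into the standard error of the average exactly as before.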
*(I guess you might try to do something like that, but it would need a correction for bias)
To that end, this reference may be helpful:
Rocke, David M. (1993), "On the Beta Transformation Family," *Technometrics*, 35:1 (Feb), pp. 72-81.