I have a long series of entities:
x1, x2, x3, ... xn
for each of which there is a probability of an event occurring. The probability for each x may differ, but the probabilities are all independent and all known:
p1, p2, p3, ... pn
I can provide a point estimate of how many events occur in that series by simply summing the probabilities p, but is it also possible to calculate a confidence interval, or some other information about the error, in closed form?
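To make that concrete, if I have this right, the point estimate is just the expectation of a sum of indicator variables, which by linearity of expectation is

$$\hat{\mu} = E\!\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} p_i,$$

where $X_i = 1$ if the event occurs for $x_i$ and $0$ otherwise.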
My resistance to using a sampling method is that the series of x may be prohibitively long, and there may be many such series for which the same estimates need to be delivered.
EDIT - Refactoring of the question
For a toy example of the problem, imagine I have a large batch of n widgets, each with a known probability of failing within the year. Each widget may have a different failure probability, some high, some low (say the widgets belong to different classes, and those classes have been well studied in the past). I'm tasked with estimating how many widgets in the batch will have failed by the end of the year.
I believe I can provide a point estimate by simply summing the failure probabilities of the widgets in the batch. Stakeholders have also requested a confidence interval; that may not be possible, but I should at least be able to give a sense of the error/uncertainty/variance around that point estimate.
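If it helps clarify what I'm after: my tentative understanding is that independence at least gives a closed-form variance for the failure count $S = \sum_{i=1}^{n} X_i$, since variances of independent variables add, i.e.

$$\operatorname{Var}(S) = \sum_{i=1}^{n} p_i (1 - p_i),$$

but I don't know whether, or how, that can be turned into a defensible interval.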
My understanding is that I can treat each widget's failing or not failing as an independent Bernoulli trial, and that the number that fail then follows a Poisson binomial distribution, but from there I get stuck.
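For reference, here is a minimal sketch of how I understand the Poisson binomial pmf could be computed exactly, by convolving in one Bernoulli trial at a time; the probabilities below are purely hypothetical:

```python
def poisson_binomial_pmf(probs):
    """Exact pmf of the event count over independent Bernoulli trials.

    probs: per-trial event probabilities p_1..p_n.
    Returns a list whose k-th entry is P(exactly k events).
    """
    pmf = [1.0]  # with zero trials, P(0 events) = 1
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)  # this trial's event does not occur
            nxt[k + 1] += mass * p      # this trial's event occurs
        pmf = nxt
    return pmf


# Purely hypothetical widget failure probabilities, for illustration only.
probs = [0.02, 0.10, 0.10, 0.40, 0.65]
pmf = poisson_binomial_pmf(probs)
mean = sum(k * m for k, m in enumerate(pmf))            # equals sum(probs)
variance = sum(k * k * m for k, m in enumerate(pmf)) - mean ** 2
print(f"point estimate = {mean:.3f}, variance = {variance:.3f}")
```

Computing the pmf this way is exact but O(n^2) in the number of trials, which is exactly the kind of cost I'd like to avoid on very long series; the variance it prints does match $\sum_i p_i(1 - p_i)$ from above, for what that's worth.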