Let's say we want to calculate the standard error of a statistic: the number of heads per 1000 coin flips.
Suppose we flip a coin 200 times and see heads 50 times.
Our estimated proportion is $\hat{p} = 50/200 = 0.25$, so $\hat{\mu}$, our estimate for the mean of the statistic, is $0.25 \times 1000 = 250$.
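To pin down notation: writing each flip as $X_i$ (1 for heads, 0 for tails), the statistic I mean is
$$T = {{\sum_{i=1}^n X_i}\over{{n}\over{1000}}} = {{1000}\over{n}}\sum_{i=1}^n X_i, \qquad n = 200.$$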
How do we calculate our estimate for the variance or the standard error?
Do we use $n = 200$, or do we use $n = 200/1000$ since the statistic is expressed per 1000 flips? Do we need to transform every observation by dividing it by 1000?
If we use $E[(X/1000 - \hat{\mu}/1000)^2]$ then the variance is tiny, so that doesn't make sense.
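(Dividing by a constant divides the variance by the square of that constant,
$$Var\left({{X}\over{1000}}\right) = {{Var(X)}\over{1000^2}},$$
which is exactly why the transformed observations come out with such a tiny variance.)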
In general, I'm confused on this point.
Update
@Glen_b My thought was:
We assume the variance of each trial is $\hat{p}(1-\hat{p})$, or over all trials is $n\hat{p}\hat{q}$ (with $\hat{q} = 1-\hat{p}$).
We want to sum the variances of the trials and then adjust for our rate.
So $Var(\sum_{i=1}^n X_i) = \sum_{i=1}^n \hat{p}(1-\hat{p}) = n\hat{p}(1-\hat{p})$; the rate (1000) enters only through the scaling below.
We seek $Var(\sum_{i=1}^n {{X_i}\over{{n}\over{1000}}})$. By propagation of variance, we can pull out the constant ${{1000 \cdot 1000}\over{n^2}}$:
$${{1000 \cdot 1000}\over{n^2}}Var\left(\sum_{i=1}^n X_i\right)$$
Plugging in what we showed two lines earlier and canceling one factor of $n$:
$${{1000 \cdot 1000}\over{n}}\hat{p}(1-\hat{p})$$
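As a numeric check with our numbers ($n = 200$, $\hat{p} = 0.25$), that variance is ${{1000 \cdot 1000}\over{200}}(0.25)(0.75) = 937.5$.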
Our SE is thus:
$$\sqrt{{{1000 \cdot 1000}\over{n}}\hat{p}(1-\hat{p})} = {{1000}\over{\sqrt{n}}}\sqrt{\hat{p}(1-\hat{p})}$$
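As a sanity check, here's a minimal simulation sketch (my own addition, assuming the $n = 200$, $p = 0.25$ setup above); it agrees with the formula's $\sqrt{937.5} \approx 30.6$:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200          # flips per experiment
p = 0.25         # true probability of heads
rate = 1000      # the statistic is heads per 1000 flips
reps = 100_000   # number of simulated experiments

# Each experiment: n flips; the statistic rescales the head count
# to a per-1000-flip rate, i.e. T = (1000 / n) * sum(X_i).
heads = rng.binomial(n, p, size=reps)
stat = heads * (rate / n)

print("simulated SE:", stat.std(ddof=1))                          # ~30.6
print("formula SE:  ", rate / np.sqrt(n) * np.sqrt(p * (1 - p)))  # 30.618...
```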