Consider the measurements $x_1, x_2, x_3$.
Each is the average value of a sample of the physical quantity $x$ taken at the nodal points $i$ (e.g. $x_{s1}[i]$ for the first measurement):
$x_1 = \frac{1}{N_1} \sum_{i=1}^{N_1}x_{s1}[i]$
Let the "error" in $x_1$ be the standard deviation $\sigma_1$ of the quantity $x_1$.
$x_1 \pm \sigma_1 \quad \sigma_1^2 = \frac{1}{N_1-1}\sum_{i=1}^{N_1}(x_1 -x_{s1}[i])^2$
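For concreteness, here is a minimal NumPy sketch of the sample mean and sample standard deviation above; the array `xs1` and its values are made up purely for illustration:

```python
import numpy as np

# Hypothetical raw sample x_s1[i] for the first measurement; values are made up.
xs1 = np.array([1.02, 0.98, 1.01, 0.99, 1.00])

x1 = xs1.mean()            # x_1 = (1/N_1) * sum_i x_s1[i]
sigma1 = xs1.std(ddof=1)   # sigma_1 with the 1/(N_1 - 1) normalisation
print(f"x1 = {x1:.4f} +/- {sigma1:.4f}")
```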
Let us now compute the average of the measurements:
$x_{avg} = (x_1 + x_2 + x_3)/3$
However, what is the error $\sigma_{avg}$ such that reporting $x_{avg} \pm \sigma_{avg}$ makes sense?
I know that when adding or subtracting numbers the associated errors add in quadrature:
$\delta x_{avg}^2 = \delta x_1^2 +\delta x_2^2 +\delta x_3^2$
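As a sketch of what that rule gives here (the values of $\sigma_1, \sigma_2, \sigma_3$ are made up):

```python
import numpy as np

# Hypothetical per-measurement errors sigma_1, sigma_2, sigma_3.
sigma = np.array([0.020, 0.030, 0.025])

delta_avg = np.sqrt(np.sum(sigma**2))   # quadrature sum of the three errors
print(f"delta_x_avg = {delta_avg:.4f}")
```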
However, when the number of measurements $N$ becomes large (i.e. many more than 3), the quantity $\delta x_{avg}$ grows so large that it no longer seems like a reasonable estimate of the error in $x_{avg}$.
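To illustrate the growth, assume every measurement has roughly the same error $\sigma \approx 0.02$; the quadrature sum over $N$ such errors is $\sqrt{N}\,\sigma$:

```python
import numpy as np

sigma = 0.02                       # assumed common per-measurement error
for N in (3, 30, 300, 3000):
    delta = np.sqrt(N) * sigma     # quadrature sum of N equal errors
    print(f"N = {N:5d}: quadrature 'error' = {delta:.3f}")
```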