One useful way to rewrite the variance is in terms of squared deviations between every pair of observations, instead of squared deviations from the mean (via Wikipedia):
$${\rm Var}(X) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{2}(x_i - x_j)^2$$
You can imagine making a figure where the $x_i$ index the rows and the $x_j$ index the columns (with $i$ and $j$ both indexing the same observations in the same order), and the matrix is filled with the values of $\frac{1}{2}(x_i - x_j)^2$. For independent data, no shuffling of the order of the observations should produce visible patterns in this matrix.
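A quick numerical check of this identity, using a small simulated sample (the sample size and seed are illustrative): build the matrix of half squared pairwise differences and confirm that its mean over all $n^2$ cells equals the population variance.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)

# Matrix of half squared pairwise differences: D[i, j] = 0.5 * (x_i - x_j)^2
D = 0.5 * (x[:, None] - x[None, :]) ** 2

# Averaging over all n^2 pairs recovers the population variance
# (np.var uses the 1/n divisor by default, matching the formula above).
print(np.allclose(D.mean(), np.var(x)))  # True
```

Note that `np.var` defaults to the population (divide-by-$n$) variance, which is the version the pairwise identity reproduces exactly.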
When we are interested in auto-correlation of the series, we need to define pairs of observations and examine whether the variance within those pairs contributes a greater or lesser amount to the overall variance. That is, pairs within a grouping are more similar than pairs outside of that grouping (for positive auto-correlation).
Some examples of this are:
- The intraclass correlation for grouped data (which in simple cases is available from the output of an ANOVA table).
- For time series or spatial data, the auto-correlation of points nearby in time and/or space.
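To make the time-series case concrete, here is a minimal sketch of estimating the lag-1 auto-correlation of a simulated AR(1) series (the coefficient `phi` and the sample size are illustrative choices, not from the text): with positive auto-correlation, deviations one step apart co-vary, so pairs of neighboring points are more similar than arbitrary pairs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series x_t = phi * x_{t-1} + e_t with positive
# auto-correlation (phi = 0.8 is an illustrative choice).
phi, n = 0.8, 5000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Sample lag-1 auto-correlation: covariance of neighboring deviations
# divided by the overall variance.
d = x - x.mean()
r1 = (d[:-1] * d[1:]).sum() / (d ** 2).sum()
print(round(r1, 2))  # close to phi for a long series
```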
For the suggested matrices above, if you have grouped data with positive auto-correlation and you place the groups in order, the matrix would appear block-like, with smaller values within the blocks and larger values between the blocks. For time series data, if you order the observations by time, the matrix would appear banded, with smaller values near the diagonal and larger values far from the diagonal for positive auto-correlation (the diagonal itself is always zero, since $x_i - x_i = 0$).
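The block pattern can be checked numerically. A minimal sketch, assuming grouped data generated with a shared random intercept per group (the group count, group size, and noise scale are illustrative): with observations placed in group order, the average cell value inside the diagonal blocks is smaller than the average between blocks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Grouped data with a shared random intercept per group, which induces
# positive intraclass correlation (all sizes/scales are illustrative).
n_groups, per_group = 20, 10
groups = np.repeat(np.arange(n_groups), per_group)
x = rng.normal(size=n_groups)[groups] + 0.5 * rng.normal(size=groups.size)

# Half squared pairwise differences, with the groups placed in order.
D = 0.5 * (x[:, None] - x[None, :]) ** 2

same = groups[:, None] == groups[None, :]     # cells inside the blocks
off_diag = ~np.eye(x.size, dtype=bool)        # exclude the zero diagonal

within = D[same & off_diag].mean()
between = D[~same].mean()
print(within < between)  # True: the blocks hold the smaller values
```

Plotting `D` with `matplotlib.pyplot.imshow` would show the block structure directly; shuffling the observations before building `D` destroys it, as described above for independent data.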