I am currently calculating reliability estimates for test-retest data.
My question concerns the difference between the standard error of measurement (SEM) and the minimum detectable change (MDC) when trying to determine whether there is a 'real' difference between two measurements.
Here is my thinking thus far:
Each measurement has an error band around it. For two measurements, if their error bands overlap then there is no 'real' difference between them.
For example, at 95% confidence, each measurement has an error band of $\pm 1.96 \times SEM$. So, two measurements would need to be more than $2 \times 1.96 \times SEM = 3.92 \times SEM$ apart for their confidence intervals not to overlap, and hence for there to be a real difference between the two measurements.
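For instance, taking a made-up SEM of 2.5 purely to illustrate the arithmetic, this non-overlap criterion would require
$$|x_1 - x_2| > 3.92 \times 2.5 = 9.8.$$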
Another method for determining whether two measurements are 'different' is to use the MDC, where
$$MDC = 1.96 \times \sqrt{2} \times SEM \approx 2.77 \times SEM$$
[EDIT: for the second formula see e.g. p. 238 of Weir, J. P. (2005). Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. Journal of Strength and Conditioning Research, 19(1), 231–240. doi:10.1519/15184.1]
If the difference between the two measurements is greater than MDC then there is a real difference between the measurements.
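With the same made-up SEM of 2.5, the MDC criterion would only require
$$|x_1 - x_2| > 2.77 \times 2.5 \approx 6.93,$$
so a difference of, say, 8 would be declared 'real' under the MDC criterion but not under the non-overlap criterion above.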
Obviously the two formulas are different and would produce different results. So which formula is correct?