
I am currently calculating reliability estimates for test-retest data.

My question is regarding the difference between standard error of measurement (SEM) versus minimum detectable change (MDC) when seeking to determine if there is a 'real' difference between two measurements.

Here is my thinking thus far:

Each measurement has an error band about it. For two measurements, if error bands overlap then there is no 'real' difference between the measurements.

  1. For example, at 95% confidence, each measurement has an error band of $\pm 1.96 \times SEM$. So, two measurements would need to be more than $2 \times 1.96 \times SEM = 3.92 \times SEM$ apart for their confidence intervals not to overlap and for there to be a real difference between the two measurements.

  2. Another method for determining if two measurements are 'different' is to use MDC where

$$MDC = 1.96 \times \sqrt{2} \times SEM = 2.77 \times SEM$$

[EDIT: for the second formula see e.g. p. 238 of Weir, J. P. (2005). Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. Journal of Strength and Conditioning Research, 19(1), 231–240. doi:10.1519/15184.1]

If the difference between the two measurements is greater than MDC then there is a real difference between the measurements.
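
The two thresholds can be compared numerically. Here is a minimal sketch in Python, using a hypothetical SEM of 2.0 and an observed difference of 6.0 (both values are illustrative, not from the question):

```python
import math

# Hypothetical numbers (not from the question): suppose SEM = 2.0 points.
sem = 2.0
diff = 6.0  # observed difference between the two measurements

# Criterion 1: non-overlapping 95% confidence intervals.
overlap_threshold = 2 * 1.96 * sem          # 3.92 * SEM

# Criterion 2: minimum detectable change.
mdc = 1.96 * math.sqrt(2) * sem             # 2.77 * SEM

print(round(overlap_threshold, 2))  # 7.84
print(round(mdc, 2))                # 5.54
print(diff > mdc)                # True  -> "real" change by the MDC criterion
print(diff > overlap_threshold)  # False -> not "real" by the overlap criterion
```

A difference of 6.0 exceeds the MDC but not the overlap threshold, which is exactly the discrepancy the question is asking about.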

Obviously the two formulas are different and would produce different results. So which formula is correct?

Stephen


From my initial readings, there are rather extensive debates (e.g., here, and here) around the measurement of reliable change. However, at the risk of not being fully cognizant of the nuances of such debates, your second approach (MDC) seems reasonable, and your first approach (non-overlapping confidence intervals) does not.

You are presumably trying to rule out the null hypothesis that the change between the two measurements is zero, given that there is some error in measuring the variable of interest at each time point. In this sense the problem is analogous to an independent-groups t-test, where the denominator is $\sqrt{SE_a^2 + SE_b^2}$, which simplifies to $\sqrt{2} \times SE$ when $SE_a^2$ and $SE_b^2$ are equal. This gives a standard error for the measurement of change, which is presumably what you want.
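
That pooling step can be checked directly. A minimal sketch, assuming equal SEMs at the two time points (the value 2.0 is illustrative):

```python
import math

se = 2.0  # hypothetical SEM, assumed equal at both time points

# Standard error of the difference between two independent measurements:
se_diff = math.sqrt(se**2 + se**2)  # sqrt(SE_a^2 + SE_b^2)
print(math.isclose(se_diff, math.sqrt(2) * se))  # True

# Applying the 95% criterion to this pooled error recovers the MDC formula:
print(math.isclose(1.96 * se_diff, 1.96 * math.sqrt(2) * se))  # True
```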

Jeromy Anglim

If you can define what "no real difference" is in practice, an equivalence test will tell you if there is no real difference. The test you are using can't do that.

http://www.cscu.cornell.edu/news/statnews/stnews85.pdf

Tom