I calculated Cronbach's α to estimate reliability using both pre-test and post-test scores before and after a training. I fully expected the pre-test reliability estimates to be low, but what could cause the post-test reliability coefficients to be even lower?
1 Answer
1. How low is your Cronbach's alpha really?
Although the .70 threshold is commonly accepted as the lower bound of acceptable reliability (Nunnally & Bernstein, 1994), you may get away with a Cronbach's alpha as low as .60, a view supported by some older but reputable applied statisticians such as Widaman (1993).
2. How many items does your measure have?
Usually, you will find that Cronbach's alpha tends to be lower with a smaller number of items (unless their average inter-item correlation is very high). For example, for many psychological instruments with only a few items, it is not uncommon to see very low levels of Cronbach's alpha. Many examples can be found on the International Personality Item Pool website, which features hundreds of personality scales. One specific example is the HPI Math Ability scale, which has 6 items and a Cronbach's alpha of only .44.
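The item-count effect can be made concrete with the standardized-alpha (Spearman-Brown) formula, alpha = k * r_bar / (1 + (k - 1) * r_bar), where k is the number of items and r_bar their average inter-item correlation. A minimal sketch (the function name is illustrative):

```python
def standardized_alpha(k, r_bar):
    """Standardized Cronbach's alpha from k items with average
    inter-item correlation r_bar (Spearman-Brown form)."""
    return k * r_bar / (1 + (k - 1) * r_bar)

# Holding a modest average inter-item correlation fixed (r_bar = .15),
# alpha rises sharply as items are added:
for k in (3, 6, 12, 24):
    print(k, round(standardized_alpha(k, 0.15), 2))
# -> 0.35 for k=3, 0.51 for k=6, 0.68 for k=12, 0.81 for k=24
```

This is why short scales such as the 6-item example above can show alphas in the .40s even when their items correlate reasonably with one another.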
3. What's the average inter-item correlation among your items?
Since Cronbach's alpha may change depending on the average inter-item correlation (Cortina, 1993), one avenue of adjustment you may pursue is to inspect your inter-item correlation matrix. If you find, for example, that one of your items has a low correlation with the rest of the items in the measure, try recalculating Cronbach's alpha without that item. In all likelihood, such items are partially responsible for very low levels of reliability.
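The drop-the-weak-item check can be sketched as follows, here with simulated data in which three items load on a common trait and a fourth is essentially noise (all names and the data-generating setup are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Raw Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n = 200
trait = rng.normal(size=n)
# Three items reflect the trait plus noise; a fourth is pure noise.
good = trait[:, None] + rng.normal(size=(n, 3))
bad = rng.normal(size=(n, 1))
scores = np.hstack([good, bad])

print("alpha, all 4 items:     ", round(cronbach_alpha(scores), 2))
print("alpha, noisy item dropped:", round(cronbach_alpha(scores[:, :3]), 2))
```

With this setup, dropping the weakly correlated item raises alpha noticeably, which mirrors the inspection-and-recalculation strategy described above.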
4. Examine score variance of your measure
Score variance may also affect the reliability of your measure. Note that as variance in the measurement of your construct increases, so does the reliability of its scores (Ponterotto & Ruckdeschel, 2007). More precisely, Nunnally (1978) demonstrated that the magnitude of the reliability coefficient is associated with the standard deviation (SD) of the observed scores on your measure: the larger the standard deviation, the larger Cronbach's alpha will be for that sample.
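This range-restriction effect can be simulated directly: computing alpha on a full sample and then on a subsample restricted to high total scores (as a post-test ceiling effect might produce) shows alpha shrinking along with the SD. A sketch under assumed simulated data, reusing the raw-alpha formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Raw Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
n = 1000
trait = rng.normal(size=n)
scores = trait[:, None] + rng.normal(size=(n, 6))  # 6 items: trait + noise

totals = scores.sum(axis=1)
# Restrict the range: keep only the top quartile of total scores.
restricted = scores[totals > np.quantile(totals, 0.75)]

print("SD of totals, full:      ", round(totals.std(ddof=1), 2))
print("SD of totals, restricted:", round(restricted.sum(axis=1).std(ddof=1), 2))
print("alpha, full sample:      ", round(cronbach_alpha(scores), 2))
print("alpha, restricted sample:", round(cronbach_alpha(restricted), 2))
```

If post-test scores cluster near the ceiling, their reduced spread alone can push alpha below the pre-test value, which may be relevant to the question asked above.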
5. Inspect original psychometric properties of your scale
Another possibility is that your original measure has poor psychometric properties and questionable validity. In this case, you might expect lower levels of reliability, along with other poor metrics for assessing the psychometric validity of your scale (e.g., poor Confirmatory Factor Analysis model fit, low factor loadings, low average variance extracted).
References
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Ponterotto, J. G., & Ruckdeschel, D. E. (2007). An overview of coefficient alpha and a reliability matrix for estimating adequacy of internal consistency coefficients with psychological research measures. Perceptual and Motor Skills, 105(3), 997–1014.
Widaman, K. F. (1993). Common factor analysis versus principal component analysis: Differential bias in representing model parameters? Multivariate Behavioral Research, 28(3), 263–311.