I am aware that there are already some useful threads on free software for calculating sample sizes (see here). However, I couldn't find anything specific to cross-classified multilevel models (MLMs) with repeated measures.
I do psychophysiological research; that is, I look at changes in physiological variables over time to infer psychological processes. The most common design in my experiments looks something like this (I use lme4):
PhysioVar ~ Condition * Time + (1|Participant) + (1|Stimuli)
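To make the design concrete, here is a minimal sketch in R of the kind of data I mean, fit with the model above. All sample sizes, effect sizes, and variance components are made up purely for illustration; none of these numbers come from my data.

# minimal sketch: crossed Participant x Stimuli design with repeated measures over Time
library(lme4)

set.seed(1)
n_part <- 30    # participants
n_stim <- 8     # stimuli (e.g., videos), fully crossed with participants
n_time <- 100   # repeated measures within each trial (e.g., one sample per second)

d <- expand.grid(
  Participant = factor(seq_len(n_part)),
  Stimuli     = factor(seq_len(n_stim)),
  Time        = seq_len(n_time)
)
# Condition manipulated between stimuli (half the stimuli in each condition)
d$Condition <- ifelse(as.integer(d$Stimuli) <= n_stim / 2, 0, 1)

# Assumed generative model: small fixed effects, crossed random intercepts,
# independent residual noise (ignoring autocorrelation, which is itself a simplification)
d$PhysioVar <- 0.2 * d$Condition + 0.001 * d$Time + 0.002 * d$Condition * d$Time +
  rnorm(n_part, sd = 0.5)[d$Participant] +
  rnorm(n_stim, sd = 0.3)[d$Stimuli] +
  rnorm(nrow(d), sd = 1)

fit <- lmer(PhysioVar ~ Condition * Time + (1 | Participant) + (1 | Stimuli), data = d)
summary(fit)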
Journal reviewers always ask for an a priori power analysis to justify the sample size in this type of experiment. So far I have gotten away without doing one by arguing that it's complicated, and that the standard tools researchers typically use (e.g., G*Power) are not well suited to a model this complex, where the Time variable can have more than 100 levels (e.g., a video lasting more than 100 seconds).
It should be noted that physiological variables don't typically show huge effect sizes, because the main purpose of the peripheral nervous system (e.g., the processes that determine heart rate, skin conductance...) is not to tell us what we are thinking but to maintain homeostasis and keep the body alive. Also, having 100 seconds of a physiological measure shouldn't be treated as providing as much power as having 100 different stimuli or participants, since successive measurements within a trial are far from independent.
Resources I already know:
- I'm familiar with Westfall and Judd's online power analysis tool for MLM. It's great because it doesn't require programming skills, but it's quite limited because (1) it doesn't include repeated measures, and (2) it seems to work only if the variable of interest is also included as a random slope, which we don't always do, so it amounts to quite a conservative analysis (see the formula sketch after this list).
- I'm also familiar with Blair et al.'s DeclareDesign online tool, but again it does not account for repeated measures.
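For concreteness, the random-slope point above refers to the difference between these two specifications (assuming Condition varies within participants; whether a given slope is even estimable depends on the design, and "seems to assume" is my reading of the tool, not something its authors state in these terms):

# what the tool seems to assume
PhysioVar ~ Condition * Time + (Condition | Participant) + (1 | Stimuli)
# what I typically end up fitting
PhysioVar ~ Condition * Time + (1 | Participant) + (1 | Stimuli)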
Does anyone know the best way to go about this?