
Scenario (very short version of the study): I am testing whether a vitamin D supplement helps with depression.

First, I have two groups, formed by random assignment:

Group A (treatment group): fills out a depression questionnaire (Likert scale), takes vitamin D gummies, and after 6 months takes the same depression questionnaire again.

Significance test: a matched-pairs t-test on Group A. The quantity tested is the mean difference in questionnaire scores (post minus pre), which I hope is significant.

Group B (control group): fills out the depression questionnaire, takes placebo gummies, and after 6 months takes the questionnaire again.

Significance test: a matched-pairs t-test on Group B. The quantity tested is the mean difference in questionnaire scores (post minus pre), which I hope is not significant.

Then perform a t-test between Groups A and B for significance.

My question: aside from finding the difference of means for Group A and Group B for my overall t-test, do I need to do an individual significance test for each group, or is that the same thing as my t-test?
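For concreteness, here is a minimal sketch of the three tests described above, using simulated data (all sample sizes, means, and effect sizes are made up for illustration; the group names follow the question). The between-group test compares the *change scores* (post minus pre), one value per participant, with an independent two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # hypothetical participants per group

# Simulated pre/post depression scores (hypothetical numbers)
pre_a = rng.normal(20, 4, n)
post_a = pre_a - rng.normal(3.0, 2, n)   # assumed treatment effect
pre_b = rng.normal(20, 4, n)
post_b = pre_b - rng.normal(0.5, 2, n)   # assumed placebo drift

# The two per-group matched-pairs t-tests from the question
t_a, p_a = stats.ttest_rel(post_a, pre_a)
t_b, p_b = stats.ttest_rel(post_b, pre_b)

# The between-group test: change scores compared across groups.
# This single test addresses the actual research question.
change_a = post_a - pre_a
change_b = post_b - pre_b
t_ab, p_ab = stats.ttest_ind(change_a, change_b)

print(f"Group A paired test:  p = {p_a:.4f}")
print(f"Group B paired test:  p = {p_b:.4f}")
print(f"Between-group test:   p = {p_ab:.4f}")
```

Note that the between-group test on change scores is not the same thing as the two paired tests: it directly compares the groups, whereas the paired tests only ask whether each group changed from its own baseline.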

  • If your hypothesis is that the change in score will be different between groups A and B then your first two tests are irrelevant. – mdewey Dec 10 '23 at 16:18
  • There are lots of questions on Cross Validated on the topic of pre-post designs. A good place to start might be Best practice when analysing pre-post treatment-control designs. Also take a look at the [tag:pre-post-comparison] tag. (Note: I edited the question to add this tag). – dipetkov Dec 10 '23 at 19:22
  • Given that you have a paired design, you can consider the change in survey scores as the datum for each participant. Then you have one set of data for the treatment group and one for the control group that you can then compare. That is a much more reliable type of analysis than the difference in significance arrangement that you propose. – Michael Lew Dec 10 '23 at 19:46

0 Answers