
I have a dataset that includes a variable called SoundDirection. The variable has 6 levels, and each one is scored on a Likert scale from 1 to 5.

In the experiment, every participant listened to 40 stimuli. Each stimulus presented a different type of SoundDirection, and for each stimulus the participant gave a score from 1 to 5.

So, for each stimulus, the database contains the following information:

SoundDirection = [one number between 0 and 5] // type of stimulus presented to the listener
Score = [one number between 1 and 5] // perceived strength of the presented stimulus

For each participant, 40 stimuli are presented, yielding a database like the following (as an example):

Participant(n) = [[SoundDirection, Score], ...]

Participant(0) = [ [0,4] [4,2] [1,4] [5,3] [2,3] ... ]

Participant(1) = [ [3,4] [1,4] [2,1] [2,2] [5,1] ... ]

...
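For reference, here is a minimal simulated sketch of the long-format data layout I am assuming (the column names participant, SoundDirection and score are chosen to match the model calls below; the values are random placeholders, not my actual data):

set.seed(1)
n_participants <- 20
n_stimuli <- 40

datasheet <- data.frame(
  participant    = factor(rep(seq_len(n_participants), each = n_stimuli)),
  SoundDirection = factor(sample(0:5, n_participants * n_stimuli, replace = TRUE)),
  score          = sample(1:5, n_participants * n_stimuli, replace = TRUE)   # Likert response
)

head(datasheet)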

The 6 levels of the variable SoundDirection can also be grouped into a variable called SoundTypology, with the following 3 levels:

Sound(0) = SoundDirection(0)+SoundDirection(1)
Sound(1) = SoundDirection(2)+SoundDirection(3)
Sound(2) = SoundDirection(4)+SoundDirection(5)

So, the database above could be represented as the following:

Sound = [one number between 0 and 2] // whenever SoundDirection has value 0 or 1, Sound has value 0; when SoundDirection has value 2 or 3, Sound has value 1; when SoundDirection has value 4 or 5, Sound has value 2
Score = [one number between 1 and 5] // perceived strength of the presented stimulus    

The database above would then transform into the following dataset:

Participant(n) = [[Sound, Score], ...] // the //n comments show the original SoundDirection value

Participant(0) = [ [0,4] //0 [2,2] //4 [0,4] //1 [2,3] //5 [1,3] //2 ... ]

Participant(1) = [ [1,4] //3 [0,4] //1 [1,1] //2 [1,2] //2 [2,1] //5 ... ]

...
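In R, this grouping can be obtained by recoding SoundDirection into a new factor, for example (a minimal sketch, assuming the simulated datasheet above):

# Derive Sound from SoundDirection and keep it as a factor,
# so it can be used as a categorical predictor
datasheet$Sound <- factor(as.integer(as.character(datasheet$SoundDirection)) %/% 2)

# Check the mapping: 0/1 -> 0, 2/3 -> 1, 4/5 -> 2
table(datasheet$SoundDirection, datasheet$Sound)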

I would like to test the main effect of both variables in my model, and run a post-hoc pairwise analysis for both cases. I am unsure whether the best strategy would be to create two separate models and analyses, like the following:

model = lmer(score ~ SoundDirection + (1|participant), data = datasheet)
analysis = lsmeans(model, pairwise ~ SoundDirection)

model = lmer(score ~ Sound + (1|participant), data = datasheet)
analysis = lsmeans(model, pairwise ~ Sound)

The above strategy already yields significant differences in both analyses, and the two analyses provide complementary insights. However, I am not sure whether such an approach would be accepted by reviewers of an article, and I am wondering whether the better strategy would be to create a single model with two post-hoc analyses, as follows:

model = lmer(score ~ SoundDirection + (1|participant), data = datasheet)
analysis_1 = lsmeans(model, pairwise ~ SoundDirection)
analysis_2 = lsmeans(model, pairwise ~ SoundDirection[sub-grouping])

Specifically, in the second case I am stuck, as I don't know which function in R would allow me to tell the pairwise comparison to consider sub-groups of SoundDirection -> "pairwise ~ SoundDirection[sub-groups]".

Can anybody help me with both questions?

  • Can you provide some example SoundDirection and Sound? I'm not sure what the description "The variable contains 6 levels, each one scored in a Likert scale from 1 to 5." actually means. Especially since in your code snippet SoundDirection seems to be a single factor(?) variable. – dipetkov May 14 '22 at 11:03
  • Hi @dipetkov, I have tried to provide some insight on the database structure. Does the above clarify the type of data at disposal? – TakeMeToTheMoon May 15 '22 at 11:47
  • Thanks for adding examples, they are helpful. Could you also show the Sound variable for participants 1 and 2? – dipetkov May 15 '22 at 11:55
  • @dipetkov, I tried to be as clear as possible. Hope that can be understandable – TakeMeToTheMoon May 15 '22 at 12:12
  • Your first question is really about what's best practice in your field of study. On the second question, it's possible to add a grouping but only if the variable is a factor. That is, factor levels can be nested. Whether this makes sense depends on whether it is meaningful to treat SoundDirection as a factor with 6 levels. – dipetkov May 15 '22 at 12:32
  • @dipetkov, all the variables have been included by using the "factor()" function, if that is what you meant. Just to know, would you be able to tell me how to add that kind of grouping in R, so at least I try or find some relevant material online to study it? – TakeMeToTheMoon May 15 '22 at 12:41
  • Yes, I show how to do factor grouping using the emmeans package in this answer. Caveat: It's for a different type of regression model. – dipetkov May 15 '22 at 12:47
  • Use the function add_grouping in the emmeans package. You have to apply it to just the first part of analysis (not to the pairwise comparison part) – Russ Lenth May 15 '22 at 12:50
  • @RussLenth, is it possible that my add_grouping results into the following error? Error in add_grouping(model, Sound, SoundDirection, c("1", "0", "2", : no slot of name "model.info" for this object of class "lmerModLmerTest" – TakeMeToTheMoon May 15 '22 at 13:54
  • You apply it to an emmeans() result, not to the model object. – Russ Lenth May 15 '22 at 13:58
  • I had the same problem with the emmeans() result, that gives: "Error in add_grouping(transitions.analysis, Sound, SoundDirection, : trying to get slot "model.info" from an object (class "emm_list") that is not an S4 object". I tried @dipetkov's ref_grid(), which works with the "model" but not with the "emmeans() result" – TakeMeToTheMoon May 15 '22 at 14:01
  • Copy pasting error messages in comments is hardly helpful for debugging your code. What you'd need to do is to provide a minimal reproducible example as explained here: https://stackoverflow.com/help/mcve. This includes data (either your actual data or simulated data). In R this is particularly easy with the reprex addin. – dipetkov May 15 '22 at 14:15
  • Thank you @dipetkov. Then I will just add another CrossValidated question, so to separate different topics – TakeMeToTheMoon May 15 '22 at 14:19
  • Forgot to mention -- questions purely about programming & debugging are better suited to Stack Overflow and usually get closed on Cross Validated. – dipetkov May 15 '22 at 14:22
  • Try it with analysis[[1]] – Russ Lenth May 15 '22 at 15:39
  • @RussLenth, that worked! Thank you! – TakeMeToTheMoon May 15 '22 at 16:35
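Putting the comments together, this is a minimal sketch of the workflow as I understand it (using emmeans, which supersedes lsmeans; the order of newlevs assumes the SoundDirection factor levels are 0 to 5 in their natural order, as in the simulated data above):

library(lme4)     # lmer()
library(emmeans)  # emmeans(), add_grouping()

model <- lmer(score ~ SoundDirection + (1 | participant), data = datasheet)

# Pairwise comparisons at the SoundDirection level
analysis_1 <- emmeans(model, pairwise ~ SoundDirection)

# Define the Sound grouping on top of the SoundDirection means.
# add_grouping() is applied to the emmeans part of the result (analysis_1[[1]]),
# not to the pairwise contrasts; newlevs maps each SoundDirection level
# (0,1,2,3,4,5) to its Sound group (0,0,1,1,2,2).
grouped <- add_grouping(analysis_1[[1]], newname = "Sound",
                        refname = "SoundDirection",
                        newlevs = c("0", "0", "1", "1", "2", "2"))

# Pairwise comparisons at the Sound level, from the same model
analysis_2 <- emmeans(grouped, pairwise ~ Sound)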

0 Answers