
I apologize in advance if my question is incomprehensible, I am very new to this topic.

My data consist of 6 dependent variables (DV1–DV6) and 5 independent variables (IV1–IV5) of interest. For reasons that are not important here, I have to split the data into two groups, A and B. For each group, I conduct a multivariate multiple linear regression, which is, for the sake of simplicity, equivalent to 6 multiple regression models, one per dependent variable, each with all 5 predictors. My hypotheses relate to several dependent variables at once. For example:

H1: Predictor IV1 has a positive effect on the dependent variables DV1 – DV4.

H2: Predictor IV2 has no effect on the dependent variables DV1 – DV6.

Since this means I am conducting multiple tests, I reported BH-adjusted p-values following the procedure described by Benjamini, Heller, and Yekutieli (2009); I used `p.adjust(method = "BH")` from the R stats package to jointly adjust the p-values of every predictor in all models.
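For concreteness, here is a minimal sketch of that joint adjustment, with made-up p-values (in my real analysis the pooled vector has 30 entries, one per coefficient):

```r
# Hypothetical p-values pooled from all 5 predictors across all 6 models
p_raw <- c(0.001, 0.004, 0.020, 0.030, 0.250, 0.700)
# Benjamini-Hochberg step-up adjustment over the whole family at once
p_adj <- p.adjust(p_raw, method = "BH")
round(p_adj, 3)
```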

A reviewer has understandably noted that H2 predicts a null result and that I should use 90% confidence intervals to argue against a meaningful effect (as suggested by Rainey, 2014, or Weber & Popova, 2012).
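For reference, a two-sided 90% CI for a single coefficient (equivalent to two one-sided 5% tests, which is the logic behind the reviewer's suggestion) can be pulled with `confint()`; the example below uses simulated data only to show the call:

```r
# Simulated stand-in data; in the real analysis, dat would be one group's data
set.seed(1)
dat <- data.frame(DV1 = rnorm(100), IV1 = rnorm(100), IV2 = rnorm(100))
fit <- lm(DV1 ~ IV1 + IV2, data = dat)
# Unadjusted two-sided 90% CI for the IV2 coefficient
ci90 <- confint(fit, "IV2", level = 0.90)
ci90
```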

My problem now is that I am unsure whether I can report unadjusted 90% CIs together with adjusted p-values. I have found ways in R to correct CIs for the familywise error rate (e.g., the confint_adjust() function in the api2lm package), but not for the FDR. Benjamini & Yekutieli (2005) propose a procedure for false discovery rate-adjusted multiple confidence intervals (applied in Rosenblatt & Benjamini, 2014). If I understand correctly, however, they argue that this correction is only necessary when confidence intervals are constructed for a selection of parameters (e.g., only for those whose p-values fall below the specified threshold); if CIs are constructed for all parameters, the false coverage rate is already controlled.
I am now unsure what this means for my study. In my specific case (based on the adjusted p-values), predictor IV2 has no effect on 5 of the 6 dependent variables. If I now construct CIs only for these 5 DVs, this is probably a selection; then I would have to construct the CIs according to the FCR formula of Rosenblatt and Benjamini (2014, p. 403), i.e., marginal CIs at level 1 − Rq/m, where R is the number of selected parameters, m the total number of parameters, and q the desired FCR level.
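If I read the FCR procedure correctly, implementing it in R would amount to nothing more than widening the marginal confidence level before calling confint(); the counts below are my hypothetical numbers:

```r
# FCR-adjusted marginal CIs (Benjamini & Yekutieli, 2005):
# for R selected parameters out of m, use marginal CIs at level 1 - R*q/m
m <- 30        # all coefficients tested jointly: 5 predictors x 6 DVs
R <- 5         # selected parameters, e.g., the 5 "null" IV2 coefficients
q <- 0.10      # FCR level chosen to match the reviewer's 90% CIs
level_fcr <- 1 - R * q / m
level_fcr
# then, per selected coefficient: confint(fit, "IV2", level = level_fcr)
```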

However, if I were to construct confidence intervals for all 6 DVs for predictor IV2: is that still a selection, because IV2 is only one of 5 predictors in total, or is that irrelevant because only this predictor matters for the hypothesis in question? In the latter case, regular confidence intervals would suffice according to the studies cited above. But isn't it odd to report unadjusted CIs alongside BH-adjusted p-values?

Similar problems are discussed here, here, and here, but I am not sure how to transfer the answers to my analysis.

References:

Benjamini, Y., & Yekutieli, D. (2005). False Discovery Rate–Adjusted Multiple Confidence Intervals for Selected Parameters. Journal of the American Statistical Association. https://doi.org/10.1198/016214504000001907

Benjamini, Y., Heller, R., & Yekutieli, D. (2009). Selective inference in complex research. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906), 4255–4271. https://doi.org/10.1098/rsta.2009.0127

Rainey, C. (2014). Arguing for a Negligible Effect. American Journal of Political Science, 58(4), 1083–1091.

Rosenblatt, J. D., & Benjamini, Y. (2014). Selective correlations; not voodoo. NeuroImage, 103, 401–410. https://doi.org/10.1016/j.neuroimage.2014.08.023

Weber, R., & Popova, L. (2012). Testing Equivalence in Communication Research: Theory and Application. Communication Methods and Measures, 6(3), 190–213. https://doi.org/10.1080/19312458.2012.703834
