I have the following regression output from a model that includes both quadratic and cubic interaction terms. I calculated the simple slopes using the simple_slopes function from the reghelper package.
x1 <- rnorm(100)
x2 <- rnorm(100)
x3 <- rnorm(100)
x4 <- rnorm(100)
y <- x1 + x2 + x2^2 + x1*x2 + x3 + x4 + rnorm(100)
fit <- lm(y ~ x1 + x2 + I(x2^2) + I(x2^3) + x1:x2 + x1:I(x2^2) +
x1:I(x2^3) + x3 + x4)
summary(fit)
library(reghelper)
test2 <- simple_slopes(fit, levels = list(x2 = c(0, 1, 2, 3)))
test2$L95 <- test2$`Test Estimate` - 1.96 * test2$`Std. Error`
test2$U95 <- test2$`Test Estimate` + 1.96 * test2$`Std. Error`
test2
x1 x2 Test Estimate Std. Error t value df Pr(>|t|) L95 U95 Sig.
1 sstest 0 1.1733 0.1642 7.1437 92 2.085e-10 0.8514158 1.495274 ***
2 sstest 1 2.0379 0.1895 10.7562 92 < 2.2e-16 1.6665505 2.409244 ***
3 sstest 2 2.7164 0.8014 3.3896 92 0.001033 1.1456684 4.287227 **
4 sstest 3 3.2395 2.6804 1.2086 92 0.229911 -2.0140209 8.493070
I understand that a simple slope tells us whether the conditional effect of a predictor at a specific value of the moderator is statistically different from zero.
My question is: the confidence intervals (L95, U95) of the simple slopes of x1 at x2 = 0 and x2 = 1 do not overlap. Can I conclude that there is a statistically significant difference between these two simple slopes? In other words, can I say that the simple slopes of x1 at x2 = 0 and x2 = 1 are significantly different from each other?
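To help frame the question: my understanding is that the difference between two simple slopes of x1 is itself a linear combination of the interaction coefficients (at x2 = v the slope of x1 is b_x1 + b_{x1:x2}·v + b_{x1:x2²}·v² + b_{x1:x2³}·v³, so the difference between v = 1 and v = 0 is b_{x1:x2} + b_{x1:x2²} + b_{x1:x2³}). Here is a sketch of how I think that contrast could be tested directly from the coefficient covariance matrix (the variable names like diff_est are mine; I am not sure this is the standard way to do it):

```r
set.seed(1)  # for a reproducible example
x1 <- rnorm(100); x2 <- rnorm(100); x3 <- rnorm(100); x4 <- rnorm(100)
y  <- x1 + x2 + x2^2 + x1*x2 + x3 + x4 + rnorm(100)
fit <- lm(y ~ x1 + x2 + I(x2^2) + I(x2^3) + x1:x2 + x1:I(x2^2) +
            x1:I(x2^3) + x3 + x4)

b <- coef(fit)
V <- vcov(fit)

# Contrast vector: difference between the simple slopes of x1 at
# x2 = 1 and x2 = 0 is b[x1:x2] + b[x1:I(x2^2)] + b[x1:I(x2^3)]
k <- setNames(numeric(length(b)), names(b))
k[c("x1:x2", "x1:I(x2^2)", "x1:I(x2^3)")] <- 1

diff_est <- sum(k * b)                        # estimated slope difference
diff_se  <- sqrt(drop(t(k) %*% V %*% k))      # its standard error
t_stat   <- diff_est / diff_se
p_val    <- 2 * pt(-abs(t_stat), df = df.residual(fit))
c(estimate = diff_est, se = diff_se, t = t_stat, p = p_val)
```

If this is right, the p-value here answers the pairwise question directly, rather than relying on whether the two confidence intervals overlap.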