There are many great answers on CrossValidated (e.g., HERE and HERE) explaining why and how regression coefficients are partial coefficients, i.e., how they control for / hold constant the "other predictors".
The answers cited above all say, though, that controlling for the "other predictors" does NOT mean holding/FIXING those predictors at any particular value.
But when I do a regression with $2$ centered predictors (time_c and parent_c), I clearly see that the partial coefficient of each one is obtained by FIXING the other predictor at 0 (its mean, since both are centered).
Question: So, can somebody help me resolve this apparent conflict?
library(tidyverse)
library(interactions)
data <- read.csv('https://raw.githubusercontent.com/rnorouzian/e/master/math.csv')
data <- mutate(data, time_c = time_hw - mean(time_hw), parent_c = pare_inv - mean(pare_inv))
m4 <- lm(math ~ time_c*parent_c, data = data)
summary(m4) ## partial coef. of time_c is 0.97 (supposedly holding parent_c
# constant, but NOT at a particular value)
                 Est. S.E. t val.    p
(Intercept)     25.89 0.65  40.04 0.00
time_c           0.97 0.14   7.12 0.00
parent_c         2.62 0.64   4.08 0.00
time_c:parent_c -0.47 0.14  -3.48 0.00
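A one-line derivation shows why this is expected when the model contains an interaction. Writing the fitted model as $\hat{y} = b_0 + b_1\,\text{time\_c} + b_2\,\text{parent\_c} + b_3\,(\text{time\_c} \times \text{parent\_c})$, the slope of time_c is

$$\frac{\partial \hat{y}}{\partial\, \text{time\_c}} = b_1 + b_3\,\text{parent\_c},$$

so $b_1$ is the slope of time_c only at the particular value $\text{parent\_c} = 0$.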
Check that the simple slope of time_c corresponds to the summary(m4) partial coefficient for time_c:
sim_slopes(m4, pred = time_c, modx = parent_c, modx.values = 0, johnson_neyman = FALSE)
SIMPLE SLOPES ANALYSIS
Slope of time_c when parent_c = 0.00:
Est. S.E. t val. p
0.97 0.14 7.12 0.00
BUT, as the simple-slope output demonstrates, the partial coefficient of time_c in the summary(m4) table earlier is in fact only obtained when parent_c is FIXED at 0.
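The same point can be checked by hand on simulated data (a minimal sketch; `x1`, `x2`, and the coefficients are placeholders, not the math.csv variables):

```r
# Simulate data with a known interaction and refit with lm().
set.seed(1)
x1 <- rnorm(200)
x2 <- rnorm(200)
y  <- 1 + 2*x1 + 3*x2 - 0.5*x1*x2 + rnorm(200)

fit <- lm(y ~ x1*x2)
b   <- coef(fit)

# Slope of x1 at any fixed x2 = p is b1 + b3*p:
slope_at <- function(p) unname(b["x1"] + b["x1:x2"] * p)

slope_at(0)  # identical to the 'x1' coefficient in summary(fit)
slope_at(1)  # a different slope one unit above the mean of x2
```

At `p = 0` the interaction term drops out exactly, so the hand-computed simple slope reproduces the reported "main effect" coefficient.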