Imagine that predictor A has a positive relationship with the dependent variable.
Can a predictor ever genuinely switch signs like in the example given?
The switch is genuine when the final sign is the actual true sign.
You seem to be asking for a case where some predictor A correlates positively with the dependent variable, but its 'true' sign is negative.
This situation can of course happen in any model with multiple variables, where the negative sign only emerges after the other variables are included.
An example arises with the model
$Y = -a X_1 + b X_2 + \epsilon$
where $X_2$ is positively correlated with $X_1$. In that case the true sign for $X_1$ is negative, but without $X_2$ in the model, $X_1$ partly takes over the role of $X_2$ and may be fitted with a positive parameter (if $b > a$ and the correlation is high).
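To see why, here is a sketch assuming both predictors are standardised to unit variance with correlation $\rho = \operatorname{cor}(X_1, X_2)$: the slope of the short regression of $Y$ on $X_1$ alone has expectation
$$\operatorname{E}\left[\hat\beta_1^{\text{short}}\right] = \frac{\operatorname{Cov}(X_1, Y)}{\operatorname{Var}(X_1)} = -a + b\rho,$$
which is positive whenever $b\rho > a$, i.e. when the correlation is high enough relative to the ratio $a/b$.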
R-code example:
set.seed(1)
n = 100

# create two correlated variables as features
X = MASS::mvrnorm(n,
                  mu    = c(0, 0),
                  Sigma = matrix(c(1,   0.9,
                                   0.9, 1), 2))
x1 = X[, 1]
x2 = X[, 2]

# create the dependent variable based on a negative parameter for x1, with added noise
y = -1 * x1 + 2 * x2 + rnorm(n)

# fit two models, one without and one with the feature x2
lm(y ~ 1 + x1)
lm(y ~ 1 + x1 + x2)
Output of the two lines above:

Coefficients:
(Intercept)           x1
    0.01794      0.80726

Coefficients:
(Intercept)           x1           x2
    0.02535     -0.86962      1.89127
This shows a clear change of sign for the 'x1' parameter estimate. The single-predictor estimate, 0.807, is close to the value $-a + b\rho = -1 + 2 \cdot 0.9 = 0.8$ expected from the expression above, and because we know the true model, we know that this sign change is genuine.
Are there ways to tell whether a sign flip is genuine or a symptom of multicollinearity?
The above situation shows a sign change that is genuine. I wonder about situations where a sign change, after adding extra predictors, would not be genuine.
The typical situation is that adding extra predictors makes the model more accurate and the estimates more 'genuine'.
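One practical (though not decisive) check, sketched below with the simulated x1, x2 and y from above: look at how strongly the predictors are correlated and whether the flipped coefficient in the fuller model has a confidence interval that comfortably excludes zero. A flip accompanied by a wide interval that straddles zero is more suspect as a multicollinearity artefact than one that is estimated precisely.

cor(x1, x2)                        # how strongly the two predictors are related
summary(lm(y ~ 1 + x1 + x2))       # standard errors of the estimates in the fuller model
confint(lm(y ~ 1 + x1 + x2))       # does the interval for x1 stay clearly below zero?
# car::vif(lm(y ~ 1 + x1 + x2))    # variance inflation factors, if the car package is available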
An interesting related case is the change in parameter estimates that can occur in regularised regression, as in the question Why under joint least squares direction is it possible for some coefficients to decrease in LARS regression?, which discusses decreases in the magnitudes of parameter estimates when regularisation is reduced. Extreme situations occur when parameter estimates decrease to such an extent that they change sign, which relates to the question Geometrical interpretation of why can't ridge regression shrink coefficients to 0?, where some of the parameter estimates cross zero for particular intermediate degrees of regularisation.
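To see such a zero crossing with the simulated data above, here is a minimal sketch that computes the ridge estimates directly from $(X^\top X + \lambda I)^{-1} X^\top y$ (ignoring the intercept, since the simulated variables have mean approximately zero) rather than using a dedicated package. In this setup the coefficient of x1 starts at its negative least-squares value, crosses zero at an intermediate amount of regularisation, and then shrinks towards zero from the positive side.

# ridge paths for the coefficients of x1 and x2, reusing X and y from above
lambdas = seq(0, 200, by = 1)
betas = sapply(lambdas, function(lambda) {
  solve(t(X) %*% X + lambda * diag(2), t(X) %*% y)
})

# plot both coefficient paths; the x1 path crosses the zero line
matplot(lambdas, t(betas), type = "l", lty = 1,
        xlab = "lambda", ylab = "ridge estimate")
abline(h = 0, lty = 2)
legend("topright", legend = c("x1", "x2"), col = 1:2, lty = 1)

# smallest lambda at which the x1 estimate is no longer negative
lambdas[min(which(betas[1, ] >= 0))]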
This makes it difficult to tell whether a sign change in a parameter estimate is genuine. In the complex situations where this might occur, it is often less of a problem, because such regressions are typically performed for prediction rather than for hypothesis testing or the analysis of theoretical models. On the other hand, problems with sign changes can arise very easily when there are model misspecifications, as in the situation here: A misspecification error with linear models that can complete reverse the direction of an effect, has this been described, has this a name?