In a talk I was listening to, a researcher described the following approach to multiple regression in the context of an observational study:
1. Subjectively assess all possible predictors and interactions for how well he thinks they will predict the outcome variable, and rank them accordingly.
2. Subjectively decide on a cutoff beyond which the predictors/interactions seem sufficiently unimportant that he doesn't want to include them.
Both steps 1 and 2 are done without data snooping, i.e., without running any analyses or looking at the data beyond the variable labels.
When asked why he takes this approach, he says that adding too many predictors/interactions makes it less likely that the higher-ranked ones (i.e., the ones he subjectively considers most important) will be statistically significant.
I'm somewhat interested in commentary on whether this is a sensible way of selecting predictors for a regression model, whether such a strong emphasis on statistical significance is wise, and so on.
However, what I'm most interested in is whether this researcher's claim is true: all else being equal, does including one more predictor or interaction make it less likely that the previously added predictors/interactions will be statistically significant? Why or why not?
You can assume, for the sake of argument, that whichever predictors are selected, there won't be any serious problem with multicollinearity.
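To make the claim concrete, here is a minimal simulation sketch (my own illustration, not anything the researcher described): with a fixed sample size, it fits an OLS model containing one real predictor plus `k_noise` independent noise predictors, and tracks how often the real predictor's coefficient comes out significant at the 0.05 level. The sample size, effect size, and `k_noise` values are arbitrary choices for illustration; it uses numpy and statsmodels.

```python
# Minimal simulation sketch (my illustration, not the researcher's method).
# One real predictor x1 plus k_noise independent noise predictors, so there is
# no multicollinearity by construction; requires numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps, beta = 40, 2000, 0.4  # small n so the cost of extra terms is visible

def significance_rate(k_noise):
    """Fraction of replications where x1's coefficient has p < 0.05."""
    hits = 0
    for _ in range(reps):
        x1 = rng.standard_normal(n)
        y = beta * x1 + rng.standard_normal(n)
        cols = [x1, rng.standard_normal((n, k_noise))] if k_noise else [x1]
        X = sm.add_constant(np.column_stack(cols))
        hits += sm.OLS(y, X).fit().pvalues[1] < 0.05  # index 1 = x1
    return hits / reps

for k in (0, 5, 15):
    print(f"k_noise={k:2d}: significance rate = {significance_rate(k):.3f}")
```

If the researcher's claim is right, the printed significance rate should drop as `k_noise` grows, even though the noise predictors are uncorrelated with everything by construction.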