In the case of multicollinearity, I wonder why:
- We typically talk about its absence as an assumption (that is, we assume no multicollinearity):
https://www.statology.org/multiple-linear-regression-assumptions/
https://www.linkedin.com/pulse/4-main-assumptions-multi-linear-regression-ritik-karir
- Sometimes we even use the word "test" for the Variance Inflation Factor (VIF):
https://kandadata.com/non-multicollinearity-test-in-multiple-linear-regression/
If we mean the absence of perfect multicollinearity, this can be checked directly in the sample (see the sketch after this paragraph), and if this "assumption" is not met we are forced to remove at least one predictor, or the software will do so for us. So I see it not as an assumption but as something we can verify. If, instead, we mean the problem of correlated predictors leading to correlated parameter estimates, I see that as something on a continuum, whereas an assumption describes a precise condition (for example, independence, equal variance or normality), any deviation from which violates the assumption (of course there can be robustness, so that results remain reliable even under some deviations, but that's another story).
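A minimal sketch of what I mean by "can be checked directly", using synthetic data (the variable names and numbers are hypothetical, not taken from any of the linked pages): perfect multicollinearity is a rank condition on the design matrix, so verifying it is a computation, not a hypothesis test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + x2                                    # exact linear combination of x1 and x2

X = np.column_stack([np.ones(n), x1, x2, x3])   # design matrix with intercept

# If the rank is smaller than the number of columns, X'X is singular and the
# OLS coefficients are not uniquely identified; at least one predictor has to
# be dropped (or the software drops it for us).
print(np.linalg.matrix_rank(X), X.shape[1])     # 3 vs 4 -> perfect collinearity
```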
In my view, tests involve p-values (in the frequentist case) or the posterior probability of the null hypothesis (in the Bayesian case). Also, tests use a sample to make inferences about an underlying population. The VIF performs neither a frequentist nor a Bayesian test: it simply describes the correlation structure among the predictors in your sample, without making any inference about the population. This is because the VIF flags an issue you may have when running a regression on your sample, regardless of what would happen if you observed the whole population.
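To make the "purely descriptive" point concrete, here is a sketch (again with hypothetical synthetic data) computing the VIF of each predictor as 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the other predictors in the sample. Nothing in this computation produces a p-value or refers to a population.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)         # correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF of column j: a pure description of the sample's correlation structure."""
    y = X[:, j]
    others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # other predictors + intercept
    beta, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ beta
    r2 = 1 - resid.var() / y.var()                    # in-sample R^2 of the auxiliary regression
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 2) for j in range(X.shape[1])])  # x1 and x2 inflated, x3 close to 1
```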