A friend recently told me about a technique to remove the effects of an unwanted feature $x$ from a response variable $y$. He mentioned an example from genetics. By regressing $y$ on $x$ (or on a polynomial in $x$ to capture higher-order effects in the data), the residuals represent the variation unexplained by the "nuisance" predictors. As a simple example, say you wanted to remove the effects of tire pressure and car age (and their interactions) on gas mileage performance. The residuals soak up all of the information that is left after explaining away tire pressure & car age. It's not hard to see how this kind of approach would be useful for reducing large datasets, like the ones you'd see in genetics, to key features.
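To make the gas mileage example concrete, here is a minimal sketch in Python of the residualization procedure as I understand it (the simulated data, coefficients, and variable names are made up purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Simulated data: mileage depends on the nuisance variables (tire
# pressure, car age, their interaction) plus a signal we want to keep.
n = 500
pressure = rng.uniform(28, 36, n)
age = rng.uniform(0, 15, n)
signal = rng.normal(0, 1, n)  # the variation we actually care about
mileage = (30 - 0.2 * age + 0.1 * pressure
           - 0.05 * pressure * age + 2 * signal
           + rng.normal(0, 0.5, n))

# Expand the nuisance predictors into polynomial + interaction terms.
nuisance = np.column_stack([pressure, age])
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(nuisance)

# Regress mileage on the nuisance terms; the residuals are the
# variation left over after "explaining away" pressure and age.
fit = LinearRegression().fit(X, mileage)
residuals = mileage - fit.predict(X)

# The residuals should still track the retained signal.
print(np.corrcoef(residuals, signal)[0, 1])
```

The residuals can then be carried forward as the "adjusted" response in downstream analyses, which is (as I understand it) how the genetics application would use them.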
This approach flips conventional wisdom about regression on its head. I was wondering: Is this a common practice? Are there any examples where it has been applied or, better yet, any theory on the subject? From what I can tell, it seems somewhat unique to genetic data (since noisiness in the observations makes fitting a full linear model difficult).